Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium


Jan 27th, Sun

Daniel C. Dennett on an attempt to understand the mind; autonomic neurons, culture and computational architecture


"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
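
The unit Dennett is describing can be stated exactly. A minimal sketch (the gate wiring is illustrative, not from the interview): binary inputs, each excitatory (+1) or inhibitory (-1), summed against a firing threshold; a handful of such units already gives you logic, which is the sense in which networks of them can compute anything computable.

```python
# A McCulloch-Pitts logical neuron, as described above: weighted binary
# inputs (excitatory +1, inhibitory -1) summed against a firing threshold.
# The gate wiring below is our illustration, not Dennett's.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

# XOR needs a small network of units: the step from single switches to
# circuits that can, in principle, compute anything computable.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND", AND(a, b), "OR", OR(a, b), "XOR", XOR(a, b))
```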

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now we know pretty well why it doesn’t work. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendant of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals, even more risky than the brains of chimpanzees, and this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

'Culture creates a whole new biosphere'

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there are opportunities that don’t exist for any other brain tissues in any other creatures, and it is the exploration of this space of cultural possibility that we need to understand in order to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at the Society for Neuroscience meeting in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s-eye view and establish what role, if any, there is for memes or something like memes, and what the other operating forces are. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and repair them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this; very little deliberate intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language if you don’t have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

"Moving Naturalism Forward" was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you have a question about the economic situation in Greece, for instance, you check it out with somebody whose opinion on that you think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so a lot of people, as their experts on matters of science, have their pastors. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public-eye people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda [pdf] and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: “Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect it? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

Daniel C. Dennett is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University, The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: "Natural selection is not gene-centric, nor is biology all about genes; our comprehending minds are a result of our fast-evolving culture. Words are memes that can be spoken and words are the best example of memes. Words have a genealogy and it’s easier to trace the evolution of a single word than the evolution of a language." (…)

I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate with each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes

Dec 11th, Tue

Researchers discover surprising complexities in the way the brain makes mental maps

Spatial location is closely connected to the formation of new memories. Until now, grid cells were thought to be part of a single unified map system. New findings from the Norwegian University of Science and Technology demonstrate that the grid system is in fact composed of a number of independent grid maps, each with unique properties. Each map displays a particular resolution (mesh size), and responds independently to changes in the environment. A system of several distinct grid maps (illustrated on left) can support a large number of unique combinatorial codes used to associate new memories formed with specific spatial information (illustrated on right).

Your brain has at least four different senses of location – and perhaps as many as 10. And each is different, according to new research from the Kavli Institute for Systems Neuroscience, at the Norwegian University of Science and Technology. (…)

The findings, published in the 6 December 2012 issue of Nature, show that rather than just a single sense of location, the brain has a number of “modules” dedicated to self-location. Each module contains its own internal GPS-like mapping system that keeps track of movement, and has other characteristics that also distinguish one from another.

"We have at least four senses of location," says Edvard Moser, director of the Kavli Institute. "Each has its own scale for representing the external environment, ranging from very fine to very coarse. The different modules react differently to changes in the environment. Some may scale the brain’s inner map to the surroundings, others do not. And they operate independently of each other in several ways."

This is also the first time that researchers have been able to show that a part of the brain that does not directly respond to sensory input, called the association cortex, is organized into modules. The research was conducted using rats. (…)

Technical breakthroughs

A rat’s brain is the size of a grape, while the area that keeps track of the sense of location and memory is comparable in size to a small grape seed. This tiny area holds millions of nerve cells.

A research team of six people worked for more than four years to acquire extensive electrophysiological measurements in this seed-sized region of the brain. New measurement techniques and a technical breakthrough made it possible for Hanne Stensola and her colleagues to measure the activity in as many as 186 grid cells of the same rat brain. A grid cell is a specialized cell named for its characteristic of creating hexagonal grids in the brain’s mental map of its surroundings.

"We knew that the ‘grid maps’ in this area of the brain had resolutions covering different scales, but we did not know how independent the scales were of each other," Stensola said. "We then discovered that the maps were organized in four to five modules with different scales, and that each of these modules reacted slightly differently to changes in their environment. This independence can be used by the brain to create new combinations - many combinations - which is a very useful tool for memory formation.

After analysing the activity of nearly 1000 grid cells, researchers were able to conclude that the brain has not just one way of making an internal map of its location, but several. Perhaps 10 different senses of location.

Perhaps 10 different senses of location

The entorhinal cortex is a part of the neocortex that represents space by way of brain cells that have GPS-like properties. Each cell describes the environment as a hexagonal grid mesh, earning them the name ‘grid cells’. The panels show a bird’s-eye view of a rat’s recorded movements (grey trace) in a 2.2x2.2 m box. Each panel shows the activity of one grid cell (blue dots) with a particular map resolution as the animal moved through the environment. Credit: Kavli Institute for Systems Neuroscience, NTNU

Institute director Moser says that while researchers are able to state with confidence that there are at least four different location modules, and have seen clear evidence of a fifth, there may be as many as 10 different modules.

He says, however, that researchers need to conduct more measurements before they will have covered the entire grid-cell area. “At this point we have measured less than half of the area,” he says.

Aside from the time and challenges involved in making these kinds of measurements, there is another good reason why researchers have not yet completed this task. The lower region of the sense of location area, the entorhinal cortex, has a resolution that is so coarse or large that it is virtually impossible to measure it.

"The thinking is that the coordinate points for some of these maps are as much as ten metres apart," explains Moser. "To measure this we would need to have a lab that is quite a lot larger and we would need time to test activity over the entire area. We work with rats, which run around while we make measurements from their brain. Just think how long it would take to record the activity in a rat if it was running back and forth exploring every nook and cranny of a football field. So you can see that we have some challenges here in scaling up our experiments."

New way to organize

Part of what makes the discovery of the grid modules so special is that it completely changes our understanding of how the brain physically organizes abstract functions. Previously, researchers have shown that brain cells in sensory systems that are directly adjacent to each other tend to have the same response pattern. This is how they have been able to create detailed maps of which parts of the sensory brain do what.

The new research shows that a modular organization is also found in the highest parts of the cortex, far away from areas devoted to senses or motor outputs. But these maps are different in the sense that they overlap or infiltrate each other. It is thus not possible to locate the different modules with a microscope, because the cells that work together are intermingled with other modules in the same area.

“The various components of the grid map are not organized side by side,” explains Moser. “The various components overlap. This is the first time a brain function has been shown to be organized in this way at separate scales. We have uncovered a new way for neural network function to be distributed.”

A map and a constant

The researchers were surprised, however, when they started calculating the difference between the scales. They may have discovered an ingenious mathematical coding system, along with a number, a constant. (Anyone who has read or seen “The Hitchhiker’s Guide to the Galaxy” may enjoy this.) The scale for each sense of location is actually 42% larger than the previous one.

“We may not be able to say with certainty that we have found a mathematical constant for the way the brain calculates the scales for each sense of location, but it’s very funny that we have to multiply each measurement by 1.42 to get the next one. That is approximately equal to the square root of the number two,” says Moser.
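
The arithmetic is quick to check; a sketch (the smallest spacing is an illustrative seed value, not a measurement):

```python
import math

# The quoted ratio between successive grid scales, next to sqrt(2).
print(1.42, math.sqrt(2))        # 1.42 vs 1.4142...

# Successive module scales then form a geometric series. The smallest
# spacing below is an illustrative seed, not a measured value.
scale_cm = 30.0
for module in range(5):
    print(f"module {module}: {scale_cm:.1f} cm")
    scale_cm *= 1.42
```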

Maps are genetically encoded

Moser thinks it is striking that the relationship between the various functional modules is so orderly. He believes this orderliness shows that the way the grid map is organized is genetically built in, and not primarily the result of experience and interaction with the environment.

So why has evolution equipped us with four or more senses of location?

Moser believes the ability to make a mental map of the environment arose very early in evolution. He explains that all species need to navigate, and that some types of memory may have arisen from brain systems that were actually developed for the brain’s sense of location.

“We see that the grid cells that are in each of the modules send signals to the same cells in the hippocampus, which is a very important component of memory,” explains Moser. “This is, in a way, the next step in the line of signals in the brain. In practice this means that the location cells send a different code into the hippocampus at the slightest change in the environment in the form of a new pattern of activity. So every tiny change results in a new combination of activity that can be used to encode a new memory, and, with input from the environment, becomes what we call memories.”

Researchers discover surprising complexities in the way the brain makes mental maps, Medical Xpress, Dec 5, 2012.

The article is a part of doctoral research conducted by Hanne and Tor Stensola, and has been funded through an Advanced Investigator Grant that Edvard Moser was awarded by the European Research Council (ERC).

See also:

☞ Hanne Stensola, Tor Stensola, Trygve Solstad, Kristian Frøland, May-Britt Moser & Edvard I. Moser, The entorhinal grid map is discretized, Nature, 5 Dec 2012.
Mind & brain tag on Lapidarium notes

Nov 18th, Sun

Human Brain Is Wired for Harmony


Musical score based on the neurological activity of a 31-year-old woman. Image: Lu et al./PLoS One

"Since the days of the ancient Greeks, scientists have wondered why the ear prefers harmony. Now, scientists suggest that the reason may go deeper than an aversion to the way clashing notes abrade auditory nerves; instead, it may lie in the very structure of the ear and brain, which are designed to respond to the elegantly spaced structure of a harmonious sound. (…) If the chord is harmonic, or “consonant,” the notes are spaced neatly enough so that the individual fibers of the auditory nerve carry specific frequencies to the brain. By perceiving both the parts and the harmonious whole, the brain responds to what scientists call harmonicity. (…)

“Beating is the textbook explanation for why people don’t like dissonance, so our study is the first real evidence that goes against this assumption. (…) It suggests that consonance rests on the perception of harmonicity, and that, when questioning the innate nature of these preferences, one should study harmonicity and not beating.” (…)

“Sensitivity to harmonicity is important in everyday life, not just in music,” he notes. “For example, the ability to detect harmonic components of sound allows people to identify different vowel sounds, and to concentrate on one conversation in a noisy crowd.”
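
One way to make "harmonicity" concrete, as a toy computation rather than the cited studies' actual method: take the idealized partials of two simultaneous tones and ask how well a single fundamental explains them all. A consonant interval fits one harmonic series almost perfectly; a dissonant one does not.

```python
# Toy harmonicity score -- not the cited studies' method. It asks: how well
# can one fundamental f0 explain all the partials of two simultaneous tones?
import numpy as np

def partials(f, n=6):
    """First n harmonics of a tone at frequency f (idealized spectrum)."""
    return [f * k for k in range(1, n + 1)]

def harmonicity(freqs, f0_candidates=np.arange(50.0, 500.0, 0.5)):
    freqs = np.asarray(freqs)
    best = 0.0
    for f0 in f0_candidates:
        ratios = freqs / f0
        # fit = 1 when every component sits on an integer multiple of f0.
        fit = 1.0 - 2.0 * np.mean(np.abs(ratios - np.round(ratios)))
        best = max(best, fit)
    return best

fifth = partials(440.0) + partials(660.0)     # consonant interval (3:2)
second = partials(440.0) + partials(466.16)   # dissonant minor second
print("perfect fifth:", round(harmonicity(fifth), 3))
print("minor second:", round(harmonicity(second), 3))
```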

See also:

☞ M. Cousineau, J. H. McDermott, I. Peretz, The basis of musical consonance as revealed by congenital amusia, 2012.
☞ S. Leino, E. Brattico, M. Tervaniemi, P. Vuust, Representation of harmony rules in the human brain: Further evidence from event-related potentials, 2007.
☞ Brandon Keim, Listen: The Music of a Human Brain, Wired Science, Nov 15, 2012.

Aug 22nd, Wed

The Nature of Consciousness: How the Internet Could Learn to Feel


“The average human brain has a hundred billion neurons and synapses on the order of a hundred trillion or so. But it’s not just sheer numbers. It’s the incredibly complex and specific ways in which these things are wired up. That’s what makes it different from a gigantic sand dune, which might have a billion particles of sand, or from a galaxy. Our Milky Way, for example, contains a hundred billion suns, but the way these suns interact is very simple compared to the way neurons interact with each other. (…)

It doesn’t matter so much that you’re made out of neurons and bones and muscles. Obviously, if we lose neurons in a stroke or in a degenerative disease like Alzheimer’s, we lose consciousness. But in principle, what matters for consciousness is the fact that you have these incredibly complicated little machines, these little switching devices called nerve cells and synapses, and they’re wired together in amazingly complicated ways.

The Internet now already has a couple of billion nodes. Each node is a computer. Each one of these computers contains a couple of billion transistors, so it is in principle possible that the complexity of the Internet is such that it feels like something to be conscious. I mean, that’s what it would be if the Internet as a whole has consciousness. Depending on the exact state of the transistors in the Internet, it might feel sad one day and happy another day, or whatever the equivalent is in Internet space. (…)
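
Taking Koch's quoted figures at face value, the raw counts compare as follows (back-of-envelope only; as he stresses, it is the wiring, not the count, that matters):

```python
# Back-of-envelope comparison of the raw counts quoted above -- sheer
# numbers only; as Koch stresses, it is the wiring that matters.
internet_nodes = 2e9            # "a couple of billion" computers
transistors_per_node = 2e9      # "a couple of billion" transistors each
brain_synapses = 1e14           # "on the order of a hundred trillion"

internet_transistors = internet_nodes * transistors_per_node
print(f"Internet transistors: {internet_transistors:.1e}")      # 4.0e+18
print(f"Brain synapses:       {brain_synapses:.1e}")            # 1.0e+14
print(f"Ratio: {internet_transistors / brain_synapses:.0f}x")   # 40000x
```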

What I’m serious about is that the Internet, in principle, could have conscious states. Now, do these conscious states express happiness? Do they express pain? Pleasure? Anger? Red? Blue? That really depends on the exact kind of relationship between the transistors, the nodes, the computers. It’s more difficult to ascertain what exactly it feels. But there’s no question that in principle it could feel something. (…)

Q: Would humans recognize that certain parts of the Internet are conscious? Or is that beyond our understanding?

That’s an excellent question. If we had a theory of consciousness, we could analyze it and say yes, this entity, this simulacrum, is conscious. Or we might recognize it because it displays independent behavior. At some point, suddenly it develops some autonomous behavior that nobody programmed into it, right? Then, people would go, “Whoa! What just happened here?” It just sort of self-organized in some really weird way. It wasn’t a bug. It wasn’t a virus. It wasn’t a botnet that was paid for by some nefarious organization. It did it by itself. If this autonomous behavior happens on a regular basis, then I think many people would say, yeah, I guess it’s alive in some sense, and it may have conscious sensation. (…)

Q: How do you define consciousness?

Typically, it means having subjective states. You see something. You hear something. You’re aware of yourself. You’re angry. You’re sad. Those are all different conscious states. Now, that’s not a very precise definition. But if you think historically, almost every scientific field has a working definition and the definitions are subject to change. For example, my Caltech colleague Michael Brown has redefined planets. So Pluto is not a planet anymore, right? Because astronomers got together and decided that. And what’s a gene? A gene is very tricky to define. Over the last 50 years, people have had all sorts of changing definitions. Consciousness is not easy to define, but don’t worry too much about the definition. Otherwise, you get trapped in endless discussions about what exactly you mean. It’s much more important to have a working definition, run with it, do experiments, and then modify it as necessary. (…)

I see a universe that’s conducive to the formation of stable molecules and to life. And I do believe complexity is associated with consciousness. Therefore, we seem to live in a universe that’s particularly conducive to the emergence of consciousness. That’s why I call myself a “romantic reductionist.”

Christof Koch, American neuroscientist working on the neural basis of consciousness, Professor of Cognitive and Behavioral Biology at California Institute of Technology, The Nature of Consciousness: How the Internet Could Learn to Feel, The Atlantic, Aug 22, 2012. (Illustration: folkert: Noosphere)

See also:

Google and the Myceliation of Consciousness
Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe
Consciousness tag on Lapidarium

Aug 11th, Sat

Is there any redundancy in human memory?


Are there two physical copies of the same memory in the brain, such that if some cells storing a particular memory die, the memory is still not lost?

Yes. “Memories are stored using a ‘distributed representation,’ which means that each memory is stored across thousands of synapses and neurons. And each neuron or synapse is involved in thousands of memories.

So if a single neuron fails, there are still 999 (for example) other neurons collaborating in the representation of that memory. With the failure of each neuron, thousands of memories get imperceptibly weaker, a property called “graceful degradation.”
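
A minimal simulation makes graceful degradation concrete. This is a Hopfield-style toy with made-up sizes, not a model of biological memory: a few patterns are stored across hundreds of units by Hebbian learning, a growing fraction of units is silenced, and recall weakens smoothly instead of failing at a threshold.

```python
# Toy Hopfield-style demonstration of distributed storage and "graceful
# degradation" -- illustrative only, not a model of biological memory.
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 5                                   # 500 "neurons", 5 memories
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: every memory is spread across all N units' connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, W, steps=5):
    """Reconstruct a memory from a noisy cue by iterating the network."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s + 1e-9)
    return s

for frac_dead in [0.0, 0.2, 0.4, 0.6]:
    alive = rng.random(N) > frac_dead           # silence a random fraction
    Wd = W * np.outer(alive, alive)
    cue = patterns[0] * rng.choice([1, -1], N, p=[0.9, 0.1])  # noisy cue
    overlap = recall(cue, Wd) @ patterns[0] / N
    print(f"{frac_dead:.0%} of neurons dead -> recall overlap {overlap:+.2f}")
```

The overlap shrinks gradually as units die, which is the "many memories get imperceptibly weaker" behavior described above; recall is also a reconstruction from a cue rather than a lookup.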

Some people like to use the metaphor of a hologram. In a hologram, the 3D image is spread across the sheet of glass, and if the glass is broken, the full image can be seen in each separate shard. This is not exactly how memory works in the brain, but it is not a bad metaphor.

In some ways the brain is like a RAID array of disks, except instead of 3 hard disks, there are millions (or billions) of neurons sharing the representation of memories. (…)

Figure: Structural comparison of a RAID disk array and the type of hierarchical distributed memory network used by the brain.

Memory in the brain is resilient against catastrophic failure. Many memories get weaker with each neuron failure, but there is no point at which the failure of “one-too-many neurons” causes a memory to suddenly disappear. And the process of recalling a memory can strengthen it by recruiting more neurons and synapses to its representation. Also, memory in the brain is not perfect. Memory recall is more similar to reconstructing an earlier brain state than retrieving stored data. The recalled memory is never exactly the same as what was stored. And more typically, memories are not recalled but instead put to use. When you turn left at a familiar intersection, you are using knowledge, not recalling it.

One extreme example of the brain’s resiliency to catastrophic failure is stroke. A stroke is an event (often a burst blood vessel) that kills possibly hundreds of millions of brain cells within a very short time-frame (hours). Even the brain’s enormous redundancy cannot prevent memory loss under these circumstances. And yet the ability of stroke victims to recover language and skills shows that the brain can reorganize itself and somehow recover and quickly relearn knowledge that should have been destroyed.

In Alzheimer’s disease, brain cells die at an accelerating rate. At some point, the reduction in brain cells overtakes the memory redundancy of the brain and memories do become permanently lost.

There is still a lot that is not known about how exactly memories are organized and represented in the brain. Neuron-level mechanisms have been figured out, and quite a lot of information has been gathered about the brain region specialized for coding and managing memory storage (the hippocampus). But the exact structure and coding scheme of memories has yet to be determined.”

Related on Quora:
Are forgotten life events still visible in the brain and if so how?
Why is forgetting different for driving and calculus?
How is it that a chip of a hologram still holds the entire image?
Neuroscience: Are there any connections (axons) in the brain that are redundant?


Paul King, Computational Neuroscientist, currently a visiting scholar at the Redwood Center for Theoretical Neuroscience at UC Berkeley, Is there any redundancy in human memory?, Quora, Aug 2012. (Illustration source)

See also:

Memory tag on Lapidarium notes

Jul 21st, Sat

What Neuroscience Tells Us About Morality: 'Morality is a form of decision-making, and is based on emotions, not logic'

           

“Morality is not the product of a mythical pure reason divorced from natural selection and the neural wiring that motivates the animal to sociability. It emerges from the human brain and its responses to real human needs, desires, and social experience; it depends on innate emotional responses, on reward circuitry that allows pleasure and fear to be associated with certain conditions, on cortical networks, hormones and neuropeptides. Its cognitive underpinnings owe more to case-based reasoning than to conformity to rules.”

Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in John Bickle, The Oxford Handbook of Philosophy and Neuroscience, Chapter 16 "Inference to the best decision", Oxford Handbooks, 2009, p.419.

"Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind."

Patricia Smith Churchland, introductory message at her homepage at the University of California, San Diego.

"Morality is a form of decision-making, and is based on emotions, not logic."

Jonah Lehrer, cited in delancey place, 2009

"Philosophers must take account of neuroscience in their investigations.

While [Patricia S.] Churchland's intellectual opponents over the years have suggested that you can understand the “software” of thinking, independently of the “hardware”—the brain structure and neuronal firings—that produced it, she has responded that this metaphor doesn't work with the brain: Hardware and software are intertwined to such an extent that all philosophy must be “neurophilosophy.” There’s no other way.

Churchland, professor emerita of philosophy at the University of California at San Diego, has been best known for her work on the nature of consciousness. But now, with a new book, Braintrust: What Neuroscience Tells Us About Morality (Princeton University Press), she is taking her perspective into fresh terrain: ethics. And the story she tells about morality is, as you’d expect, heavily biological, emphasizing the role of the peptide oxytocin, as well as related neurochemicals.

Oxytocin’s primary purpose appears to be in solidifying the bond between mother and infant, but Churchland argues—drawing on the work of biologists—that there are significant spillover effects: Bonds of empathy lubricated by oxytocin expand to include, first, more distant kin and then other members of one’s in-group. (Another neurochemical, arginine vasopressin, plays a related role, as do endogenous opiates, which reinforce the appeal of cooperation by making it feel good.)

The biological picture contains other elements, of course, notably our large prefrontal cortexes, which help us to take stock of situations in ways that lower animals, driven by “fight or flight” impulses, cannot. But oxytocin and its cousin-compounds ground the human capacity for empathy. (When she learned of oxytocin’s power, Churchland writes in Braintrust, she thought: “This, perhaps, Hume might accept as the germ of ‘moral sentiment.’”)

From there, culture and society begin to make their presence felt, shaping larger moral systems: tit-for-tat retaliation helps keep freeloaders and abusers of empathic understanding in line. Adults pass along the rules for acceptable behavior—which is not to say “just” behavior, in any transcendent sense—to their children. Institutional structures arise to enforce norms among strangers within a culture, who can’t be expected to automatically trust each other.

These rules and institutions, crucially, will vary from place to place, and over time. “Some cultures accept infanticide for the disabled or unwanted,” she writes, without judgment. “Others consider it morally abhorrent; some consider a mouthful of the killed enemy’s flesh a requirement for a courageous warrior, others consider it barbaric.”

Hers is a bottom-up, biological story, but, in her telling, it also has implications for ethical theory. Morality turns out to be not a quest for overarching principles but rather a process and practice not very different from negotiating our way through day-to-day social life. Brain scans, she points out, show little to no difference between how the brain works when solving social problems and how it works when solving ethical dilemmas. (…)

Her account meshes, [Churchland] thinks, with Aristotle’s argument that morality is not about rule-making but instead about the cultivation of moral sentiment through experience, training, and the following of role models. The biological story also confirms, she thinks, David Hume’s assertion that reason and the emotions cannot be disentangled. This view stands in sharp contrast to those philosophers who argue that instinctual reactions must be scrutinized by reason. The villains of her books are philosophical system-builders—whether that means Jeremy Bentham, with his ideas about maximizing aggregate utility (“the greatest good for the greatest number”), or Immanuel Kant, with his categorical imperatives (never lie!), or John Rawls, erector of A Theory of Justice.

Churchland thinks the search for what she invariably calls “exceptionless rules” has deformed modern moral philosophy. “There have been a lot of interesting attempts, and interesting insights, but the target is like perpetual youth or a perpetual-motion machine. You’re not going to find an exceptionless rule,” she says. “What seems more likely is that there is a basic platform that people share and that things shape themselves based on that platform, and based on ecology, and on certain needs and certain traditions.”

The upshot of that approach? “Sometimes there isn’t an answer in the moral domain, and sometimes we have to agree to disagree, and come together and arrive at a good solution about what we will live with.”

Owen Flanagan Jr., a professor of philosophy and neurobiology at Duke University and a friend of Churchland’s, adds, “There’s a long tradition in philosophy that morality is based on rule-following, or on intuitions that only specially positioned people can have. One of her main points is that that is just a completely wrong picture of the genealogical or descriptive story. The first thing to do is to emphasize our continuity with the animals.” In fact, Churchland believes that primates and even some birds have a moral sense, as she defines it, because they, too, are social problem-solvers.

Recognizing our continuity with a specific species of animal was a turning point in her thinking about morality, in recognizing that it could be tied to the hard and fast. “It all changed when I learned about the prairie voles,” she says—surely not a phrase John Rawls ever uttered.

She told the story at the natural-history museum, in late March. Montane voles and prairie voles are so similar “that naifs like me can’t tell them apart,” she told a standing-room-only audience (younger and hipper than the museum’s usual patrons—the word “neuroscience” these days is like catnip). But prairie voles mate for life, and montane voles do not. Among prairie voles, the males not only share parenting duties, they will even lick and nurture pups that aren’t their own. By contrast, male montane voles do not actively parent even their own offspring. What accounts for the difference? Researchers have found that the prairie voles, the sociable ones, have greater numbers of oxytocin receptors in certain regions of the brain. (And prairie voles that have had their oxytocin receptors blocked will not pair-bond.)

"As a philosopher, I was stunned," Churchland said, archly. "I thought that monogamous pair-bonding was something one determined for oneself, with a high level of consideration and maybe some Kantian reasoning thrown in. It turns out it is mediated by biology in a very real way.”

The biologist Sue Carter, now at the University of Illinois at Chicago, did some of the seminal work on voles, but oxytocin research on humans is now extensive as well. In a study of subjects playing a lab-based cooperative game in which the greatest benefits to two players would come if the first (the “investor”) gave a significant amount of money to the second (the “trustee”), subjects who had oxytocin sprayed into their noses donated more than twice as often as a control group, giving nearly one-fifth more each time.
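To make the game’s structure concrete, here is a minimal sketch of a standard trust (investor/trustee) game payoff in Python. The endowment, multiplier, and return fraction are illustrative assumptions, not the parameters of the oxytocin study itself.

```python
# Toy payoff arithmetic for a standard trust game. All constants are
# illustrative assumptions, not the oxytocin study's actual parameters.

def trust_game(endowment=10.0, sent=8.0, multiplier=3.0, returned_frac=0.5):
    """Investor sends money; it grows in transit; the trustee returns a share."""
    pot = sent * multiplier            # what the trustee receives
    returned = pot * returned_frac     # what the trustee sends back
    investor = endowment - sent + returned
    trustee = pot - returned
    return investor, trustee

for sent in (0.0, 5.0, 10.0):
    inv, tru = trust_game(sent=sent)
    print(f"sent {sent:4.1f} -> investor {inv:5.1f}, trustee {tru:5.1f}")
```

Note how the joint payoff grows with the amount sent, which is why trust, and whatever chemistry supports it, pays off in this setting.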

Paul Zak, an economist at Claremont Graduate University, was an author of that study, as well as others that Churchland cites. He is working on a book called “The Moral Molecule” and describes himself as “in exactly the same camp” as Churchland.

“Oxytocin works on the level of emotion,” he says. “You just get the feeling of right and wrong. It is less precise than a Kantian system, but it’s consistent with our evolved physiology as social creatures.”

The City University of New York Graduate Center philosopher Jesse Prinz, who appeared with Churchland at a Columbia University event the night after her museum lecture, has mostly praise for Churchland’s latest offering. “If you look at a lot of the work that’s been done on scientific approaches to morality—books written for a lay audience—it’s been about evolutionary psychology. And what we get again and again is a story about the importance of evolved tendencies to be altruistic. That’s a report on a particular pattern of behavior, and an evolutionary story to explain the behavior. But it’s not an account of the underlying mechanism. The idea that science has moved to a point where we can see two animals working together toward a collective end and know the brain mechanism that allows that is an extraordinary achievement.”

Nevertheless, he says, how to move from the possibility of collective action to “the specific human institution of moral rules is a bit of connective tissue that she isn’t giving us.”

Indeed, that’s one of the most striking aspects of Braintrust. After Churchland establishes the existence of a platform for moral decision-making, she describes the process through which moral decisions come to be made, but she says little about their content—why one path might be better than another. She offers the following description of a typical “moral” scenario. A farmer sees a deer breaching his neighbor’s fence and eating his apples while the neighbor is away. The farmer will not consult a Kantian rule book before deciding whether to help, she writes, but instead will weigh an array of factors: Would I want my neighbor to help me? Does my culture find such assistance praiseworthy or condescending? Am I faced with any pressing emergencies on my own farm? Churchland describes this process of moral decision-making as being driven by “constraint satisfaction.”

"What exactly constraint satisfaction is in neurobiological terms we do not yet understand,” she writes, “but roughly speaking it involves various factors with various weights and probabilities interacting so as to produce a suitable solution to a question.”

"Various" factors with "various" weights? Is that not a little vague? But Duke’s Owen Flanagan Jr. defends this highly pragmatic view of morality. "Where we get a lot of pushback from philosophers is that they’ll say, ‘If you go this naturalistic route that Flanagan and Churchland go, then you make ethics merely a theory of prudence.’ And the answer is, Yeah, you kind of do that. Morality doesn’t become any different than deciding what kind of bridge to build across a river. The reason we both think it makes sense is that the other stories”—that morality comes from God, or from philosophical intuition—”are just so implausible.”

Flanagan also thinks Churchland’s approach leads to a “more democratic” morality. “It’s ordinary people discussing the best thing to do in a given situation, given all the best information available at the moment.” Churchland herself often underscores that democratic impulse, drawing on her own biography. She grew up on a farm, in the Okanagan Valley, in British Columbia. Speaking of her onetime neighbors, she says: “I got as much wisdom from some of those old farmers as I ever got from a seminar on moral philosophy.”

If building a bridge is the topic up for discussion, however, one can assume that most people think getting across the water is a sound idea. Yet mainstream philosophers object that such a sense of shared purpose cannot always be assumed in moral questions—and that therefore the analogy fails. (…)

Guy Kahane, an Oxford philosopher, says the complexity of human life demands a more intense and systematic analysis of moral questions than the average citizen might be capable of, at least if she’s limited to the basic tool kit of social skills.

Peter Railton, a philosophy professor at the University of Michigan at Ann Arbor, agrees. Our intuitions about how to get along with other people may have been shaped by our interactions within small groups (and between small groups). But we don’t live in small groups anymore, so we need some procedures through which we leverage our social skills into uncharted areas—and that is what the traditional academic philosophers, whom Churchland mostly rejects, work on. What are our obligations to future generations (concerning climate change, say)? What do we owe poor people on the other side of the globe (whom we might never have heard of, in our evolutionary past)?

For a more rudimentary example, consider that evolution quite likely trained us to treat “out groups” as our enemy. Philosophical argument, Railton says, can give reasons why members of the out-group are not, in fact, the malign and unusual creatures that we might instinctively think they are; we can thereby expand our circle of empathy.

Churchland’s response is that someone is indeed likely to have the insight that constant war against the out-group hurts both sides’ interests, but she thinks a politician, an economist, or a farmer-citizen is as likely to have that insight as a professional philosopher. (…)

But isn’t she, right there, sneaking in some moral principles that have nothing to do with oxytocin, namely the primacy of liberty over equality? In our interviews, she described Singer’s worldview as, in an important sense, unnatural. Applying the same standard to distant foreigners as we do to our own kith and kin runs counter to our most fundamental biological impulses.

But Oxford’s Kahane offers a counterargument: “‘Are humans capable of utilitarianism?’ is not a question that is answered by neuroscience,” he says. “We just need to test if people are able to live like that. Science may explain whether it is common for us to do, but that’s very different from saying what our limits are.”

Indeed, Peter Singer lives (more or less) the way he preaches, and chapters of an organization called Giving What We Can, whose members pledge to give a large portion of their earnings to charity, have popped up on several campuses. “If I can prevent hundreds of people from dying while still having the things that make life meaningful to me, that strikes me as a good idea that doesn’t go against ‘paradigmatically good sense’ or anything,” says Nick Beckstead, a fourth-year graduate student in philosophy and a founder of the group’s Rutgers chapter.

Another target in Churchland’s book is Jonathan Haidt, the University of Virginia psychologist who thinks he has identified several universal “foundations” of moral thought: protection of society’s vulnerable; fairness; loyalty to the in-group; respect for authority; and the importance of purity (a sanitary concern that evolves into the cultural ideal of sanctity). That strikes her as a nice list, but no more—a random collection of moral qualities that isn’t at all rooted in biology. During her museum talk, she described Haidt’s theory as a classic just-so story. “Maybe in the 70s, when evolutionary psychology was just becoming a thing, you could get away with saying”—here she adopted a flighty, sing-song voice—“‘It could have been, out there on the veldt, in Africa, 250,000 years ago, that these were traits that were selected,’” she said. “But today you need evidence, actually.” (…)

The element of cultural relativism also remains somewhat mysterious in Churchland’s writings on morality. In some ways, her project dovetails with that of Sam Harris, the “New Atheist” (and neuroscience Ph.D.) who believes reason and neuroscience can replace woolly armchair philosophy and religion as guides to morality. But her defense of some practices of primitive tribes—including infanticide (in the context of scarcity), as well as the seizing of enemy women, in raids, to keep up the stock of mates—as “moral” within their own context, seems the opposite of his approach.

I reminded Churchland, who has served on panels with Harris, that he likes to put academics on the spot by asking if they think a practice such as the early 19th-century Hindu tradition of burning widows on their husbands’ funeral pyres was objectively wrong.

So did she think so? First, she got irritated: “I don’t know why you’re asking that.” But, yes, she finally said, she does think that practice objectively wrong. “But frankly I don’t know enough about their values, and why they have that tradition, and I’m betting that Sam doesn’t either.”

"The example I like to use," she said, "rather than using an example from some other culture and just laughing at it, is the example from our own country, where it seems to me that the right to buy assault weapons really does not work for the well-being of most people. And I think that’s an objective matter."

At times, Churchland seems just to want to retreat from moral philosophical debate back to the pure science. “Really,” she said, “what I’m interested in is the biological platform. Then it’s an open question how we attack more complex problems of social life.”

— Christopher Shea writing about Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in Rule Breaker, The Chronicle of Higher Education, June 12, 2011. (Illustration: attributed to xkcd)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Sam Harris on the ‘selfish gene’ and moral behavior
Sam Harris on the moral formula: How facts inform our ethics
Morality tag on Lapidarium

Jan
14th
Sat
permalink

What are memories made of?


“There appears to be no single memory store, but instead a diverse taxonomy of memory systems, each with its own special circuitry evolved to package and retrieve that type of memory. Memories are not static entities; over time they shift and migrate between different territories of the brain.

At the top of the taxonomical tree, a split occurs between declarative and non-declarative memories. Declarative memories are those you can state as true or false, such as remembering whether you rode a bicycle to work. Non-declarative memories are those that cannot be described as true or false, such as knowing how to ride a bicycle. A central hub in the declarative memory system is a brain region called the hippocampus. This undulating, twisted structure gets its name from its resemblance to a sea horse. Destruction of the hippocampus, through injury, neurosurgery or the ravages of Alzheimer’s disease, can result in an amnesia so severe that no events experienced after the damage can be remembered. (…)

A popular view is that during sleep your hippocampus “broadcasts” its recently captured memories to the neocortex, which updates your long-term store of past experience and knowledge. Eventually the neocortex is sufficient to support recall without relying on the hippocampus. However, there is evidence that if you need to vividly picture a scene in your mind, this appears to require the hippocampus, no matter how old the memory. We have recently discovered that the hippocampus is not only needed to reimagine the past, but also to imagine the future.

Pattern completion

Studying patients has taught us where memories might be stored, but not what physically constitutes a memory. The answer lies in the multitude of tiny modifiable connections between neuronal cells, the information-processing units of the brain. These cells, with their wispy tree-like protrusions, hang like stars in miniature galaxies and pulse with electrical charge. Thus, your memories are patterns inscribed in the connections between the billions of neurons in your brain. Each memory has its unique pattern of activity, logged in the vast cellular network every time a memory is formed.

It is thought that during recall of past events the original activity pattern in the hippocampus is re-established via a process that is known as “pattern completion”. During this process, the initial activity of the cells is incoherent, but via repeated reactivation the activity pattern is pieced together until the original pattern is complete. Memory retention is helped by the presence of two important molecules in our brain: dopamine and acetylcholine. Both help the neurons improve their ability to lay down memories in their connections. Sometimes, however, the system fails, leaving us unable to bring elements of the past to mind.
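The article gives no formal model, but pattern completion is commonly illustrated with attractor networks such as a Hopfield network, in which a degraded cue is iteratively pulled back toward a stored pattern. A minimal sketch, with an invented random pattern standing in for a memory:

```python
import numpy as np

# Minimal Hopfield-style attractor network: a toy illustration of
# "pattern completion", not a model of actual hippocampal circuitry.

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=64)     # the stored "memory"
W = np.outer(pattern, pattern) / 64.0      # Hebbian connection weights
np.fill_diagonal(W, 0)                     # no self-connections

cue = pattern.copy()
cue[:24] = rng.choice([-1, 1], size=24)    # degrade part of the cue

state = cue
for _ in range(10):                        # repeated reactivation
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored pattern:", (state == pattern).mean())
```

From an incoherent initial state, repeated reactivation pieces the original pattern back together, which is the essence of what “pattern completion” names.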

Of all the things we need to remember, one of the most essential is where we are. Becoming lost is debilitating and potentially terrifying. Within the hippocampus, and neighbouring brain structures, neurons exist that allow us to map space and find our way through it. “Place cells” provide an internal map of space; “head-direction cells” signal the direction we are facing, similar to an internal compass; and “grid cells” chart out space in a manner akin to latitude and longitude.

For licensed London taxi drivers, it appears that navigating the labyrinth of London’s streets on a daily basis causes the density of grey matter in their posterior hippocampus to increase. Thus, the physical structure of your brain is malleable, depending on what you learn.

With impressive technical advances such as optogenetics, in which light beams excite or silence targeted groups of neurons, scientists are beginning to control memories at an unprecedented level.”

Hugo Spiers is a neuroscientist and lecturer at the Institute of Behavioural Neuroscience at University College London, What are memories made of?, The Guardian, Jan 14, 2012 (Illustration: Polly Becker)

How and why memories change

"Since the time of the ancient Greeks, people have imagined memories to be a stable form of information that persists reliably. The metaphors for this persistence have changed over time—Plato compared our recollections to impressions in a wax tablet, and the idea of a biological hard drive is popular today—but the basic model has not. Once a memory is formed, we assume that it will stay the same. This, in fact, is why we trust our recollections. They feel like indelible portraits of the past.

None of this is true. In the past decade, scientists have come to realize that our memories are not inert packets of data and they don’t remain constant. Even though every memory feels like an honest representation, that sense of authenticity is the biggest lie of all. (…)

New research is showing that every time we recall an event, the structure of that memory in the brain is altered in light of the present moment, warped by our current feelings and knowledge. (…)

This new model of memory isn’t just a theory—neuroscientists actually have a molecular explanation of how and why memories change. In fact, their definition of memory has broadened to encompass not only the cliché cinematic scenes from childhood but also the persisting mental loops of illnesses like PTSD and addiction—and even pain disorders like neuropathy. Unlike most brain research, the field of memory has actually developed simpler explanations. Whenever the brain wants to retain something, it relies on just a handful of chemicals. Even more startling, an equally small family of compounds could turn out to be a universal eraser of history, a pill that we could take whenever we wanted to forget anything. (…)

How memory is formed

Every memory begins as a changed set of connections among cells in the brain. If you happen to remember this moment—the content of this sentence—it’s because a network of neurons has been altered, woven more tightly together within a vast electrical fabric. This linkage is literal: For a memory to exist, these scattered cells must become more sensitive to the activity of the others, so that if one cell fires, the rest of the circuit lights up as well.

Scientists refer to this process as long-term potentiation, and it involves an intricate cascade of gene activations and protein synthesis that makes it easier for these neurons to pass along their electrical excitement. Sometimes this requires the addition of new receptors at the dendritic end of a neuron, or an increase in the release of the chemical neurotransmitters that nerve cells use to communicate. Neurons will actually sprout new ion channels along their length, allowing them to generate more voltage. Collectively this creation of long-term potentiation is called the consolidation phase, when the circuit of cells representing a memory is first linked together. Regardless of the molecular details, it’s clear that even minor memories require major work. The past has to be wired into your hardware. (…)
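As a cartoon of that wiring-in, here is a minimal Hebbian sketch in Python. The constants are invented; real long-term potentiation involves the receptor and ion-channel changes described above, not a single number.

```python
# Cartoon of long-term potentiation as a Hebbian weight change.
# All constants are invented for illustration.

weight = 0.2              # initial synaptic strength
learning_rate = 0.1

def stimulate(weight, pre, post):
    """Strengthen the synapse when pre- and postsynaptic activity coincide."""
    return weight + learning_rate * pre * post

print("response to input before:", 1.0 * weight)
for _ in range(5):        # repeated paired stimulation
    weight = stimulate(weight, pre=1.0, post=1.0)
print("response to input after: ", 1.0 * weight)
```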

What happens after a memory is formed, when we attempt to access it?

[In Karim Nader’s experiments, rats recalling a conditioned fear memory were given a drug that blocks protein synthesis.] The secret was the timing: If new proteins couldn’t be created during the act of remembering, then the original memory ceased to exist. The erasure was also exceedingly specific. (…) They forgot only what they’d been forced to remember while under the influence of the protein inhibitor.

The disappearance of the fear memory suggested that every time we think about the past we are delicately transforming its cellular representation in the brain, changing its underlying neural circuitry. It was a stunning discovery: Memories are not formed and then pristinely maintained, as neuroscientists thought; they are formed and then rebuilt every time they’re accessed. “The brain isn’t interested in having a perfect set of memories about the past,” LeDoux says. “Instead, memory comes with a natural updating mechanism, which is how we make sure that the information taking up valuable space inside our head is still useful. That might make our memories less accurate, but it probably also makes them more relevant to the future.” (…)

[Donald] Lewis had discovered what came to be called memory reconsolidation, the brain’s practice of re-creating memories over and over again. (…)

The science of reconsolidation suggests that the memory is less stable and trustworthy than it appears. Whenever I remember the party, I re-create the memory and alter its map of neural connections. Some details are reinforced—my current hunger makes me focus on the ice cream—while others get erased, like the face of a friend whose name I can no longer conjure. The memory is less like a movie, a permanent emulsion of chemicals on celluloid, and more like a play—subtly different each time it’s performed. In my brain, a network of cells is constantly being reconsolidated, rewritten, remade. That two-letter prefix changes everything. (…)

Once you start questioning the reality of memory, things fall apart pretty quickly. So many of our assumptions about the human mind—what it is, why it breaks, and how it can be healed—are rooted in a mistaken belief about how experience is stored in the brain. (According to a recent survey, 63 percent of Americans believe that human memory “works like a video camera, accurately recording the events we see and hear so that we can review and inspect them later.”) We want the past to persist, because the past gives us permanence. It tells us who we are and where we belong. But what if your most cherished recollections are also the most ephemeral thing in your head? (…)

Reconsolidation provides a mechanistic explanation for these errors. It’s why eyewitness testimony shouldn’t be trusted (even though it’s central to our justice system), why every memoir should be classified as fiction, and why it’s so disturbingly easy to implant false recollections. (The psychologist Elizabeth Loftus has repeatedly demonstrated that nearly a third of subjects can be tricked into claiming a made-up memory as their own. It takes only a single exposure to a new fiction for it to be reconsolidated as fact.) (…)

When we experience a traumatic event, it gets remembered in two separate ways. The first memory is the event itself, that cinematic scene we can replay at will. The second memory, however, consists entirely of the emotion, the negative feelings triggered by what happened. Every memory is actually kept in many different parts of the brain. Memories of negative emotions, for instance, are stored in the amygdala, an almond-shaped area in the center of the brain. (Patients who have suffered damage to the amygdala are incapable of remembering fear.) By contrast, all the relevant details that comprise the scene are kept in various sensory areas—visual elements in the visual cortex, auditory elements in the auditory cortex, and so on. That filing system means that different aspects can be influenced independently by reconsolidation.

The larger lesson is that because our memories are formed by the act of remembering them, controlling the conditions under which they are recalled can actually change their content. (…)

The chemistry of the brain is in constant flux, with the typical neural protein lasting anywhere from two weeks to a few months before it breaks down or gets reabsorbed. How then do some of our memories seem to last forever? It’s as if they are sturdier than the mind itself. Scientists have narrowed down the list of molecules that seem essential to the creation of long-term memory—sea slugs and mice without these compounds are total amnesiacs—but until recently nobody knew how they worked. (…)

A form of protein kinase C called PKMzeta hangs around synapses, the junctions where neurons connect, for an unusually long time. (…) What does PKMzeta do? The molecule’s crucial trick is that it increases the density of a particular type of sensor called an AMPA receptor on the outside of a neuron. It’s an ion channel, a gateway to the interior of a cell that, when opened, makes it easier for adjacent cells to excite one another. (While neurons are normally shy strangers, struggling to interact, PKMzeta turns them into intimate friends, happy to exchange all sorts of incidental information.) This process requires constant upkeep—every long-term memory is always on the verge of vanishing. As a result, even a brief interruption of PKMzeta activity can dismantle the function of a steadfast circuit. (…)
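That “constant upkeep” can be sketched as a toy simulation: synaptic strength decays as proteins turn over, and an ongoing maintenance process (loosely standing in for PKMzeta) keeps re-boosting it. The rates below are invented.

```python
# Toy model of memory maintenance under protein turnover. Decay and
# upkeep rates are invented; "maintenance" loosely stands in for PKMzeta.

def simulate(steps, maintenance_on, weight=1.0, decay=0.05, upkeep=0.05):
    for _ in range(steps):
        weight -= decay * weight        # synaptic proteins break down
        if maintenance_on:
            weight += upkeep            # PKMzeta-like re-boosting
    return weight

print("with upkeep:   ", round(simulate(100, maintenance_on=True), 3))
print("upkeep blocked:", round(simulate(100, maintenance_on=False), 3))
```

With upkeep the weight holds steady; with the maintenance process blocked, even a once-strong circuit collapses toward zero, which is the intuition behind using PKMzeta inhibitors to dismantle a memory.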

Because of the compartmentalization of memory in the brain—the storage of different aspects of a memory in different areas—the careful application of PKMzeta synthesis inhibitors and other chemicals that interfere with reconsolidation should allow scientists to selectively delete aspects of a memory. (…)

The astonishing power of PKMzeta forces us to redefine human memory. While we typically think of memories as those facts and events from the past that stick in the brain, Sacktor’s research suggests that memory is actually much bigger and stranger than that. (…)

Being able to control memory doesn’t simply give us admin access to our brains. It gives us the power to shape nearly every aspect of our lives. There’s something terrifying about this. Long ago, humans accepted the uncontrollable nature of memory; we can’t choose what to remember or forget. But now it appears that we’ll soon gain the ability to alter our sense of the past. (…)

The fact is we already tweak our memories—we just do it badly. Reconsolidation constantly alters our recollections, as we rehearse nostalgias and suppress pain. We repeat stories until they’re stale, rewrite history in favor of the winners, and tamp down our sorrows with whiskey. “Once people realize how memory actually works, a lot of these beliefs that memory shouldn’t be changed will seem a little ridiculous,” Nader says. “Anything can change memory. This technology isn’t new. It’s just a better version of an existing biological process.” (…)

Jonah Lehrer, American author and journalist, The Forgetting Pill Erases Painful Memories Forever, Wired Magazine, Feb 17, 2012. (Third illustration: Dwight Eschliman)

"You could double the number of synaptic connections in a very simple neurocircuit as a result of experience and learning. The reason for that was that long-term memory alters the expression of genes in nerve cells, which is the cause of the growth of new synaptic connections. When you see that at the cellular level, you realize that the brain can change because of experience. It gives you a different feeling about how nature and nurture interact. They are not separate processes.”

Eric R. Kandel, American neuropsychiatrist, Nobel Prize laureate, A Quest to Understand How Memory Works, NYT, March 5, 2012

Prof. Eric Kandel: We Are What We Remember - Memory and Biology

Eric R. Kandel, American neuropsychiatrist, Nobel Prize laureate, We Are What We Remember: Memory and Biology, FORA.tv, Proshansky Auditorium, New York, NY, Mar 28, 2011

See also:

☞ Eric R. Kandel, The Biology of Memory: A Forty-Year Perspective (pdf), Department of Neuroscience, Columbia University, New York, 2009
☞ Eric R. Kandel, A Biological Basis for the Unconscious? (“I want to know where the id, the ego, and the super-ego are located in the brain”), Big Think video, Apr 1, 2012
Memory tag on Lapidarium notes

Nov
23rd
Wed
permalink

The Human Brain Project ☞ reconstructing the brain piece by piece and building a virtual brain in a supercomputer
        image
                                 (Click image to go to The Human Brain Project)

"The brain, with its billions of interconnected neurons, is without any doubt the most complex organ in the body and it will be a long time before we understand all its mysteries. The Human Brain Project proposes a completely new approach. The project is integrating everything we know about the brain into computer models and using these models to simulate the actual working of the brain. Ultimately, it will attempt to simulate the complete human brain. The models built by the project will cover all the different levels of brain organisation – from individual neurons through to the complete cortex. The goal is to bring about a revolution in neuroscience and medicine and to derive new information technologies directly from the architecture of the brain.”Human Brain Project - Introduction

The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level.

The aim of the project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (Switzerland), is to study the brain’s architectural and functional principles. The project is headed by the Institute’s director, Henry Markram. Using a Blue Gene supercomputer running Michael Hines’s NEURON software, the simulation does not consist simply of an artificial neural network, but involves a biologically realistic model of neurons. It is hoped that it will eventually shed light on the nature of consciousness. (Wiki)
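To give a feel for what “biologically realistic” means relative to an artificial neural network: where an ANN unit is a weighted sum passed through a squashing function, even the simplest biophysical models evolve a membrane voltage through time. Below is a minimal leaky integrate-and-fire neuron in Python; it is far simpler than the multi-compartment models Blue Brain runs in NEURON, and its constants are generic textbook values, not project parameters.

```python
# Minimal leaky integrate-and-fire neuron: a crude cousin of the
# detailed compartmental models used in NEURON. Constants are generic
# textbook values, not Blue Brain parameters.

dt = 0.1          # ms, integration step
tau = 10.0        # ms, membrane time constant
v_rest = -65.0    # mV, resting potential
v_thresh = -50.0  # mV, spike threshold
v_reset = -70.0   # mV, post-spike reset
R = 10.0          # MOhm, membrane resistance
I = 2.0           # nA, constant input current

v = v_rest
spike_times = []
for step in range(int(200 / dt)):          # simulate 200 ms
    v += dt * (-(v - v_rest) + R * I) / tau
    if v >= v_thresh:                      # threshold crossed: emit a spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms")
```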

Henry Markram: Supercomputing the brain’s secrets



Henry Markram, Ph.D., Director of the Blue Brain Project at École Polytechnique Fédérale de Lausanne, says the mysteries of the mind can be solved — soon. Mental illness, memory, perception: they’re made of neurons and electric signals, and he plans to find them with a supercomputer that models all the brain’s 100,000,000,000,000 synapses.

Henry Markram builds a brain in a supercomputer, TED.com, July 2009

Henry Markram: Simulating the Brain — The Next Decisive Years

Henry Markram speaks at the International Supercomputing Conference, June 26, 2011.

10 Year Documentary To Follow Bluebrain Project

Bluebrain | Year One from Couple 3 Films.

Noah Hutton (…) has recently released a mini-documentary on the first year of the Blue Brain Project. (…) There are reasons to be hopeful that Markram and others in the field will make reasonable progress in modelling the brain by 2020. As he points out in the video, modeling a single neuron used to be a PhD thesis in and of itself. Now, he can create thousands at the push of a button. As Markram mentions, we don’t have a complete understanding of how many drugs or diseases affect the brain. Nor do we fully understand the nature of memories. A brain simulator could be profoundly helpful as we care for our aging minds. Those minds have at least a decade to wait before we know if Markram and the BBP will be successful in transforming the field of neurology into a computer problem.”

— Aaron Saenz, 10 Year Documentary To Follow Bluebrain Project, Singularity Hub, Feb 12, 2011

See also:

Human Connectome Project ☞ understanding how different parts of the brain communicate with each other
New evidence for innate knowledge. Neurons make connections independently of a subject’s experience, Ecole Polytechnique
Henry Markram and the Human Brain Project are in talks with EU for $1.61 billion to build a human brain within decade, May 18, 2011
☞ Mark Changizi, Later Terminator: We’re Nowhere Near Artificial Brains, Discover Magazine, Nov 16, 2011
☞ David Eagleman, Henry Markram, Will We Ever Understand the Brain?, California Academy of Sciences San Francisco, CA, Fora.tv video, Nov 2, 2011
Allan Jones: A map of the brain, TED.com, July 2011.
Computer modelling: Brain in a box. Henry Markram wants €1 billion to model the entire human brain. Sceptics don’t think he should get it, Nature, 22 Feb 2012
Neuroscience tag on Lapidarium notes

Nov
9th
Wed
permalink

Galileo and the relationship between the humanities and the sciences


“Ever since Galileo, science has been strongly committed to the unification of theories from different disciplines. It cannot accept that the right explanations of human activities must be logically incompatible with the rest of science, or even just independent of it. If science were prepared to settle for less than unification, the difficulty of reconciling quantum mechanics and general relativity wouldn’t be the biggest problem in physics. Biology would not accept the gene as real until it was shown to have a physical structure — DNA — that could do the work geneticists assigned to the gene. For exactly the same reason science can’t accept interpretation as providing knowledge of human affairs if it can’t at least in principle be absorbed into, perhaps even reduced to, neuroscience.

That’s the job of neurophilosophy.

This problem, that thoughts about ourselves or anything else for that matter couldn’t be physical, was for a long time purely academic. Scientists had enough on their plates for 400 years just showing how physical processes bring about chemical processes, and through them biological ones. But now neuroscientists are learning how chemical and biological events bring about the brain processes that actually produce everything the body does, including speech and all other actions.

Research — including Nobel-prize winning neurogenomics and fMRI (functional magnetic resonance imaging) — has revealed how bad interpretation’s explanations of our actions are. And there are clever psychophysical experiments that show us that introspection’s insistence that interpretation really does explain our actions is not to be trusted.

These findings cannot be reconciled with explanation by interpretation. The problem they raise for the humanities can no longer be postponed. Must science write off interpretation the way it wrote off phlogiston theory — a nice try but wrong? Increasingly, the answer that neuroscience gives to this question is “afraid so.”

Few people are prepared to treat history, (auto-) biography and the human sciences like folklore. The reason is obvious. The narratives of history, the humanities and literature provide us with the feeling that we understand what they seek to explain. At their best they also trigger emotions we prize as marks of great art.

But that feeling of understanding, that psychological relief from the itch of curiosity, is not the same thing as knowledge. It is not even a mark of it, as children’s bedtime stories reveal. If the humanities and history provide only feelings (ones explained by neuroscience), that will not be enough to defend their claims to knowledge.

The only solution to the problem faced by the humanities, history and (auto) biography, is to show that interpretation can somehow be grounded in neuroscience. That is job No. 1 for neurophilosophy. And the odds are against it. If this project doesn’t work out, science will have to face plan B: treating the humanities the way we treat the arts, indispensable parts of human experience but not to be mistaken for contributions to knowledge.”

Alex Rosenberg, American philosopher, and the R. Taylor Cole Professor of Philosophy at Duke University, Bodies in Motion: An Exchange, NYT, Nov 6, 2011.

Do the humanities need to be defended from hard science?


"As the mathematician and physicist Mark A. Peterson has shown in his new book, “Galileo’s Muse: Renaissance Arts and Mathematics,” Galileo’s love for the arts profoundly shaped his thinking, and in many ways helped paved the way for his scientific discoveries. An early biography of Galileo by his contemporary Niccolò Gherardini points out that, “He was most expert in all the sciences and arts, as if he were professor of them. He took extraordinary delights in music, painting, and poetry.” For its part, Peterson takes great delight in demonstrating how his immersion in these arts informed his scientific discoveries, and how art and literature prior to Galileo often planted the seeds of scientific progress to come. (…)

Clearly Galileo was an extraordinary man, and a crucial aspect of what made him that man was the intellectual world he was immersed in. This world included mathematics, of course, but it was also full of arts and literature, of philosophy and theology. Peterson argues forcefully, for instance, that Galileo’s mastery of the techniques involved in creating and thinking about perspective in painting could well have influenced his thinking about the relativity of motion, since both require comprehending the importance of multiple points of view. (…)

The idea that the perception of movement depends on one’s point of view also has forebears in proto-scientific thinkers who are far less suitable candidates for the appealing story of how common sense suddenly toppled a 2,000-year-old tradition to usher modern science into the world. Take the poet, philosopher and theologian Giordano Bruno, who seldom engaged in experimentation and who, 30 years before Galileo’s own trial, refused to recant the beliefs that led him to be burned at the stake, beliefs that included the infinity of the universe and the multiplicity of worlds. (…)

Galileo’s insight into the nature of motion was not merely the epiphany of everyday experience that brushed away the fog of scholastic dogma; it was a logical consequence of a long history of engagements with an intellectual tradition that encompassed a multitude of forms of knowledge. That force is not required for an object to stay in motion goes hand in hand with the realization that motion and rest are not absolute terms, but can only be defined relative to what would later be called inertial frames. And this realization owes as much to a literary, philosophical and theological inquiry as it does to pure observation.

Professor Rosenberg uses his brief history of science to ground the argument that neuroscience threatens the humanities, and the only thing that can save them is a neurophilosophy that reconciles brain processes and interpretation. “If this project doesn’t work out,” he writes, “science and the humanities will have to face plan B: treating the humanities the way we treat the arts, indispensable parts of human experience but not to be mistaken for contributions to knowledge.”

But if this is true, should we not then ask what neuroscience could possibly contribute to the very debate we are engaged in at this moment? What would we learn about the truth-value of Professor Rosenberg’s claims or mine if we had even the very best neurological data at our disposal? That our respective pleasure centers light up as we each strike blows for our preferred position? That might well be of interest, but it hardly bears on the issue at hand, namely, the evaluation of evidence — historical or experimental — underlying a claim about knowledge. That evaluation must be interpretative. The only way to dispense with interpretation is to dispense with evidence, and with it knowledge altogether.”

William Egginton is the Andrew W. Mellon Professor in the Humanities and Chair of the Department of German and Romance Languages and Literatures at Johns Hopkins University, Bodies in Motion: An Exchange, NYT, Nov 6, 2011.

See also:

Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense

Nov
7th
Mon
permalink

Concentration. When Our Neurons Remain Silent So That Our Performances May Improve


(Whenever we look carefully for an object around us, the parts of the brain that are coloured in red are activated; but, at the same time, those in blue must deactivate themselves. Credit: Image courtesy of INSERM, Institut national de la santé et de la recherche médicale)

To be able to focus on the world, we need to turn a part of ourselves off for a short while, and this is precisely what our brain does. (…) A team of researchers from Inserm, led by Jean-Philippe Lachaux and Karim Jerbi (Lyon Neuroscience Research Centre), has just demonstrated that a network of specific neurons, referred to as the “default-mode network”, works on a permanent basis even when we are doing nothing. (…)

They demonstrate more specifically that when we need to concentrate, this network disrupts the activation of other specialized neurons when it is not deactivated enough. (…)

When we focus on the things around us, certain parts of the brain are activated: this network, well known to neurobiologists, is called the attention network. Other parts of the brain, however, cease their activity at the same time, as if they generally prevented our attention from being focused on the outside world. These parts of the brain form a network that is extensively studied in neurobiology, and commonly known as the “default-mode network,” because, for a long time, it was believed that it activated itself when the brain had nothing in particular to do. This interpretation was refined through ten years of neuroimaging research that concluded by associating this mysterious network (“the brain’s dark energy” as it was called by one of its discoverers, Marcus Raichle) with a host of intimate and private phenomena of our mental life: self-perception, recollections, imagination, thoughts… (…)

[Researchers] have just revealed how this network interferes with our ability to pay attention, by assessing the activity of the human brain’s default-mode network neurons on a millisecond scale for the first time ever. (…)

The results unambiguously illustrate that whenever we look for an object in the area around us, the neurons of this default-mode network stop their activity. Yet this interruption only lasts for the amount of time required to find the object: less than a tenth of a second after the object has been found, the default-mode network resumes its activity as before. And if our default-mode network is not sufficiently deactivated, then we will need more time to find the object.

These results show that there is fierce competition for our attentional resources inside our brain which, when they are not used to actively analyse our sensorial environment, are instantaneously redirected towards more internal mental processes. The brain hates emptiness and never stays idle, even for a tenth of a second.”

When Our Neurons Remain Silent So That Our Performances May Improve, ScienceDaily, Nov. 3, 2011.

See also:

Transient Suppression of Broadband Gamma Power in the Default-Mode Network Is Correlated with Task Complexity and Subject Performance, The Journal of Neuroscience, 12 Oct 2011.

"Task performance is associated with increased brain metabolism but also with prominent deactivation in specific brain structures known as default-mode network (DMN). (…)

We found that all DMN areas displayed transient suppressions of broadband gamma (60–140 Hz) power during performance of a visual search task and, critically, we show for the first time that the millisecond range duration and extent of the transient gamma suppressions are correlated with task complexity and subject performance. In addition, trial-by-trial correlations revealed that spatially distributed gamma power increases and decreases formed distinct anticorrelated large-scale networks.

Beyond unraveling the electrophysiological basis of DMN dynamics, our results suggest that, rather than indicating a mere switch to a global exteroceptive mode, DMN deactivation encodes the extent and efficiency of our engagement with the external world. (…)”
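As a signal-processing aside, the core measurement (broadband gamma power in a 60–140 Hz band) can be sketched in a few lines of Python with SciPy. The synthetic trace below merely stands in for an intracranial recording, and the window choices are illustrative, not the study's.

```python
import numpy as np
from scipy.signal import welch

# Sketch: estimate broadband gamma (60-140 Hz) power in a signal.
# The synthetic trace stands in for a real intracranial recording.

fs = 1000                                   # Hz, sampling rate
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 90 * t) + rng.normal(0, 1, t.size)  # 90 Hz + noise

freqs, psd = welch(signal, fs=fs, nperseg=256)      # power spectral density
band = (freqs >= 60) & (freqs <= 140)
gamma_power = np.trapz(psd[band], freqs[band])      # integrate over the band

print(f"broadband gamma power: {gamma_power:.4f}")
```

Comparing this quantity between task and rest windows, millisecond-resolved in the actual study, is what lets the authors relate DMN suppression to performance.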

Nov
6th
Sun
permalink

Fear, Greed, and Financial Crises: A Cognitive Neurosciences Perspective

                           
                                                                           NYT

"Far be it from me to say that we ever shall have the means of measuring directly the feelings of the human heart. A unit of pleasure or of pain is difficult even to conceive; but it is the amount of these feelings which is continually prompting us to buying and selling, borrowing and lending, labouring and resting, producing and consuming; and it is from the quantitative effects of the feelings that we must estimate their comparative amounts."

William Stanley Jevons, British economist, in 1871.

Abstract: 

"Historical accounts of financial crises suggest that fear and greed are the common denominators of these disruptive events: periods of unchecked greed eventually lead to excessive leverage and unsustainable asset-price levels, and the inevitable collapse results in unbridled fear, which must subside before any recovery is possible. The cognitive neurosciences may provide some new insights into this boom/bust pattern through a deeper understanding of the dynamics of emotion and human behavior.

In this chapter, I describe some recent research from the neurosciences literature on fear and reward learning, mirror neurons, theory of mind, and the link between emotion and rational behavior. By exploring the neuroscientific basis of cognition and behavior, we may be able to identify more fundamental drivers of financial crises, and improve our models and methods for dealing with them.”

To read Andrew W. Lo’s full research paper, click Fear, Greed, and Financial Crises: A Cognitive Neurosciences Perspective (pdf), 50 pages, MIT Sloan School of Management; MIT CSAIL; National Bureau of Economic Research (NBER), Oct 13, 2011.

Nov
3rd
Thu
permalink

The ‘rich club’ that rules your brain

        
The connectome with its 12 “rich club” hubs. Green means fewer connections, red means more connections (Image: Martijn van den Heuvel/University Medical Center in Utrecht)

"Not all brain regions are created equal – instead, a “rich club” of 12 well-connected hubs orchestrates everything that goes on between your ears. This elite cabal could be what gives us consciousness, and might be involved in disorders such as schizophrenia and Alzheimer’s disease.

As part of an ongoing effort to map the human “connectome” – the full network of connections in the brain – Martijn van den Heuvel of the University Medical Center in Utrecht, the Netherlands, and Olaf Sporns of Indiana University Bloomington scanned the brains of 21 people as they rested for 30 minutes.

The researchers used a technique called diffusion tensor imaging to track the movements of water through 82 separate areas of the brain and their interconnecting neurons. They found 12 areas of the brain had significantly more connections than all the others, both to other regions and among themselves.

"These 12 regions have twice the connections of other brain regions, and they’re more strongly connected to each other than to other regions," says Van den Heuvel. “If we wanted to look for consciousness in the brain, I would bet on it turning out to be this rich club,” he adds.

Members of the elite

The elite group consists of six pairs of identical regions, with one of each pair in each hemisphere of the brain. Each member is known to accept only preprocessed, high-order information, rather than raw incoming sensory data.

Best connected of all is the precuneus, an area at the back of the brain. Van den Heuvel says its function is not well understood, but thinks that it acts as an “integrator region” collating high-level information from all over the brain.

Another prominent hub is the superior frontal cortex, which plans actions in response to events and governs where you should focus your attention. The superior parietal cortex – the third hub – is linked to the visual cortex and registers where different objects in your immediate vicinity are.

To bring memory into the equation, the hippocampus is another hub – that’s where memories are processed, stored and consolidated. The fifth member of the club is the thalamus, which, among other things, interlinks visual processes; the last member, the putamen, coordinates movement.

Together the hubs enable the brain to constantly assess, prioritise and filter incoming information, and then put it all together to make decisions about what to do next.

This network makes the way the brain functions more robust overall, but it could also leave the entire system vulnerable to breakdown if key hubs are damaged or disabled, says Van den Heuvel.

Downfall of the rich

After mapping the connections, Van den Heuvel’s team manipulated the data to see what might happen if parts of the rich club were damaged. The simulated brain lost three times as much function when the elite hubs were taken out as when random parts of the brain were lost.

"If [one of these] regions goes down, it can take the others down too, just like when banks failed in the global economic crisis,” says Van den Heuvel. (…)

"The human brain is extraordinarily complex, yet it works efficiently, and a major challenge has been to discover principles of brain wiring and organisation that explain this," says Randy Buckner, a neuroscientist at Harvard University.

"What Van den Heuvel and Sporns show is that some regions of the brain are embedded in densely connected networks – so-called rich clubs – that may act together as a functional unit," says Buckner. "Such an organisation might help explain how complex networks of brain regions can work together efficiently.""

Andy Coghlan, The ‘rich club’ that rules your brain, New Scientist, 2 Nov 2011

See also:

Human Connectome Project - understanding how different parts of the brain communicate with each other
Revealed – the capitalist network that runs the world, New Scientist, Oct 19, 2011

Oct
25th
Tue
permalink

Iain McGilchrist on The Divided Brain and the Making of the Western World

                             

"Just as the human body represents a whole museum of organs, with a long evolutionary history behind them, so we should expect the mind to be organized in a similar way. (…) We receive along with our body a highly differentiated brain which brings with it its entire history, and when it becomes creative it creates out of this history – out of the history of mankind (…) that age-old natural history which has been transmitted in living form since the remotest times, namely the history of the brain structure."

Carl Jung cited in The Master and His Emissary, Yale University Press, 2009, p.8.

Renowned psychiatrist and writer Iain McGilchrist explains how the ‘divided brain’ has profoundly altered human behaviour, culture and society. He draws on a vast body of recent experimental brain research to reveal that the differences between the brain’s two hemispheres are profound.

The left hemisphere is detail-oriented, prefers mechanisms to living things, and is inclined to self-interest. It misunderstands whatever is not explicit, lacks empathy and is unreasonably certain of itself, whereas the right hemisphere has greater breadth, flexibility and generosity, but lacks certainty.

It is vital that the two hemispheres work together, but McGilchrist argues that the left hemisphere is increasingly taking precedence in the modern world, resulting in a society where a rigid and bureaucratic obsession with structure and self-interest hold sway.

RSA, 17th Nov 2010

Iain McGilchrist points out that the idea that “reason [is] in the left hemisphere and something like creativity and emotion [are] in the right hemisphere” is an unhelpful misconception. He states that “every single brain function is carried out by both hemispheres. Reason and emotion and imagination depend on the coming together of what both hemispheres contribute.” Nevertheless he does see an obvious dichotomy, and asks himself: “if the brain is all about making connections, why is it that it’s evolved with this whopping divide down the middle?”

Natasha Mitchell, "The Master and his Emissary: the divided brain and the reshaping of Western civilisation", 19 June 2010

      

"The author holds instead that each of the hemispheres of the brain has a different “take” on the world or produces a different “version” of the world, though under normal circumstances these work together. This, he says, is basically to do with attention. He illustrates this with the case of chicks which use the eye connected to the left hemisphere to attend to the fine detail of picking seeds from amongst grit, whilst the other eye attends to the broader threat from predators. According to the author, “The left hemisphere has its own agenda, to manipulate and use the world”; its world view is essentially that of a mechanism. The right has a broader outlook, “has no preconceptions, and simply looks out to the world for whatever might be. In other words it does not have any allegiance to any particular set of values.”

Staff, "Two worlds of the left and right brain (audio podcast)", BBC Radio 4, 14 November 2009

McGilchrist explains this more fully in a later interview for ABC Radio National’s All in the Mind programme, stating: “The right hemisphere sees a great deal, but in order to refine it, and to make sense of it in certain ways—in order to be able to use what it understands of the world and to be able to manipulate the world—it needs to delegate the job of simplifying it and turning it into a usable form to another part of the brain” [the left hemisphere]. Though he sees this as an essential “double act”, McGilchrist points to the problem that the left hemisphere has a “narrow, decontextualised and theoretically based model of the world which is self consistent and is therefore quite powerful” and to the problem of the left hemisphere’s lack of awareness of its own shortcomings; whilst in contrast, the right hemisphere is aware that it is in a symbiotic relationship.

How the brain has shaped our world

"The author describes the evolution of Western culture, as influenced by hemispheric brain functioning, from the ancient world, through the Renaissance and Reformation; the Enlightenment; Romanticism and Industrial Revolution; to the modern and postmodern worlds which, to our detriment, are becoming increasingly dominated by the left brain. According to McGilchrist, interviewed for ABC Radio National’s All in the Mind programme, rather than seeking to explain the social and cultural changes and structure of civilisation in terms of the brain — which would be reductionist — he is pointing to a wider, more inclusive perspective and greater reality in which there are two competing ways of thinking and being, and that in modern Western society we appear increasingly to be able to only entertain one viewpoint: that of the left hemisphere.

The author argues that the brain and the mind do not simply experience the world, but that the world we experience is a product or meeting of that which is outside us with our mind. The outcome, the nature of this world, is thus dependent upon “which mode of attention we bring to bear on the world”.

McGilchrist sees an occasional flowering of “the best of the right hemisphere and the best of the left hemisphere working together” in our history: as witnessed in Athens in the 6th century BC by activity in the humanities and in science, and in ancient Rome during the Augustan era. However, he also sees that as time passes, the left hemisphere once again comes to dominate affairs and things slide back into “a more theoretical and conceptualised, abstracted, bureaucratic sort of view of the world”. According to McGilchrist, the cooperative use of both left and right hemispheres diminished and became imbalanced in favour of the left in the time of the classical Greek philosophers Parmenides and Plato and in the late classical Roman era. This cooperation and openness were regained during the Renaissance 1,000 years later, which brought a “sudden efflorescence of creative life in the sciences and the arts”. However, with the Reformation, the early Enlightenment, and what has followed as rationalism has arisen, our world has once again become increasingly rigid, simplified and rule-bound.

Looking at more recent Western history, McGilchrist sees in the Industrial Revolution that for the first time artefacts were being made “very much to the way the left hemisphere sees the world — simple solids that are regular, repeated, not individual in the way that things that are made by hand are” and that a transformation of the environment in a similar vein followed on from that; that what was perceived inwardly was projected outwardly on a mass scale. The author argues that the scientific materialism which developed in the 19th century is still with us, at least in the biological sciences, though he sees physics as having moved on.

McGilchrist does not see modernism and postmodernism as being in opposition to this, but also “symptomatic of a shift towards the left hemisphere’s conception of the world”, taking the idea that there is no absolute truth and turning that into “there is no truth at all”, and he finds some of the movements’ works of art “symptomatic of people whose right hemisphere is not working very well.” McGilchrist cites the American psychologist Louis Sass, author of Madness and Modernism, pointing out that Sass “draws extensive parallels between the phenomena of modernism and postmodernism and of schizophrenia”, with things taken out of context and fragmented.”

The Master and His Emissary, Wiki

The Master and His Emissary

“Whatever the relationship between consciousness and the brain – unless the brain plays no role in bringing the world as we experience it into being, a position that must have few adherents – its structure has to be significant. It might even give us clues to understanding the structure of the world it mediates, the world we know. So, to ask a very simple question, why is the brain so clearly and profoundly divided? Why, for that matter, are the two cerebral hemispheres asymmetrical? Do they really differ in any important sense? If so, in what way? (…)

Enthusiasm for finding the key to hemisphere differences has waned, and it is no longer respectable for a neuroscientist to hypothesise on the subject. (…)

These beliefs could, without much violence to the facts, be characterised as versions of the idea that the left hemisphere is somehow gritty, rational, realistic but dull, and the right hemisphere airy-fairy and impressionistic, but creative and exciting; a formulation reminiscent of Sellar and Yeatman’s immortal distinction (in their parody of English history teaching, 1066 and All That) between the Roundheads – ‘Right and Repulsive’ – and the Cavaliers – ‘Wrong but Wromantic’. In reality, both hemispheres are crucially involved in reason, just as they are in language; both hemispheres play their part in creativity. Perhaps the most absurd of these popular misconceptions is that the left hemisphere, hard-nosed and logical, is somehow male, and the right hemisphere, dreamy and sensitive, is somehow female. (…)

V. S. Ramachandran, another well-known and highly regarded neuroscientist, accepts that the issue of hemisphere difference has been traduced, but concludes: ‘The existence of such a pop culture shouldn’t cloud the main issue – the notion that the two hemispheres may indeed be specialised for different functions.’ (…)

I believe there is, literally, a world of difference between the hemispheres. Understanding quite what that is has involved a journey through many apparently unrelated areas: not just neurology and psychology, but philosophy, literature and the arts, and even, to some extent, archaeology and anthropology. (…)

I have come to believe that the cerebral hemispheres differ in ways that have meaning. There is a plethora of well-substantiated findings that indicate that there are consistent differences – neuropsychological, anatomical, physiological and chemical, amongst others – between the hemispheres. But when I talk of ‘meaning’, it is not just that I believe there to be a coherent pattern to these differences. That is a necessary first step. I would go further, however, and suggest that such a coherent pattern of differences helps to explain aspects of human experience, and therefore means something in terms of our lives, and even helps explain the trajectory of our common lives in the Western world.

My thesis is that for us as human beings there are two fundamentally opposed realities, two different modes of experience; that each is of ultimate importance in bringing about the recognisably human world; and that their difference is rooted in the bihemispheric structure of the brain. It follows that the hemispheres need to co-operate, but I believe they are in fact involved in a sort of power struggle, and that this explains many aspects of contemporary Western culture. (…)

The brain has evolved, like the body in which it sits, and is in the process of evolving. But the evolution of the brain is different from the evolution of the body. In the brain, unlike in most other human organs, later developments do not so much replace earlier ones as add to, and build on top of, them. Thus the cortex, the outer shell that mediates most so-called higher functions of the brain, and certainly those of which we are conscious, arose out of the underlying subcortical structures which are concerned with biological regulation at an unconscious level; and the frontal lobes, the most recently evolved part of the neocortex, which occupy a much bigger part of the brain in humans than in our animal relatives, and which grow forwards from and ‘on top of’ the rest of the cortex, mediate most of the sophisticated activities that mark us out as human – planning, decision making, perspective taking, self-control, and so on. In other words, the structure of the brain reflects its history: as an evolving dynamic system, in which one part evolves out of, and in response to, another. (…)

If there is after all coherence to the way in which the correlates of our experience are grouped and organised in the brain, and we can see these ‘functions’ forming intelligible wholes, corresponding to areas of experience, and see how they relate to one another at the brain level, then this casts some light on the structure and experience of our mental world. In this sense the brain is – in fact it has to be – a metaphor of the world. (…)

I believe that there are two fundamentally opposed realities rooted in the bihemispheric structure of the brain. But the relationship between them is no more symmetrical than that of the chambers of the heart – in fact, less so; more like that of the artist to the critic, or a king to his counsellor.

There is a story in Nietzsche that goes something like this. There was once a wise spiritual master, who was the ruler of a small but prosperous domain, and who was known for his selfless devotion to his people. As his people flourished and grew in number, the bounds of this small domain spread; and with it the need to trust implicitly the emissaries he sent to ensure the safety of its ever more distant parts. It was not just that it was impossible for him personally to order all that needed to be dealt with: as he wisely saw, he needed to keep his distance from, and remain ignorant of, such concerns. And so he nurtured and trained carefully his emissaries, in order that they could be trusted. Eventually, however, his cleverest and most ambitious vizier, the one he most trusted to do his work, began to see himself as the master, and used his position to advance his own wealth and influence. He saw his master’s temperance and forbearance as weakness, not wisdom, and on his missions on the master’s behalf, adopted his mantle as his own – the emissary became contemptuous of his master. And so it came about that the master was usurped, the people were duped, the domain became a tyranny; and eventually it collapsed in ruins.

The meaning of this story is as old as humanity, and resonates far from the sphere of political history. I believe, in fact, that it helps us understand something taking place inside ourselves, inside our very brains, and played out in the cultural history of the West, particularly over the last 500 years or so. (…)

I hold that, like the Master and his emissary in the story, though the cerebral hemispheres should co-operate, they have for some time been in a state of conflict. The subsequent battles between them are recorded in the history of philosophy, and played out in the seismic shifts that characterise the history of Western culture. At present the domain – our civilisation – finds itself in the hands of the vizier, who, however gifted, is effectively an ambitious regional bureaucrat with his own interests at heart. Meanwhile the Master, the one whose wisdom gave the people peace and security, is led away in chains. The Master is betrayed by his emissary.”

Iain McGilchrist, psychiatrist and writer, The Master and His Emissary, Yale University Press, 2009. Illustrations: 1), 2) Shalmor Avnon Amichay/Y&R Interactive

Iain McGilchrist: The Divided Brain | RSA animated

RSA, 17th Nov 2010

See also:

☞ Iain McGilchrist, The Battle Between the Brain’s Left and Right Hemispheres, WSJ.com, Jan 2, 2010
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Mind and Brain tag on Lapidarium notes

Oct
3rd
Mon
permalink

Time and the Brain. Eagleman: ‘Time is not just a neuronal computation—a matter for biological clocks—but a window on the movements of the mind’


"Instead of reality being passively recorded by the brain, it is actively constructed by it."

David Eagleman, Incognito: The Secret Lives of the Brain, Pantheon Books, 2011

“Clocks offer at best a convenient fiction, [David Eagleman] says. They imply that time ticks steadily, predictably forward, when our experience shows that it often does the opposite: it stretches and compresses, skips a beat and doubles back.

Just how many clocks we contain still isn’t clear. The most recent neuroscience papers make the brain sound like a Victorian attic, full of odd, vaguely labelled objects ticking away in every corner. The circadian clock, which tracks the cycle of day and night, lurks in the suprachiasmatic nucleus, in the hypothalamus. The cerebellum, which governs muscle movements, may control timing on the order of a few seconds or minutes. The basal ganglia and various parts of the cortex have all been nominated as timekeepers, though there’s some disagreement on the details.

The standard model, proposed by the late Columbia psychologist John Gibbon in the nineteen-seventies, holds that the brain has “pacemaker” neurons that release steady pulses of neurotransmitters. More recently, at Duke, the neuroscientist Warren Meck has suggested that timing is governed by groups of neurons that oscillate at different frequencies. At U.C.L.A., Dean Buonomano believes that areas throughout the brain function as clocks, their tissue ticking with neural networks that change in predictable patterns. “Imagine a skyscraper at night,” he told me. “Some people on the top floor work till midnight, while some on the lower floors may go to bed early. If you studied the patterns long enough, you could tell the time just by looking at which lights are on.”
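
Buonomano’s skyscraper analogy hides a concrete decoding scheme, and a few lines of code make it explicit. The toy sketch below is mine, not his: it invents a small network whose units switch on and off on fixed schedules, then recovers the elapsed time by matching a noisy snapshot of the “lights” against the stored schedules.

# Toy illustration of a distributed "population clock" (my sketch, not
# Buonomano's actual model): each unit follows its own fixed on/off
# schedule, so the joint pattern labels the time step with no single
# pacemaker neuron anywhere.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_times = 32, 50

# Each unit's reproducible on/off schedule, one row per time step,
# like office lights that go dark at different hours.
schedule = rng.random((n_times, n_units)) < 0.5

def decode_time(pattern):
    """Return the time step whose stored pattern best matches `pattern`."""
    matches = (schedule == pattern).sum(axis=1)
    return int(np.argmax(matches))

# Look at the lights at time step 23, with 5% of units misreported.
t_true = 23
snapshot = schedule[t_true].copy()
flips = rng.random(n_units) < 0.05
snapshot[flips] = ~snapshot[flips]
print(decode_time(snapshot))  # -> 23, with high probability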

Time isn’t like the other senses, Eagleman says. Sight, smell, touch, taste, and hearing are relatively easy to isolate in the brain. They have discrete functions that rarely overlap: it’s hard to describe the taste of a sound, the color of a smell, or the scent of a feeling. (Unless, of course, you have synesthesia—another of Eagleman’s obsessions.) But a sense of time is threaded through everything we perceive. It’s there in the length of a song, the persistence of a scent, the flash of a light bulb. “There’s always an impulse toward phrenology in neuroscience—toward saying, ‘Here is the spot where it’s happening,’ ” Eagleman told me. “But the interesting thing about time is that there is no spot. It’s a distributed property. It’s metasensory; it rides on top of all the others.”

The real mystery is how all this is coördinated. When you watch a ballgame or bite into a hot dog, your senses are in perfect synch: they see and hear, touch and taste the same thing at the same moment. Yet they operate at fundamentally different speeds, with different inputs. Sound travels more slowly than light, and aromas and tastes more slowly still. Even if the signals reached your brain at the same time, they would get processed at different rates. The reason that a hundred-metre dash starts with a pistol shot rather than a burst of light, Eagleman pointed out, is that the body reacts much more quickly to sound. Our ears and auditory cortex can process a signal forty milliseconds faster than our eyes and visual cortex—more than making up for the speed of light. It’s another vestige, perhaps, of our days in the jungle, when we’d hear the tiger long before we’d see it.
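
The sprint-start claim is easy to sanity-check with rough numbers. The figures below are my own illustrative assumptions (a 5-metre pistol-to-runner distance; 343 m/s for sound), not measurements from the article; only the roughly 40 ms processing advantage comes from the text.

# Back-of-the-envelope check: does hearing's processing head start beat
# sound's slower travel time? (Illustrative numbers, not from the article.)
SPEED_OF_SOUND = 343.0        # m/s in air at ~20 C
PROCESSING_ADVANTAGE = 0.040  # s; hearing beats vision by ~40 ms (per the text)

distance = 5.0                            # m, assumed pistol-to-runner distance
sound_travel = distance / SPEED_OF_SOUND  # ~0.015 s; light covers 5 m in ~17 ns
net = PROCESSING_ADVANTAGE - sound_travel
print(f"sound arrives {sound_travel*1e3:.0f} ms late, "
      f"yet still wins by {net*1e3:.0f} ms")  # ~25 ms in sound's favour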

In Eagleman’s essay “Brain Time,” published in the 2009 collection “What’s Next? Dispatches on the Future of Science,” he borrows a conceit from Italo Calvino’s “Invisible Cities.” The brain, he writes, is like Kublai Khan, the great Mongol emperor of the thirteenth century. It sits enthroned in its skull, “encased in darkness and silence,” at a lofty remove from brute reality. Messengers stream in from every corner of the sensory kingdom, bringing word of distant sights, sounds, and smells. Their reports arrive at different rates, often long out of date, yet the details are all stitched together into a seamless chronology. The difference is that Kublai Khan was piecing together the past. The brain is describing the present—processing reams of disjointed data on the fly, editing everything down to an instantaneous now. (…)

[Eagleman] thought of time not just as a neuronal computation—a matter for biological clocks—but as a window on the movements of the mind. (…)

You feel it now—not in half a second. But perception and reality are often a little out of register, as the saccade experiment showed. If all our senses are slightly delayed, we have no context by which to measure a given lag. Reality is a tape-delayed broadcast, carefully censored before it reaches us.

“Living in the past may seem like a disadvantage, but it’s a cost that the brain is willing to pay,” Eagleman said. “It’s trying to put together the best possible story about what’s going on in the world, and that takes time.” Touch is the slowest of the senses, since the signal has to travel up the spinal cord from as far away as the big toe. That could mean that the over-all delay is a function of body size: elephants may live a little farther in the past than hummingbirds, with humans somewhere in between. The smaller you are, the more you live in the moment. (…)
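
Eagleman’s body-size conjecture can also be put into rough numbers. The conduction velocity and path lengths below are my own illustrative assumptions, not figures from the piece:

# Touch latency scales as path length over nerve conduction velocity.
# (Assumed values: ~50 m/s for myelinated fibres, guessed path lengths.)
CONDUCTION_VELOCITY = 50.0  # m/s

for animal, path_m in [("hummingbird", 0.05), ("human", 1.6), ("elephant", 4.0)]:
    delay_ms = path_m / CONDUCTION_VELOCITY * 1000
    print(f"{animal:11s} toe-to-brain delay ~ {delay_ms:.0f} ms")
# hummingbird ~1 ms, human ~32 ms, elephant ~80 ms: the bigger the body,
# the further "in the past" the assembled present lives.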

[T]ime and memory are so tightly intertwined that they may be impossible to tease apart.

One of the seats of emotion and memory in the brain is the amygdala, he explained. When something threatens your life, this area seems to kick into overdrive, recording every last detail of the experience. The more detailed the memory, the longer the moment seems to last. “This explains why we think that time speeds up when we grow older,” Eagleman said—why childhood summers seem to go on forever, while old age slips by while we’re dozing. The more familiar the world becomes, the less information your brain writes down, and the more quickly time seems to pass. (…)

“Time is this rubbery thing,” Eagleman said. “It stretches out when you really turn your brain resources on, and when you say, ‘Oh, I got this, everything is as expected,’ it shrinks up.” The best example of this is the so-called oddball effect—an optical illusion that Eagleman had shown me in his lab. It consisted of a series of simple images flashing on a computer screen. Most of the time, the same picture was repeated again and again: a plain brown shoe. But every so often a flower would appear instead. To my mind, the change was a matter of timing as well as of content: the flower would stay onscreen much longer than the shoe. But Eagleman insisted that all the pictures appeared for the same length of time. The only difference was the degree of attention that I paid to them. The shoe, by its third or fourth appearance, barely made an impression. The flower, more rare, lingered and blossomed, like those childhood summers. (…)”

Burkhard Bilger on David Eagleman, neuroscientist at Baylor College of Medicine, where he directs the Laboratory for Perception and Action and the Initiative on Neuroscience and Law, The Possibilian, The New Yorker, April 25, 2011 (Illustration source)

See also:

David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
☞ David Eagleman, Brain Time, Edge, June 24, 2009 
David Eagleman on the conscious mind
The Experience and Perception of Time, Stanford Encyclopedia of Philosophy
Time tag on Lapidarium notes

Sep
30th
Fri
permalink

Why Does Beauty Exist? Jonah Lehrer: ‘Beauty is a particularly potent and intense form of curiosity’

Interwoven Beauty by John Lautermilch

Curiosity

"Here’s my (extremely speculative) theory: Beauty is a particularly potent and intense form of curiosity. It’s a learning signal urging us to keep on paying attention, an emotional reminder that there’s something here worth figuring out. Art hijacks this ancient instinct: If we’re looking at a Rothko, that twinge of beauty in the mOFC is telling us that this painting isn’t just a blob of color; if we’re listening to a Beethoven symphony, the feeling of beauty keeps us fixated on the notes, trying to find the underlying pattern; if we’re reading a poem, a particularly beautiful line slows down our reading, so that we might pause and figure out what the line actually means. Put another way, beauty is a motivational force that helps modulate conscious awareness. The problem beauty solves is the problem of trying to figure out which sensations are worth making sense of and which ones can be easily ignored.

Let’s begin with the neuroscience of curiosity, that weak form of beauty. There’s an interesting recent study from the lab of Colin Camerer at Caltech, led by Min Jeong Kang. (…)

The first thing the scientists discovered is that curiosity obeys an inverted U-shaped curve, so that we’re most curious when we know a little about a subject (our curiosity has been piqued) but not too much (we’re still uncertain about the answer). This supports the information gap theory of curiosity, which was first developed by George Loewenstein of Carnegie-Mellon in the early 90s. According to Loewenstein, curiosity is rather simple: It comes when we feel a gap “between what we know and what we want to know”. This gap has emotional consequences: it feels like a mental itch. We seek out new knowledge because that’s how we scratch the itch.
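
One simple way to get an inverted-U out of the information-gap idea (my stand-in, not the model the Caltech group actually fit) is to score curiosity as the entropy of your confidence that you know the answer: it is low when you know nothing or everything, and peaks at intermediate confidence.

import math

def curiosity(p):
    """Binary entropy of confidence p, in bits: a toy inverted-U."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for p in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(f"confidence {p:.2f} -> curiosity {curiosity(p):.2f} bits")
# Peaks at 0.50: piqued, but still uncertain, the middle of the inverted U.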

The fMRI data nicely extended this information gap model of curiosity. It turns out that, in the moments after the question was first asked, subjects showed a substantial increase in brain activity in three separate areas: the left caudate, the prefrontal cortex and the parahippocampal gyri. The most interesting finding is the activation of the caudate, which seems to sit at the intersection of new knowledge and positive emotions. (For instance, the caudate has been shown to be activated by various kinds of learning that involve feedback, while it’s also been closely linked to various parts of the dopamine reward pathway.) The lesson is that our desire for more information – the cause of curiosity – begins as a dopaminergic craving, rooted in the same primal pathway that responds to sex, drugs and rock and roll.

I see beauty as a form of curiosity that exists in response to sensation, and not just information. It’s what happens when we see something and, even though we can’t explain why, want to see more. But here’s the interesting bit: the hook of beauty, like the hook of curiosity, is a response to an incompleteness. It’s what happens when we sense something missing, when there’s an unresolved gap, when a pattern is almost there, but not quite. I’m thinking here of that wise Leonard Cohen line: “There’s a crack in everything – that’s how the light gets in.” Well, a beautiful thing has been cracked in just the right way.

Beautiful music and the brain

The best way to reveal the link between curiosity and beauty is with music. Why do we perceive certain musical sounds as beautiful? On the one hand, music is a purely abstract art form, devoid of language or explicit ideas. The stories it tells are all subtlety and subtext; there is no content to get curious about. And yet, even though music says little, it still manages to touch us deep, to titillate some universal dorsal hairs.

We can now begin to understand where these feelings come from, why a mass of vibrating air hurtling through space can trigger such intense perceptions of beauty. Consider this recent paper in Nature Neuroscience by a team of Montreal researchers. (…)

Because the scientists were combining methodologies (PET and fMRI) they were able to obtain a precise portrait of music in the brain. The first thing they discovered (using ligand-based PET) is that beautiful music triggers the release of dopamine in both the dorsal and ventral striatum. This isn’t particularly surprising: these regions have long been associated with the response to pleasurable stimuli. The more interesting finding emerged from a close study of the timing of this response, as the scientists looked to see what was happening in the seconds before the subjects got the chills.
I won’t go into the precise neural correlates – let’s just say that you should thank your right nucleus accumbens the next time you listen to your favorite song – but I want instead to focus on an interesting distinction observed in the experiment:


[Figure: dopamine activity over time, rising in the caudate during anticipation of a favorite passage and peaking in the nucleus accumbens at the emotional climax]

In essence, the scientists found that our favorite moments in the music – those sublimely beautiful bits that give us the chills – were preceded by a prolonged increase of activity in the caudate, the same brain area involved in curiosity. They call this the “anticipatory phase,” as we await the arrival of our favorite part:

Immediately before the climax of emotional responses there was evidence for relatively greater dopamine activity in the caudate. This subregion of the striatum is interconnected with sensory, motor and associative regions of the brain and has been typically implicated in learning of stimulus-response associations and in mediating the reinforcing qualities of rewarding stimuli such as food.

In other words, the abstract pitches have become a primal reward cue, the cultural equivalent of a bell that makes us drool. Here is their summary:

The anticipatory phase, set off by temporal cues signaling that a potentially pleasurable auditory sequence is coming, can trigger expectations of euphoric emotional states and create a sense of wanting and reward prediction. This reward is entirely abstract and may involve such factors as suspended expectations and a sense of resolution. Indeed, composers and performers frequently take advantage of such phenomena, and manipulate emotional arousal by violating expectations in certain ways or by delaying the predicted outcome (for example, by inserting unexpected notes or slowing tempo) before the resolution to heighten the motivation for completion.

(…)

While music can often seem (at least to the outsider) like an intricate pattern of pitches – it’s art at its most mathematical – it turns out that the most important part of every song or symphony is when the patterns break down, when the sound becomes unpredictable. If the music is too obvious, it is annoyingly boring, like an alarm clock. (Numerous studies, after all, have demonstrated that dopamine neurons quickly adapt to predictable rewards. If we know what’s going to happen next, then we don’t get excited.) This is why composers introduce the tonic note in the beginning of the song and then studiously avoid it until the end. They want to make us curious, to create a beautiful gap between what we hear and what we want to hear.
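
The parenthetical about dopamine neurons adapting to predictable rewards is the classic reward-prediction-error account, and a minimal Rescorla-Wagner-style update shows the adaptation directly. This is an illustrative sketch with an assumed learning rate, not anything from Lehrer’s article:

# Reward-prediction error: the "excitement" signal is reward minus
# prediction, and it decays to zero once the reward is fully expected.
alpha = 0.3       # learning rate (assumed)
prediction = 0.0
reward = 1.0      # the same resolving chord, delivered every trial

for trial in range(1, 11):
    surprise = reward - prediction   # dopamine-like prediction error
    prediction += alpha * surprise   # expectation catches up
    print(f"trial {trial:2d}: surprise = {surprise:.3f}")
# 1.000, 0.700, 0.490, ... -> ~0: a fully predicted reward no longer
# excites, which is why composers withhold the expected resolution.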

To demonstrate this psychological principle, the musicologist Leonard Meyer, in his classic book Emotion and Meaning in Music (1956), analyzed the 5th movement of Beethoven’s String Quartet in C-sharp minor, Op. 131. Meyer wanted to show how music is defined by its flirtation with – but not submission to – our expectations of order. To prove his point, Meyer dissected fifty measures of Beethoven’s masterpiece, showing how Beethoven begins with the clear statement of a rhythmic and harmonic pattern and then, in an intricate tonal dance, carefully avoids repeating it. What Beethoven does instead is suggest variations of the pattern. He is its evasive shadow. If E major is the tonic, Beethoven will play incomplete versions of the E major chord, always careful to avoid its straight expression. He wants to preserve an element of uncertainty in his music, making our brains exceedingly curious for the one chord he refuses to give us. Beethoven saves that chord for the end.

According to Meyer, it is the suspenseful tension of music (arising out of our unfulfilled expectations) that is the source of the music’s beauty. While earlier theories of music focused on the way a noise can refer to the real world of images and experiences (its “connotative” meaning), Meyer argued that the emotions we find in music come from the unfolding events of the music itself. This “embodied meaning” arises from the patterns the symphony invokes and then ignores, from the ambiguity it creates inside its own form. “For the human mind,” Meyer writes, “such states of doubt and confusion are abhorrent. When confronted with them, the mind attempts to resolve them into clarity and certainty.” And so we wait, expectantly, for the resolution of E major, for Beethoven’s established pattern to be completed. This nervous anticipation, says Meyer, “is the whole raison d’etre of the passage, for its purpose is precisely to delay the cadence in the tonic.” The uncertainty – that crack in the melody – makes the feeling.

Why the feeling of beauty is useful

What I like about this speculation is that it begins to explain why the feeling of beauty is useful. The aesthetic emotion might have begun as a cognitive signal telling us to keep on looking, because there is a pattern here that we can figure out. In other words, it’s a sort of metacognitive hunch, a response to complexity that isn’t incomprehensible. Although we can’t quite decipher this sensation – and it doesn’t matter if the sensation is a painting or a symphony – the beauty keeps us from looking away, tickling those dopaminergic neurons and dorsal hairs. Like curiosity, beauty is a motivational force, an emotional reaction not to the perfect or the complete, but to the imperfect and incomplete. We know just enough to know that we want to know more; there is something here, we just don’t know what. That’s why we call it beautiful.”

Jonah Lehrer, American journalist who writes on the topics of psychology, neuroscience, and the relationship between science and the humanities, Why Does Beauty Exist?, Wired science, July 18, 2011

See also:

Beauty is in the medial orbitofrontal cortex of the beholder, study finds
Denis Dutton: A Darwinian theory of beauty, TED, Lapidarium transcript
The Science of Art. A Neurological Theory of Aesthetic Experience
☞ Katherine Harmon, Brain on Beauty Shows the Same Pattern for Art and Music, Scientific American, July 7, 2011