Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Archive

Dec 11th, Tue

Researchers discover surprising complexities in the way the brain makes mental maps

Spatial location is closely connected to the formation of new memories. Until now, grid cells were thought to be part of a single unified map system. New findings from the Norwegian University of Science and Technology demonstrate that the grid system is in fact composed of a number of independent grid maps, each with unique properties. Each map displays a particular resolution (mesh size), and responds independently to changes in the environment. A system of several distinct grid maps (illustrated on left) can support a large number of unique combinatorial codes used to associate new memories formed with specific spatial information (illustrated on right).

Your brain has at least four different senses of location – and perhaps as many as 10. And each is different, according to new research from the Kavli Institute for Systems Neuroscience, at the Norwegian University of Science and Technology. (…)

The findings, published in the 6 December 2012 issue of Nature, show that rather than having just a single sense of location, the brain has a number of “modules” dedicated to self-location. Each module contains its own internal GPS-like mapping system that keeps track of movement, and has other characteristics that distinguish one module from another.

"We have at least four senses of location," says Edvard Moser, director of the Kavli Institute. "Each has its own scale for representing the external environment, ranging from very fine to very coarse. The different modules react differently to changes in the environment. Some may scale the brain’s inner map to the surroundings, others do not. And they operate independently of each other in several ways."

This is also the first time that researchers have been able to show that a part of the brain that does not directly respond to sensory input, called the association cortex, is organized into modules. The research was conducted using rats. (…)

Technical breakthroughs

A rat’s brain is the size of a grape, while the area that keeps track of the sense of location and memory is comparable in size to a small grape seed. This tiny area holds millions of nerve cells.

A research team of six people worked for more than four years to acquire extensive electrophysiological measurements in this seed-sized region of the brain. New measurement techniques and a technical breakthrough made it possible for Hanne Stensola and her colleagues to measure the activity in as many as 186 grid cells of the same rat brain. A grid cell is a specialized cell named for its characteristic of creating hexagonal grids in the brain’s mental map of its surroundings.

"We knew that the ‘grid maps’ in this area of the brain had resolutions covering different scales, but we did not know how independent the scales were of each other," Stensola said. "We then discovered that the maps were organized in four to five modules with different scales, and that each of these modules reacted slightly differently to changes in their environment. This independence can be used by the brain to create new combinations - many combinations - which is a very useful tool for memory formation.

After analysing the activity of nearly 1,000 grid cells, the researchers were able to conclude that the brain has not just one way of making an internal map of its location, but several - perhaps as many as 10 different senses of location.
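To make the point about combinatorial codes concrete, here is a back-of-the-envelope sketch (in Python) of how independent modules multiply coding capacity. The module counts and the number of distinguishable states per module are illustrative assumptions, not figures from the study.

```python
# Toy calculation of combinatorial coding capacity: m independent grid
# modules, each able to sit in one of k distinguishable states (e.g.
# spatial phases), can jointly express k**m distinct codes.
# k = 100 is an arbitrary illustrative assumption.
def combinatorial_capacity(n_modules: int, states_per_module: int = 100) -> int:
    return states_per_module ** n_modules

for m in (1, 4, 10):
    print(f"{m:2d} module(s) -> {combinatorial_capacity(m):.2e} codes")
```

Because the modules respond independently, even a small change in the environment can land the system in a new joint state, which is the combinatorial hook for memory formation that the researchers describe.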

Perhaps 10 different senses of location

The entorhinal cortex is a part of the neocortex that represents space by way of brain cells that have GPS-like properties. Each cell describes the environment as a hexagonal grid mesh, earning them the name ‘grid cells’. The panels show a bird’s-eye view of a rat’s recorded movements (grey trace) in a 2.2x2.2 m box. Each panel shows the activity of one grid cell (blue dots) with a particular map resolution as the animal moved through the environment. Credit: Kavli Institute for Systems Neuroscience, NTNU

Institute director Moser says that while researchers are able to state with confidence that there are at least four different location modules, and have seen clear evidence of a fifth, there may be as many as 10 different modules.

He says, however, that researchers need to conduct more measurements before they will have covered the entire grid-cell area. “At this point we have measured less than half of the area,” he says.

Aside from the time and challenges involved in making these kinds of measurements, there is another good reason why researchers have not yet completed this task. The lower region of the location-sensing area, the entorhinal cortex, has a resolution so coarse that it is virtually impossible to measure.

"The thinking is that the coordinate points for some of these maps are as much as ten metres apart," explains Moser. "To measure this we would need to have a lab that is quite a lot larger and we would need time to test activity over the entire area. We work with rats, which run around while we make measurements from their brain. Just think how long it would take to record the activity in a rat if it was running back and forth exploring every nook and cranny of a football field. So you can see that we have some challenges here in scaling up our experiments."

New way to organize

Part of what makes the discovery of the grid modules so special is that it completely changes our understanding of how the brain physically organizes abstract functions. Previously, researchers have shown that brain cells in sensory systems that are directly adjacent to each other tend to have the same response pattern. This is how they have been able to create detailed maps of which parts of the sensory brain do what.

The new research shows that a modular organization is also found in the highest parts of the cortex, far away from areas devoted to senses or motor outputs. But these maps are different in the sense that they overlap and infiltrate one another. It is thus not possible to locate the different modules with a microscope, because the cells that work together are intermingled with cells from other modules in the same area.

“The various components of the grid map are not organized side by side,” explains Moser. “The various components overlap. This is the first time a brain function has been shown to be organized in this way at separate scales. We have uncovered a new way for neural network function to be distributed.”

A map and a constant

The researchers were surprised, however, when they started calculating the difference between the scales. They may have discovered an ingenious mathematical coding system, along with a number, a constant. (Anyone who has read or seen “The Hitchhiker’s Guide to the Galaxy” may enjoy this.) The scale for each sense of location is actually 42% larger than the previous one.

“We may not be able to say with certainty that we have found a mathematical constant for the way the brain calculates the scales for each sense of location, but it’s very funny that we have to multiply each measurement by 1.42 to get the next one. That is approximately equal to the square root of the number two,” says Moser.
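To see what the reported factor implies, here is a minimal sketch of the geometric progression of module scales. Only the ~1.42 (roughly √2) ratio comes from the article; the 30 cm base scale and module count are assumed purely for illustration.

```python
import math

# Geometric progression of grid-module scales: each module's spacing is
# ~1.42x (approximately sqrt(2)) the previous one, per the article.
# The 30 cm base scale and five modules are illustrative assumptions.
RATIO = math.sqrt(2)  # ~1.414, close to the measured factor of ~1.42

def module_scales(base_cm: float = 30.0, n_modules: int = 5) -> list[float]:
    return [base_cm * RATIO ** i for i in range(n_modules)]

print([round(s, 1) for s in module_scales()])
# -> [30.0, 42.4, 60.0, 84.9, 120.0]
```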

Maps are genetically encoded

Moser thinks it is striking that the relationship between the various functional modules is so orderly. He believes this orderliness shows that the way the grid map is organized is genetically built in, and not primarily the result of experience and interaction with the environment.

So why has evolution equipped us with four or more senses of location?

Moser believes the ability to make a mental map of the environment arose very early in evolution. He explains that all species need to navigate, and that some types of memory may have arisen from brain systems that were actually developed for the brain’s sense of location.

“We see that the grid cells that are in each of the modules send signals to the same cells in the hippocampus, which is a very important component of memory,” explains Moser. “This is, in a way, the next step in the line of signals in the brain. In practice this means that the location cells send a different code into the hippocampus at the slightest change in the environment in the form of a new pattern of activity. So every tiny change results in a new combination of activity that can be used to encode a new memory, and, with input from the environment, becomes what we call memories.”

Researchers discover surprising complexities in the way the brain makes mental maps, Medical Xpress, Dec 5, 2012.

The article is a part of doctoral research conducted by Hanne and Tor Stensola, and has been funded through an Advanced Investigator Grant that Edvard Moser was awarded by the European Research Council (ERC).

See also:

☞ Hanne Stensola, Tor Stensola, Trygve Solstad, Kristian Frøland, May-Britt Moser & Edvard I. Moser, The entorhinal grid map is discretized, Nature, 5 Dec 2012.
Mind & brain tag on Lapidarium notes

Aug 11th, Sat

Is there any redundancy in human memory?


Are there two physical copies of the same memory in the brain, such that if some cells storing a particular memory die, the memory is still not lost?

Yes. “Memories are stored using a ‘distributed representation,’ which means that each memory is stored across thousands of synapses and neurons. And each neuron or synapse is involved in thousands of memories.

So if a single neuron fails, there are still 999 (for example) other neurons collaborating in the representation of that memory. With the failure of each neuron, thousands of memories get imperceptibly weaker, a property called “graceful degradation.”

Some people like to use the metaphor of a hologram. In a hologram, the 3D image is spread across the sheet of glass, and if the glass is broken, the full image can be seen in each separate shard. This is not exactly how memory works in the brain, but it is not a bad metaphor.

In some ways the brain is like a RAID array of disks, except instead of 3 hard disks, there are millions (or billions) of neurons sharing the representation of memories. (…)

Figure: Structural comparison of a RAID disk array and the type of hierarchical distributed memory network used by the brain.

Memory in the brain is resilient against catastrophic failure. Many memories get weaker with each neuron failure, but there is no point at which the failure of “one-too-many neurons” causes a memory to suddenly disappear. And the process of recalling a memory can strengthen it by recruiting more neurons and synapses to its representation. Also, memory in the brain is not perfect. Memory recall is more similar to reconstructing an earlier brain state than retrieving stored data. The recalled memory is never exactly the same as what was stored. And more typically, memories are not recalled but instead put to use. When you turn left at a familiar intersection, you are using knowledge, not recalling it.
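Here is a minimal sketch of the “graceful degradation” described above, under the simplifying assumption that recall strength just tracks the fraction of a memory's contributing neurons that survive; real distributed codes degrade even more gently than this linear toy.

```python
# Toy model of graceful degradation: a memory spread across n neurons
# weakens in proportion to how many of them fail, with no single point
# of failure at which it suddenly disappears. Purely illustrative.
def recall_strength(n_neurons: int, n_failed: int) -> float:
    """Fraction of the distributed representation still intact."""
    return max(0.0, (n_neurons - n_failed) / n_neurons)

for failed in (1, 50, 500, 999):
    print(f"{failed:3d} failed of 1000 -> strength {recall_strength(1000, failed):.3f}")
```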

One extreme example of the brain’s resiliency to catastrophic failure is stroke. A stroke is an event (often a burst blood vessel) that kills possibly hundreds of millions of brain cells within a very short time-frame (hours). Even the brain’s enormous redundancy cannot prevent memory loss under these circumstances. And yet the ability of stroke victims to recover language and skills shows that the brain can reorganize itself and somehow recover and quickly relearn knowledge that should have been destroyed.

In Alzheimer's disease, brain cells die at an accelerating rate. At some point, the reduction in brain cells overtakes the memory redundancy of the brain and memories do become permanently lost.

There is still a lot that is not known about how exactly memories are organized and represented in the brain. Neuron-level mechanisms have been figured out, and quite a lot of information has been gathered about the brain region specialized for coding and managing memory storage (the hippocampus). But the exact structure and coding scheme of memories has yet to be determined.”

Related on Quora:
Are forgotten life events still visible in the brain and if so how?
Why is forgetting different for driving and calculus?
How is it that a chip of a hologram still holds the entire image?
Neuroscience: Are there any connections (axons) in the brain that are redundant?


Paul King, Computational Neuroscientist, currently a visiting scholar at the Redwood Center for Theoretical Neuroscience at UC Berkeley, Is there any redundancy in human memory?, Quora, Aug 2012. (Illustration source)

See also:

Memory tag on Lapidarium notes

May 17th, Thu

The Self Illusion: How the Brain Creates Identity


'The Self'

"For the majority of us the self is a very compulsive experience. I happen to think it’s an illusion and certainly the neuroscience seems to support that contention. Simply from the logical positions that it’s very difficult to, without avoiding some degree of infinite regress, to say a starting point, the trail of thought, just the fractionation of the mind, when we see this happening in neurological conditions. The famous split-brain studies showing that actually we’re not integrated entities inside our head, rather we’re the output of a multitude of unconscious processes.

I happen to think the self is a narrative, and I use the self and the division that was drawn by William James, which is the “I” (the experience of the conscious self) and the “me” (which is personal identity: how you would describe yourself in terms of where you are from and everything that makes you up, your predilections and your wishes for the future). Both the “I”, who is sentient of the “me”, and the “me”, which is a story of who you are, I think are stories. They’re constructs and narratives. I mean that in the sense that a story is a reduction, or at least a coherent framework that has some kind of causal coherence.

When I go out and give public lectures I like to illustrate the weaknesses of the “I” by using visual illusions as the most common examples. But there are other kinds of illusions you can introduce which reveal to people that their conscious experience is really just a fraction of what’s going on; it is certainly not a true reflection of all the mechanisms that are generating it. Visual illusions make this very obvious. The thing about visual illusion effects is that even when they’re explained to you, you can’t help but see them, so that’s interesting. You can’t divorce yourself from the mechanisms that are creating the illusion and the mind that’s experiencing the illusion.

The sense of personal identity, this is where we’ve been doing experimental work showing the importance that we place upon episodic memories, autobiographical memories. In our duplication studies for example, children are quite willing to accept that you could copy a hamster with all its physical properties that you can’t necessarily see, but what you can’t copy very easily are the episodic memories that one hamster has had.

This actually resonates with the ideas of John Locke, the philosopher, who also argued that personal identity was really dependent on autobiographical or episodic memories: you are the sum of your memories, which, of course, is something that fractionates and fragments in various forms of dementia. As the person loses the capacity to retrieve memories, or these memories become distorted, then the identity of the person, the personality, can be changed, amongst other things. But certainly the memories are very important.

As we all know, memory is notoriously fallible. It’s not cast in stone. It’s not something that is stable. It’s constantly reshaping itself. So we have a multitude of unconscious processes generating this coherence of consciousness, which is the “I” experience, and the truth is that our memories are very selective and ultimately corruptible: we tend to remember things which fit with our general characterization of what our self is, and we tend to ignore all the information that is inconsistent. We have all these attribution biases. We have cognitive dissonance. Psychology keeps telling us that we have all these unconscious mechanisms that reframe information to fit with a coherent story, so both the “I” and the “me”, to all intents and purposes, are generated narratives.

The illusions I talk about often are this sense that there is an integrated individual, with a veridical notion of the past. And there’s nothing at the center. We’re the product, I would argue, of the emergent property of the multitude of these processes that generate us.

I use the word illusion as opposed to delusion. Delusion implies mental illness, to some extent, whereas we’re quite happy to accept that we’re experiencing illusions, and for me the word illusion really does mean that it’s an experience that is not what it seems. I’m not denying that there is an experience. We all have this experience, and what’s more, you can’t escape it easily. I think it’s more acceptable to call it an illusion, whereas there’s a derogatory nature to calling something a delusion. I suspect there’s probably a technical difference which has to do with mental illness, but no, I think we all, perfectly normally, experience this illusion.

Oliver Sacks has famously written about various case studies of patients who seem so bizarre: people who have various forms of perceptual anomalies, who mistake their wife for a hat, or patients who can’t help but copy everything they see. Because the self is so core to our normal behavior, I think that in many instances an understanding that the self is a constructive process would make a lot of sense, if it were something that clinicians were familiar with.

Neuroethics

In fact, it’s not only in clinical practice; I think it matters in a lot of things. I think neuroethics is a very interesting field. I’ve got another colleague, David Eagleman, who’s very interested in these ideas: culpability, responsibility. We premise our legal systems on the notion that there is an individual who is to be held accountable. Now, I’m not suggesting that we abandon that, and I’m not sure what you would put in its place, but I think we can all recognize that there are certain situations where we find it very difficult to attribute blame to someone. For example, famously, Charles Whitman, the Texan sniper: when they did the autopsy, they discovered a very sizeable tumor in a region of the brain which could very much have influenced his ability to control his rage. I’m not suggesting every mass murderer has an inoperable tumor in their brain, but with our increasing knowledge of how the brain operates, and our ability to understand it, it’s conceivable there will be more situations where lawyers will be looking to put the blame on some biological abnormality.

Where is the line to be drawn? I think that’s a very tough one to deal with. It’s a problem that’s not going to go away. It’s something that we’re going to continually face as we start to learn more about the genetics of aggression.

There’s a lot of interest in this thing called the warrior gene. To what extent is this a gene which predisposes you to violence? Or do you need the interaction between the gene and an abusive childhood in order to get this kind of profile? So it’s not just clinicians; it’s just about every realm of human activity where you posit the existence of a self, and individuals, and responsibility. Then it will reframe the way you think about things. Consider the way that we heap blame and praise: the flip side of blaming people is that we praise individuals, but it could be, in a sense, a multitude of factors that have led them to be successful. I think it’s a pervasive notion. Whether or not we actually change the way we do anything, I’m not so sure, because I think it would be really hard to live our lives dealing with non-individuals, trying to deal with the multitudes and the histories that everyone brings to the table. There’s a good reason why we have this experience of the self. It’s a very succinct and economical way of interacting with each other. We deal with individuals. We fall in love with individuals, not with multitudes of past experiences and aspects of hidden agendas; we just pick them out. (…)

The objects are part of the extended sense of self

I keep tying this back to my issues about why certain objects are overvalued, and I happen to believe, like James again, that objects are part of the extended sense of self. We surround ourselves with objects. We place a lot of value on objects that we think are representative of our self.  (…)

We’re the only species on this planet that invests so much time and value in objects, and this has been something that has been with us for a very, very long time.

Think of some of the early artifacts. The difficulty of making these artifacts, the time invested in them, means that this mattered from a very early point in our civilization, or before civilization; I think the earliest pieces are probably about 90,000 years old. There are certainly older things that are tools, but pieces of artwork go back about 90,000 years. So it’s been with us a long time. And yes, some of them are obviously sacred objects, made for religious purposes and so forth. But outside of that, there’s still this sense of having materials or things that we value, and that intrigues me in so many ways. And I don’t think it’s necessarily universal. It’s been around a lot, but the endowment effect, for example, is not found everywhere. There’s some intriguing work coming out of Africa.

The endowment effect is this rather intriguing idea that we will spontaneously overvalue an object as soon as we believe it’s in our possession, we don’t actually have to have it physically, just bidding on something, as soon as you make your connection to an object, then you value it more, you’ll actually remember more about it, you’ll remember objects which you think are in your possession in comparison to someone else. It gets a whole sense of attribution and value associated with it, which is one of the reasons why people never get the asking price for the things that they’re trying to sell, they always think their objects are worth more than other people are willing to pay for them.

The first experimental demonstration, by Richard Thaler and Danny Kahneman in the early days of behavioral economics, was that if you just give people - students - coffee cups, and then you ask them to sell them, they always ask more than what someone is willing to pay. It turns out it’s not just coffee cups; it’s wine, it’s chocolate, it’s anything, basically. There’s been quite a bit of work done on the endowment effect now. As I say, it’s been looked at in different species, along with the brain mechanisms of having to sell something at a lower price: like loss aversion, it’s experienced as quite painful and triggers the same pain centers if you think you’re going to lose out on a deal.

What is it about objects that gives us this self-evaluated sense? Well, I think James spoke of this; again, William James commented on the way that we use objects to extend our self. Russell Belk is a marketing psychologist who has also talked about the extended self in terms of objects. As I say, this is something that I think marketers know, in that they create certain quality brands that are perceived to signal to others how good your social status is.

It’s something in us, but it may not be universal, because there are tribes - there are some recent reports from nomadic tribes in central Africa - who don’t seem to have this sense of ownership. It might be a reflection more of the fact that a lot of this work has been done in the West, where we’re very individualistic, and of course individualism almost creates a lot of endowment ideas and certainly supports the endowment effect and the materialism that we see. But this is an area I’d like to do more work on, because we have not found any evidence of the endowment effect in children below five or six years of age. I’m interested: is this something that just emerges spontaneously? I suspect not. I suspect this is something that culture is definitely shaping. That’s my hunch, so that’s an empirical question I need to pick apart.

The irrational superstitious behaviors

Another line of research I’ve been working on in the past five years… this was a little bit like putting the cart before the horse. I put forward an idea that wasn’t entirely original; it was a combination of ideas of others, most notably Pascal Boyer. Paul Bloom, to some extent, had been thinking something similar. A bunch of us were interested in why religion was around. I didn’t want to focus specifically on religion. I wanted to get to the more general point about belief, because it was my hunch that a lot of atheists, or self-stated atheists or agnostics, still nevertheless entertained beliefs which were pretty irrational. I wasn’t meaning irrational in a behavioral-economics type of way. I meant irrational in that there were these implicit views that would violate the natural laws as we think about them. Violations of the natural laws I see as being supernatural; that’s what makes them supernatural. I felt that this was an area worth looking at. These questions had been looked at 50, 60 years ago, very much in the behaviorist, associationist tradition.

B. F. Skinner famously wrote a paper on the superstitious behavior of pigeons, in which he argued that if you simply set up a reinforcement schedule at random intervals, pigeons will adopt typical patterns of behavior that they think are somehow related to the reward, and in this way you could shape irrational superstitious behaviors. Now, that work has turned out to be a bit dubious, and I’m not sure it has stood the test of time. But in terms of people’s rituals and routines, it’s quite clear, and I know them in myself: there are these things that we do which are familiar, and we get a little bit irritated if we don’t get to do them. So most of us do entertain some degree of superstitious behavior.

At the time there was a lot of interest in religion and a lot of hoo-ha about The God Delusion, and I felt that maybe we needed to redress this idea that it’s all to do with indoctrination, because I couldn’t believe the whole edifice of this kind of belief system was purely indoctrination. I’m not saying there’s no indoctrination, and clearly religions are culturally transmitted; you’re not born to be Jewish or born to be Christian. But what I think religions do is capitalize on a lot of inclinations that children have. Then I entered into a series of work, and my particular interest was this idea of essentialism, sacred objects and moral contamination.

We took a lot of the work that Paul Rozin had done, talking about things like killers’ cardigans, and we started to see if there were any empirical measures of transfer. For example, would you find yourself wanting to wash your hands more? Would you find priming effects for words related to good and evil, based on whether you had touched the object or not? For me there had to be this issue of physical contact. It struck me that this was why it wasn’t a pure association mechanism; it was actually something to do with the belief, a naïve belief that there is some biological entity by which moral contamination can somehow transfer.

We started to look not at children now but at adults, because doing this sort of work with children is very difficult and probably somewhat controversial. But the whole area of research is premised on the idea that there are intuitive ways of seeing the world. Sometimes this is referred to as System One and System Two, or automatic and controlled processes. It reappears in a variety of psychological contexts. I just think about it as these unconscious, rapid systems which are triggered automatically. I think their origins are in childhood. Whilst you can educate people with a kind of slower System Two, if you like, you never eradicate the intuitive ways of seeing the world, because they were never taught in the first place. They’re always there.

I suppose if you ask me whether there is any theory I hold that I haven’t yet proven, it’s this: I don’t think you ever throw away any belief system or any ideas that have been derived through these unconscious intuitive processes. You can supersede them, you can overwrite them, but they never go away, and they will reemerge under the right contexts. If you put people through stressful situations or overload them, you can see the reemergence of these kinds of ways of thinking. The empirical evidence seems to be supporting that. They’ve got wrinkles in their brains; they’re never going to go away. You can try to override them, but they’re always there, and they will reappear under the right circumstances, which is why you see the reemergence under stress of a lot of irrational thinking.

For example, teleological explanations, the idea that everything is made for a purpose or a function, are a natural way to see the world. This is Deb Kelemen's work. You will find that people who consider themselves fairly rational and well educated will, nevertheless, default back to teleological explanations if you put them under a stressful, timed kind of situation. So it’s a way of seeing the world that is never eradicated. I think that’s going to be a general principle, in the same way as with reflexes. A reflex is an unlearned behavioral response. You’re born with a whole set of reflexes. Many of them disappear, but they never entirely go away; they typically become reintegrated into more complex behaviors, but if someone goes into a coma, you can see the reflexes reemerging.

What we think is going on is that in the course of development, these very automatic behaviors become controlled by top-down processes from the cortex, all these higher order systems which are regulating and controlling and suppressing, trying to keep these things under wraps. But when the cortex is put out of action through a coma or head injury, then you can see many of these things reemerging again. I don’t see why there should be any point of departure from a motor system to a perceptual system, to a cognitive system, because they’re all basically patterns of neural firing in the brain, and so I don’t see why it can’t be the case that if concepts are derived through these processes, they could remain dormant and latent as well.

The hierarchy of representations in the brain

One of the things that has been fascinating me is the extent to which we can talk about the hierarchy of representations in the brain. Representations are literally re-presentations. That’s the language of the brain, that’s the mode of thinking in the brain: representation. It’s more than likely, in fact most likely, that there is already representation wired into the brain. If you think about the sensory systems, the array of the eye, for example, is already laid out in a topographical representation of the external world to which it has not yet been exposed. What happens is that this general layout becomes fine-tuned. We know from a lot of work that the arrangements of the sensory mechanisms do have a spatial arrangement, so that’s not learned in any sense. But these can become changed through experience, which is why the early work of Hubel and Wiesel on the effects of abnormal environments showed that the general pattern could be distorted, but the pattern was already in place to begin with.

When you start to move beyond sensory into perceptual systems and then into cognitive systems, that’s when you get into theoretical arguments and the gloves come off. There are some people who argue that it has to be the case that there are certain primitives built into the conceptual systems. I’m talking about the work of, most notably, Elizabeth Spelke.  

There certainly seems to be a lot of perceptual ability in newborns in terms of constancies, noticing invariant aspects of the physical world. I don’t think I have a problem with any of that, but I suppose this is where the debates go. (…)

Shame in the East is something that is at least recognized as a major factor of identity

I’ve been to Japan a couple of times. I’m not an expert in the cultural variation of cognition, but clearly shame, or the avoidance of shame, is a major factor in motivation in eastern cultures. I think it reflects the sense of self-worth and value in eastern culture. It is very much a collective notion: they place a lot of emphasis on not letting the team down. I believe they even have a special word for that aspect or experience of shame that we don’t have. That doesn’t mean that it’s a concept we can never entertain, but it does suggest that in the East this is something that is at least recognized as a major factor of identity.

Children don’t necessarily feel shame. I don’t think they’ve got a sense of self until well into their second year. They have the “I”; they have the notion of being, of having control. They will experience the willingness to move their arms, and I’m sure they make that connection very quickly, so they have this sense of self in that “I” notion. But I don’t think they’ve got personal identity, and that’s one of the reasons that they don’t have much, and very few of us have much, memory of our earliest times: our episodic memories are very fragmented, sensory events. But from about two to three years on they start to get a sense of who they are. Knowing who you are means becoming integrated into your social environment, and part of becoming integrated into your social environment means acquiring a sense of shame. Below two or three years of age, I don’t think many children have a notion of shame. But from then on, as they have to become members of the social tribe, they have to be made aware of the consequences of being antisocial or doing things that are not expected of them. I think that’s probably late in the acquisition.”

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre at the University of Bristol, Essentialism, Edge, May 17, 2012. (Illustration source)

The Illusion of the Self

"For me, an illusion is a subjective experience that is not what it seems. Illusions are experiences in the mind, but they are not out there in nature. Rather, they are events generated by the brain. Most of us have an experience of a self. I certainly have one, and I do not doubt that others do as well – an autonomous individual with a coherent identity and sense of free will. But that experience is an illusion – it does not exist independently of the person having the experience, and it is certainly not what it seems. That’s not to say that the illusion is pointless. Experiencing a self illusion may have tangible functional benefits in the way we think and act, but that does not mean that it exists as an entity. (…)

For most of us, the sense of our self is as an integrated individual inhabiting a body. I think it is helpful to distinguish between the two ways of thinking about the self that William James talked about. There is conscious awareness of the present moment that he called the “I,” but there is also a self that reflects upon who we are in terms of our history, our current activities and our future plans. James called this aspect of the self, “me” which most of us would recognize as our personal identity—who we think we are. However, I think that both the “I” and the “me” are actually ever-changing narratives generated by our brain to provide a coherent framework to organize the output of all the factors that contribute to our thoughts and behaviors.

I think it helps to compare the experience of self to subjective contours – illusions such as the Kanizsa pattern where you see an invisible shape that is really defined entirely by the surrounding context. People understand that it is a trick of the mind but what they may not appreciate is that the brain is actually generating the neural activation as if the illusory shape was really there. In other words, the brain is hallucinating the experience. There are now many studies revealing that illusions generate brain activity as if they existed. They are not real but the brain treats them as if they were.

Now, that line of reasoning could be applied to all perception, except that not all perception is an illusion. There are real shapes out there in the world and other physical regularities that generate reliable states in the minds of others. The reason the status of reality cannot be applied to the self is that it does not exist independently of the brain that is having the experience. It may appear to have a regularity and stability that make it seem real, but those properties alone do not make it so.

Similar ideas about the self can be found in Buddhism and the writings of Hume and Spinoza. The difference is that there is now good psychological and physiological evidence to support these ideas that I cover in the book. (…)

There are many cognitive scientists who would doubt that the experience of I is constructed from a multitude of unconscious mechanisms and processes. Me is similarly constructed, though we may be more aware of the events that have shaped it over our lifetime. But neither is cast in stone and both are open to all manner of reinterpretation. As artists, illusionists, movie makers, and more recently experimental psychologists have repeatedly shown, conscious experience is highly manipulatable and context dependent. Our memories are also largely abstracted reinterpretations of events – we all hold distorted memories of past experiences. (…)

Developmental processes shape our brains from infancy onwards to create our identities, and systematic biases distort the content of our identity to form a consistent narrative. I believe much of that distortion and bias is socially relevant in terms of how we would like to be seen by others. We all think we would act and behave in a certain way, but the reality is that we are often mistaken. (…)

Q: What role do you think childhood plays in shaping the self?

Just about everything we value in life has something to do with other people. Much of that influence occurs early in our development, which is one reason why human childhoods are so prolonged in comparison to other species. We invest so much effort and time into our children to pass on as much knowledge and experience as possible. It is worth noting that other species that have long periods of rearing also tend to be more social and intelligent in terms of flexible, adaptive behaviors. Babies are born social from the start but they develop their sense of self throughout childhood as they move to become independent adults that eventually reproduce. I would contend that the self continues to develop throughout a lifetime, especially as our roles change to accommodate others. (…)

The role of social networking in the way we portray our self

There are some interesting phenomena emerging. There is evidence of homophily – the grouping together of individuals who share a common perspective, which is not too surprising. More interesting is evidence of polarization. Rather than opening up and exposing us to different perspectives, social networking on the Internet can foster more radicalization as we seek out others who share our positions. The more others validate our opinions, the more extreme we become. I don’t think we need to be fearful, and I am less concerned than the prophets of doom who predict the downfall of human civilization, but I believe it is true that the way we create the narrative of the self is changing.

Q: If the self is an illusion, what is your position on free will?

Free will is certainly a major component of the self illusion, but it is not synonymous. Both are illusions, but the self illusion extends beyond the issues of choice and culpability to other realms of human experience. From what I understand, I think you and I share the same basic position about the logical impossibility of free will. I also think that compatibilism (that determinism and free will can co-exist) is incoherent. We certainly have more choices today to do things that are not in accord with our biology, and it may be true that we should talk about free will in a meaningful way, as Dennett has argued, but that seems irrelevant to the central problem of positing an entity that can make choices independently of the multitude of factors that control a decision. To me, the problem of free will is a logical impasse – we cannot choose the factors that ultimately influence what we do and think. That does not mean that we throw away the social, moral, and legal rulebooks, but we need to be vigilant about the way our attitudes about individuals will be challenged as we come to understand the factors (both material and psychological) that control our behaviors when it comes to attributing praise and blame. I believe this is somewhat akin to your position. (…)

The self illusion explains so many aspects of human behavior as well as our attitudes toward others. When we judge others, we consider them responsible for their actions. But was Mary Bale, the bank worker from Coventry who was caught on video dropping a cat into a garbage can, being true to her self? Was Mel Gibson, in his drunken anti-Semitic rant, being himself or under the influence of something else? What motivated Congressman Weiner to text naked pictures of himself to women he did not know? In the book, I consider some of the extremes of human behavior, from mass murderers with brain tumors that may have made them kill, to rising politicians who self-destruct. By rejecting the notion of a core self and considering how we are a multitude of competing urges and impulses, I think it is easier to understand why we suddenly go off the rails. It explains why we act, often unconsciously, in a way that is inconsistent with our self-image, or the image of our self as we believe others see us.

That said, the self illusion is probably an inescapable experience we need for interacting with others and the world, and indeed we cannot readily abandon or ignore its influence, but we should be skeptical that each of us is the coherent, integrated entity we assume we are.

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre at the University of Bristol, interviewed by Sam Harris, The Illusion of the Self, Sam Harris blog, May 22, 2012.

See also:

Existence: What is the self?, Lapidarium notes
Paul King on what is the best explanation for identity
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking

Apr 29th, Sun

The time machine in our mind. The imagistic mental machinery that allows us to travel through time


“Our ability to close our eyes and imagine the pleasures of Super Bowl Sunday or remember the excesses of New Year’s Eve is a fairly recent evolutionary development, and our talent for doing this is unparalleled in the animal kingdom. We are a race of time travelers, unfettered by chronology and capable of visiting the future or revisiting the past whenever we wish. If our neural time machines are damaged by illness, age or accident, we may become trapped in the present. (…)

Why did evolution design our brains to go wandering in time? Perhaps it’s because an experience is a terrible thing to waste. Moving around in the world exposes organisms to danger, so as a rule they should have as few experiences as possible and learn as much from each as they can. (…)

Time travel allows us to pay for an experience once and then have it again and again at no additional charge, learning new lessons with each repetition. When we are busy having experiences—herding children, signing checks, battling traffic—the dark network is silent, but as soon as those experiences are over, the network is awakened, and we begin moving across the landscape of our history to see what we can learn—for free.

Animals learn by trial and error, and the smarter they are, the fewer trials they need. Traveling backward buys us many trials for the price of one, but traveling forward allows us to dispense with trials entirely. Just as pilots practice flying in flight simulators, the rest of us practice living in life simulators, and our ability to simulate future courses of action and preview their consequences enables us to learn from mistakes without making them.

We don’t need to bake a liver cupcake to find out that it is a stunningly bad idea; simply imagining it is punishment enough. The same is true for insulting the boss and misplacing the children. We may not heed the warnings that prospection provides, but at least we aren’t surprised when we wake up with a hangover or when our waists and our inseams swap sizes. (…)

Perhaps the most startling fact about the dark network isn’t what it does but how often it does it. Neuroscientists refer to it as the brain’s default mode, which is to say that we spend more of our time away from the present than in it. People typically overestimate how often they are in the moment because they rarely take notice when they take leave. It is only when the environment demands our attention—a dog barks, a child cries, a telephone rings—that our mental time machines switch themselves off and deposit us with a bump in the here and now. We stay just long enough to take a message and then we slip off again to the land of Elsewhen, our dark networks awash in light.”

Daniel Gilbert, Professor of Psychology at Harvard University, Essay: The Brain: Time Travel in the Brain, TIME, Jan. 29, 2007. (Illustration for TIME by Jeffery Fischer).

Kurt Stocker: The time machine in our mind (2012)


Abstract:

"This article provides the first comprehensive conceptual account for the imagistic mental machinery that allows us to travel through time—for the time machine in our mind. It is argued that language reveals this imagistic machine and how we use it. Findings from a range of cognitive fields are theoretically unified and a recent proposal about spatialized mental time travel is elaborated on. The following novel distinctions are offered: external vs. internal viewing of time; “watching” time vs. projective “travel” through time; optional vs. obligatory mental time travel; mental time travel into anteriority or posteriority vs. mental time travel into the past or future; single mental time travel vs. nested dual mental time travel; mental time travel in episodic memory vs. mental time travel in semantic memory; and “seeing” vs. “sensing” mental imagery. Theoretical, empirical, and applied implications are discussed.”

"The theoretical strategy I adopt is to use language as an entree to a conceptual level that seems deeper than language itself (Pinker, 2007; Talmy, 2000). The logic of this strategy is in accordance with recent findings that many conceptualizations observed in language have also been found to exist in mental representations that are more basic than language itself. (…)

It is proposed that this strategy helps to uncover an imagistic mental machinery that allows us to travel through time—that this strategy helps us to uncover the time machine in our mind.

A central term used in this article is “the imagery structuring of time.” By this I refer to an invisible spatial scaffolding in our mental imagery across which temporal material can be splayed, the existence of which will be proposed in this article. At times it will be quite natural to assume that a space-to-time mapping in the sense of conceptual metaphor theory is involved in the structuring of this invisible scaffolding. (…)

It is thus for the present investigation more coherent to assume that mental time is basically constructed out of “spatialized” mental imagery—“spatialized” is another central term that I use in this article. I use it in the sense that it is neutral as to whether some of the imagery might be transferred via space-to-time mappings or whether some of the imagery might relate to space-to-time mappings only in an etymological sense. An example of temporal constructions that are readily characterized in terms of spatialized temporal imagery structuring are the conceptualizations underlying the use of before and after, conceptualizations that are often treated as having autonomous temporal status and as relating only etymologically to space.

The current investigation can refine this view somewhat, by postulating that spatialized temporal structures still play a very vital role in the imagery structuring underlying before and after. (…)

The theoretical strategy, to use linguistic expressions about time as an entree to conceptual structures about time that seem deeper than language itself, has been applied quite fruitfully, since it has allowed for the development of a rather comprehensive and precise conceptual account of the time machine in our mind. The theory is not an ad-hoc theory, since linguistic conceptualizations cannot be interpreted in a totally arbitrary way—for example language does not allow us to assume that a sentence such as I shopped at the store before I went home means that first the going home took place and then the shopping. In this respect the theory is to some degree already a data-guided theory, since linguistic expressions are data. However, the proposal of the theory that language has helped us to uncover a specific system of spatialized imagery structuring of time can only be evaluated by carrying out corresponding psychological (cognitive and neurocognitive) experiments and some ideas for such experiments have been presented. Since the time machine in our mind is a deeply fascinating apparatus, I am confident that theoretical and empirical investigations will continue to explore it.”

— Kurt Stocker, The time machine in our mind (pdf), Institute of Cognitive and Brain Sciences, University of California, Berkeley, CA, USA, 2012

See also:

☞ T. Suddendorf, D. Rose Addis and M C. Corballis, Mental time travel and the shaping of the human mind (pdf), The Royal Society, 2009.

Abstract: “Episodic memory, enabling conscious recollection of past episodes, can be distinguished from semantic memory, which stores enduring facts about the world. Episodic memory shares a core neural network with the simulation of future episodes, enabling mental time travel into both the past and the future. The notion that there might be something distinctly human about mental time travel has provoked ingenious attempts to demonstrate episodic memory or future simulation in nonhuman animals, but we argue that they have not yet established a capacity comparable to the human faculty. The evolution of the capacity to simulate possible future events, based on episodic memory, enhanced fitness by enabling action in preparation of different possible scenarios that increased present or future survival and reproduction chances. Human language may have evolved in the first instance for the sharing of past and planned future events, and, indeed, fictional ones, further enhancing fitness in social settings.”

☞ George Lakoff, Mark Johnson, Conceptual Metaphor in Everyday Language (pdf), The Journal of Philosophy, Vol 77, 1980.
Our sense of time is deeply entangled with memory
Time tag on Lapidarium notes

Jan 14th, Sat

What are memories made of?


“There appears to be no single memory store, but instead a diverse taxonomy of memory systems, each with its own special circuitry evolved to package and retrieve that type of memory. Memories are not static entities; over time they shift and migrate between different territories of the brain.

At the top of the taxonomical tree, a split occurs between declarative and non-declarative memories. Declarative memories are those you can state as true or false, such as remembering whether you rode a bicycle to work. Non-declarative memories are those that cannot be described as true or false, such as knowing how to ride a bicycle. A central hub in the declarative memory system is a brain region called the hippocampus. This undulating, twisted structure gets its name from its resemblance to a sea horse. Destruction of the hippocampus, through injury, neurosurgery or the ravages of Alzheimer’s disease, can result in an amnesia so severe that no events experienced after the damage can be remembered. (…)

A popular view is that during sleep your hippocampus “broadcasts” its recently captured memories to the neocortex, which updates your long-term store of past experience and knowledge. Eventually the neocortex is sufficient to support recall without relying on the hippocampus. However, there is evidence that if you need to vividly picture a scene in your mind, this appears to require the hippocampus, no matter how old the memory. We have recently discovered that the hippocampus is not only needed to reimagine the past, but also to imagine the future.

Pattern completion

Studying patients has taught us where memories might be stored, but not what physically constitutes a memory. The answer lies in the multitude of tiny modifiable connections between neuronal cells, the information-processing units of the brain. These cells, with their wispy tree-like protrusions, hang like stars in miniature galaxies and pulse with electrical charge. Thus, your memories are patterns inscribed in the connections between the millions of neurons in your brain. Each memory has its unique pattern of activity, logged in the vast cellular network every time a memory is formed.

It is thought that during recall of past events the original activity pattern in the hippocampus is re-established via a process that is known as “pattern completion”. During this process, the initial activity of the cells is incoherent, but via repeated reactivation the activity pattern is pieced together until the original pattern is complete. Memory retention is helped by the presence of two important molecules in our brain: dopamine and acetylcholine. Both help the neurons improve their ability to lay down memories in their connections. Sometimes, however, the system fails, leaving us unable to bring elements of the past to mind.
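The “pattern completion” idea has a classic computational analogue in Hopfield-style attractor networks, where Hebbian weights pull a degraded cue back toward the stored activity pattern. The sketch below is that textbook model, not the hippocampal circuit itself; the network size and corruption level are arbitrary illustrative choices.

```python
import numpy as np

# Hopfield-style sketch of pattern completion: a stored binary activity
# pattern is re-established from a corrupted cue by iterating units
# through Hebbian (outer-product) weights. Illustrative toy only.
rng = np.random.default_rng(0)

n = 100
pattern = rng.choice([-1, 1], size=n)          # the "stored memory"
W = np.outer(pattern, pattern).astype(float)   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                       # no self-connections

cue = pattern.copy()
flipped = rng.choice(n, size=30, replace=False)
cue[flipped] *= -1                             # corrupt 30% of the cue

state = cue.copy()
for _ in range(5):                             # synchronous updates
    state = np.where(W @ state >= 0, 1, -1)

print("match before completion:", np.mean(cue == pattern))
print("match after completion: ", np.mean(state == pattern))
```

With a single stored pattern the dynamics snap back to it within an update or two, mirroring the description above of initially incoherent activity being pieced together until the original pattern is complete.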

Of all the things we need to remember, one of the most essential is where we are. Becoming lost is debilitating and potentially terrifying. Within the hippocampus, and neighbouring brain structures, neurons exist that allow us to map space and find our way through it. “Place cells” provide an internal map of space; “head-direction cells” signal the direction we are facing, similar to an internal compass; and “grid cells” chart out space in a manner akin to latitude and longitude.

For licensed London taxi drivers, it appears that navigating the labyrinth of London’s streets on a daily basis causes the density of grey matter in their posterior hippocampus to increase. Thus, the physical structure of your brain is malleable, depending on what you learn.

With impressive technical advances such as optogenetics, in which light beams excite or silence targeted groups of neurons, scientists are beginning to control memories at an unprecedented level.”

Hugo Spiers, neuroscientist and lecturer at the Institute of Behavioural Neuroscience, University College London, What are memories made of?, The Guardian, Jan 14, 2012 (Illustration: Polly Becker)

How and why memories change

"Since the time of the ancient Greeks, people have imagined memories to be a stable form of information that persists reliably. The metaphors for this persistence have changed over time—Plato compared our recollections to impressions in a wax tablet, and the idea of a biological hard drive is popular today—but the basic model has not. Once a memory is formed, we assume that it will stay the same. This, in fact, is why we trust our recollections. They feel like indelible portraits of the past.

None of this is true. In the past decade, scientists have come to realize that our memories are not inert packets of data and they don’t remain constant. Even though every memory feels like an honest representation, that sense of authenticity is the biggest lie of all. (…)

New research is showing that every time we recall an event, the structure of that memory in the brain is altered in light of the present moment, warped by our current feelings and knowledge. (…)

This new model of memory isn’t just a theory—neuroscientists actually have a molecular explanation of how and why memories change. In fact, their definition of memory has broadened to encompass not only the cliché cinematic scenes from childhood but also the persisting mental loops of illnesses like PTSD and addiction—and even pain disorders like neuropathy. Unlike most brain research, the field of memory has actually developed simpler explanations. Whenever the brain wants to retain something, it relies on just a handful of chemicals. Even more startling, an equally small family of compounds could turn out to be a universal eraser of history, a pill that we could take whenever we wanted to forget anything. (…)

How memory is formed

Every memory begins as a changed set of connections among cells in the brain. If you happen to remember this moment—the content of this sentence—it’s because a network of neurons has been altered, woven more tightly together within a vast electrical fabric. This linkage is literal: For a memory to exist, these scattered cells must become more sensitive to the activity of the others, so that if one cell fires, the rest of the circuit lights up as well.

Scientists refer to this process as long-term potentiation, and it involves an intricate cascade of gene activations and protein synthesis that makes it easier for these neurons to pass along their electrical excitement. Sometimes this requires the addition of new receptors at the dendritic end of a neuron, or an increase in the release of the chemical neurotransmitters that nerve cells use to communicate. Neurons will actually sprout new ion channels along their length, allowing them to generate more voltage. Collectively this creation of long-term potentiation is called the consolidation phase, when the circuit of cells representing a memory is first linked together. Regardless of the molecular details, it’s clear that even minor memories require major work. The past has to be wired into your hardware. (…)
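
The Hebbian logic behind long-term potentiation (“cells that fire together wire together”) can be caricatured in a few lines of Python. Every constant below is made up for illustration; none comes from the article.

    # Cartoon of long-term potentiation as a Hebbian weight change:
    # repeated co-activation of a presynaptic and a postsynaptic cell
    # strengthens the synapse, so the same input later drives the
    # postsynaptic cell above its firing threshold.
    eta = 0.2          # effective "learning rate" of the LTP cascade
    threshold = 0.9    # postsynaptic firing threshold
    w = 0.4            # initial synaptic strength
    pre = 1.0          # presynaptic activity

    print("fires before learning:", w * pre >= threshold)   # False

    for _ in range(10):                   # repeated paired activity
        post = 1.0                        # pairing keeps the target cell active
        w += eta * pre * post * (1 - w)   # saturating Hebbian update

    print(f"potentiated weight: {w:.2f}")                    # ~0.94
    print("fires after learning:", w * pre >= threshold)     # True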

What happens after a memory is formed, when we attempt to access it?

The secret was the timing: If new proteins couldn’t be created during the act of remembering, then the original memory ceased to exist. The erasure was also exceedingly specific. (…) They forgot only what they’d been forced to remember while under the influence of the protein inhibitor.

The disappearance of the fear memory suggested that every time we think about the past we are delicately transforming its cellular representation in the brain, changing its underlying neural circuitry. It was a stunning discovery: Memories are not formed and then pristinely maintained, as neuroscientists thought; they are formed and then rebuilt every time they’re accessed. “The brain isn’t interested in having a perfect set of memories about the past,” LeDoux says. “Instead, memory comes with a natural updating mechanism, which is how we make sure that the information taking up valuable space inside our head is still useful. That might make our memories less accurate, but it probably also makes them more relevant to the future.” (…)

[Donald] Lewis had discovered what came to be called memory reconsolidation, the brain’s practice of re-creating memories over and over again. (…)

The science of reconsolidation suggests that the memory is less stable and trustworthy than it appears. Whenever I remember the party, I re-create the memory and alter its map of neural connections. Some details are reinforced—my current hunger makes me focus on the ice cream—while others get erased, like the face of a friend whose name I can no longer conjure. The memory is less like a movie, a permanent emulsion of chemicals on celluloid, and more like a play—subtly different each time it’s performed. In my brain, a network of cells is constantly being reconsolidated, rewritten, remade. That two-letter prefix changes everything. (…)

Once you start questioning the reality of memory, things fall apart pretty quickly. So many of our assumptions about the human mind—what it is, why it breaks, and how it can be healed—are rooted in a mistaken belief about how experience is stored in the brain. (According to a recent survey, 63 percent of Americans believe that human memory “works like a video camera, accurately recording the events we see and hear so that we can review and inspect them later.”) We want the past to persist, because the past gives us permanence. It tells us who we are and where we belong. But what if your most cherished recollections are also the most ephemeral thing in your head? (…)

Reconsolidation provides a mechanistic explanation for these errors. It’s why eyewitness testimony shouldn’t be trusted (even though it’s central to our justice system), why every memoir should be classified as fiction, and why it’s so disturbingly easy to implant false recollections. (The psychologist Elizabeth Loftus has repeatedly demonstrated that nearly a third of subjects can be tricked into claiming a made-up memory as their own. It takes only a single exposure to a new fiction for it to be reconsolidated as fact.) (…)

When we experience a traumatic event, it gets remembered in two separate ways. The first memory is the event itself, that cinematic scene we can replay at will. The second memory, however, consists entirely of the emotion, the negative feelings triggered by what happened. Every memory is actually kept in many different parts of the brain. Memories of negative emotions, for instance, are stored in the amygdala, an almond-shaped area in the center of the brain. (Patients who have suffered damage to the amygdala are incapable of remembering fear.) By contrast, all the relevant details that comprise the scene are kept in various sensory areas—visual elements in the visual cortex, auditory elements in the auditory cortex, and so on. That filing system means that different aspects can be influenced independently by reconsolidation.

The larger lesson is that because our memories are formed by the act of remembering them, controlling the conditions under which they are recalled can actually change their content. (…)

The chemistry of the brain is in constant flux, with the typical neural protein lasting anywhere from two weeks to a few months before it breaks down or gets reabsorbed. How then do some of our memories seem to last forever? It’s as if they are sturdier than the mind itself. Scientists have narrowed down the list of molecules that seem essential to the creation of long-term memory—sea slugs and mice without these compounds are total amnesiacs—but until recently nobody knew how they worked. (…)

A form of protein kinase C called PKMzeta hangs around synapses, the junctions where neurons connect, for an unusually long time. (…) What does PKMzeta do? The molecule’s crucial trick is that it increases the density of a particular type of sensor called an AMPA receptor on the outside of a neuron. It’s an ion channel, a gateway to the interior of a cell that, when opened, makes it easier for adjacent cells to excite one another. (While neurons are normally shy strangers, struggling to interact, PKMzeta turns them into intimate friends, happy to exchange all sorts of incidental information.) This process requires constant upkeep—every long-term memory is always on the verge of vanishing. As a result, even a brief interruption of PKMzeta activity can dismantle the function of a steadfast circuit. (…)

Because of the compartmentalization of memory in the brain—the storage of different aspects of a memory in different areas—the careful application of PKMzeta synthesis inhibitors and other chemicals that interfere with reconsolidation should allow scientists to selectively delete aspects of a memory. (…)

The astonishing power of PKMzeta forces us to redefine human memory. While we typically think of memories as those facts and events from the past that stick in the brain, Sacktor’s research suggests that memory is actually much bigger and stranger than that. (…)

Being able to control memory doesn’t simply give us admin access to our brains. It gives us the power to shape nearly every aspect of our lives. There’s something terrifying about this. Long ago, humans accepted the uncontrollable nature of memory; we can’t choose what to remember or forget. But now it appears that we’ll soon gain the ability to alter our sense of the past. (…)

The fact is we already tweak our memories—we just do it badly. Reconsolidation constantly alters our recollections, as we rehearse nostalgias and suppress pain. We repeat stories until they’re stale, rewrite history in favor of the winners, and tamp down our sorrows with whiskey. “Once people realize how memory actually works, a lot of these beliefs that memory shouldn’t be changed will seem a little ridiculous,” Nader says. “Anything can change memory. This technology isn’t new. It’s just a better version of an existing biological process.” (…)

Jonah Lehrer, American author and journalist, The Forgetting Pill Erases Painful Memories Forever, Wired Magazine, Feb 17, 2012. (Third illustration: Dwight Eschliman)

"You could double the number of synaptic connections in a very simple neurocircuit as a result of experience and learning. The reason for that was that long-term memory alters the expression of genes in nerve cells, which is the cause of the growth of new synaptic connections. When you see that at the cellular level, you realize that the brain can change because of experience. It gives you a different feeling about how nature and nurture interact. They are not separate processes.”

Eric R. Kandel, American neuropsychiatrist, Nobel Prize laureate, A Quest to Understand How Memory Works, NYT, March 5, 2012

Prof. Eric Kandel: We Are What We Remember - Memory and Biology

Eric R. Kandel, American neuropsychiatrist, Nobel Prize laureate, We Are What We Remember: Memory and Biology, FORA.tv, Proshansky Auditorium, New York, NY, Mar 28, 2011

See also:

☞ Eric R. Kandel, The Biology of Memory: A Forty-Year Perspective (pdf), Department of Neuroscience, Columbia University, New York, 2009
☞ Eric R. Kandel, A Biological Basis for the Unconscious?: “I want to know where the id, the ego, and the super-ego are located in the brain”, Big Think video, Apr 1, 2012
Memory tag on Lapidarium notes

Nov
2nd
Wed
permalink

How walking through a doorway increases forgetting      


"Like information in a book, unfolding events are stored in human memory in successive chapters or episodes. One consequence is that information in the current episode is easier to recall than information in a previous episode. An obvious question then is how the mind divides experience up into these discrete episodes? A new study led by Gabriel Radvansky shows that the simple act of walking through a doorway creates a new memory episode, thereby making it more difficult to recall information pertaining to an experience in the room that’s just been left behind. (…)

The key finding is that memory performance was poorer after travelling through an open doorway, compared with covering the same distance within the same room. “Walking through doorways serves as an event boundary, thereby initiating the updating of one’s event model [i.e. the creation of a new episode in memory]” the researchers said. (…)

Participants were more likely to make memory errors after they’d passed through a doorway than after they’d travelled the same distance in a single room.

Performance was worst of all when in the third, unfamiliar room, supporting the account based on new memory episodes being created on entering each new area.

These findings show how a physical feature of the environment can trigger a new memory episode. They concur with a study published earlier this year which focused on episode markers in memories for stories. Presented with a passage of narrative text, participants later found it more difficult to remember which sentence followed a target sentence, if the two were separated by an implied temporal boundary, such as “a while later …”. It’s as if information within a temporal episode was somehow bound together, whereas a memory divide was placed between information spanning two episodes.”

— Christian Jarrett, How walking through a doorway increases forgetting, BPS Research Digest, 2 November 2011

How is autobiographical memory divided into chapters?

"Autobiographical or 'episodic' memory describes our ability to recall past experiences and is distinct from semantic memory, which is our factual knowledge about the world. So far so good, but according to Youssef Ezzyat and Lila Davachi, psychology until now has largely neglected to investigate exactly how the brain organises the continuity of lived experience into a filing system of discrete episodes. (…)

Crucially, a minority of sentences began: ‘A while later …’, thereby conveying a temporal boundary in the narrative; the end of one episode and start of another. For comparison, a small number of control sentences began: 'A moment later …', indicating that the ensuing sentence was part of the same episode, not a new one.

After a ten minute break, the participants were given a surprise memory test. Presented with one sentence from the earlier narratives, their task was to recall the sentence that had followed. The key finding here was that the participants were poorer at recalling a sentence that came after a temporal boundary. It’s as if information within an episode was somehow bound together, whereas a memory divide was placed between information spanning two episodes.

A second study was similar to the first except that nineteen participants had their brains scanned during the initial read-through of the sentences. Ezzyat and Davachi identified patterns of neural activity in distinct regions of the prefrontal cortex and the middle-temporal gyrus that either correlated with within-event processing or with forming boundaries between events. These neural activity patterns were more distinct in those participants who showed larger behavioural effects of episode boundaries in their memory performance.

‘Our experiments are an important step toward understanding how event perception and segmentation influence the structure of long-term memory,’ the researchers concluded. ‘The behavioural results support the hypothesis that event segmentation shapes the organisation of long-term memory; the fMRI [brain scanning] results link these memory effects to brain activity consistent with information maintenance and integration within events.’”

See also:

☞ G. A. Radvansky, S. A. Krawietz & A. K. Tamplin, Walking through doorways causes forgetting: Further explorations, The Quarterly Journal of Experimental Psychology, Volume 64, Issue 8, 2011
☞ Ezzyat, Y., and Davachi, L. What Constitutes an Episode in Episodic Memory?, Psychological Science, 2010
How Does the Brain Retain Information? (infographic)
Daniel Kahneman on the riddle of experience vs. memory
Memory tag on Lapidarium notes

Aug
7th
Sun
permalink

The Optimism Bias and Memory

“The belief that the future will be much better than the past and present is known as the optimism bias. (…)

The bias also protects and inspires us: it keeps us moving forward rather than to the nearest high-rise ledge. Without optimism, our ancestors might never have ventured far from their tribes and we might all be cave dwellers, still huddled together and dreaming of light and heat.

To make progress, we need to be able to imagine alternative realities — better ones — and we need to believe that we can achieve them. (…)

A growing body of scientific evidence points to the conclusion that optimism may be hardwired by evolution into the human brain. (…)

Our brains aren’t just stamped by the past. They are constantly being shaped by the future. (…)

Scientists who study memory proposed an intriguing answer: memories are susceptible to inaccuracies partly because the neural system responsible for remembering episodes from our past might not have evolved for memory alone. Rather, the core function of the memory system could in fact be to imagine the future (…) The system is not designed to perfectly replay past events. (…) It is designed to flexibly construct future scenarios in our minds. As a result, memory also ends up being a reconstructive process, and occasionally, details are deleted and others inserted.”

Tali Sharot, a British Academy postdoctoral fellow at the Wellcome Trust Centre for Neuroimaging at University College London, Optimism Bias: Human Brain May Be Hardwired for Hope, Time, June 6, 2011

Remembering the past to imagine the future

"A rapidly growing number of recent studies show that imagining the future depends on much of the same neural machinery that is needed for remembering the past. These findings have led to the concept of the prospective brain; an idea that a crucial function of the brain is to use stored information to imagine, simulate and predict possible future events. We suggest that processes such as memory can be productively re-conceptualized in light of this idea. (…)

Thoughts of past and future events are proposed to draw on similar information stored in episodic memory and rely on similar underlying processes, and episodic memory is proposed to support the construction of future events by extracting and recombining stored information into a simulation of a novel event. The hypothesis receives general support from findings of neural and cognitive overlap between thoughts of past and future events. (…)



Future events were more vivid and more detailed when imagined in recently experienced contexts (university locations) than when imagined in remotely experienced contexts (school settings). These results support the idea that episodic information is used to construct future event simulations. (…)

The core brain system is also used by many diverse types of task that require mental simulation of alternative perspectives. The idea is that the core brain system allows one to shift from perceiving the immediate environment to an alternative, imagined perspective that is based largely on memories of the past. Future thinking, by this view, is just one of several forms of such ability. Thinking about the perspectives of others (theory of mind) also appears to use the core brain system, as do certain forms of navigation. (…)

From an adaptive perspective, preparing for the future is a vital task in any domain of cognition or behaviour that is important for survival. The processes of event simulation probably have a key role in helping individuals plan for the future, although they are also important for other tasks that relate to the present and the past.

Memory can be thought of as a tool used by the prospective brain to generate simulations of possible future events.”

— D. L. Schacter, D. Rose Addis & R. L. Buckner, Remembering the past to imagine the future: the prospective brain (pdf), Department of Psychology, Harvard University, and the Athinoula A Martinos Center for Biomedical Imaging, Massachusetts General Hospital

See also:
The Brain: Memories Are Crucial for Looking Into the Future
How the brain stops time, Lapidarium


☞ K. K. Szpunar and K. B. McDermott, Episodic future thought and its relation to remembering: Evidence from ratings of subjective experience, Department of Psychology, Washington University
Memory tag on Lapidarium notes

Aug
5th
Fri
permalink

Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’


                                          “Brain (Left)” and “Brain (Right)” ©Don Stewart

"Simply put, our brain is inherently well suited for some tasks, but ill suited for others. Unfortunately, the brain’s weaknesses include recognizing which tasks are which, so for the most part we remain ignorantly blissful of the extent to which our lives are governed by the brain’s bugs.”

"Like a parent that carefully filters the information her child is exposed to, the brain edits and censors much of the information it feeds to the conscious mind. In the same fashion that your brain likely edited out the extra "the" from the previous sentence, we are generally blissfully unaware of the arbitrary and irrational factors that govern our decisions and behaviors."

Dean Buonomano, Brain Bugs. How the brain’s flaws shape our lives, W.W. Norton, 2011

Memory errors

"One type of memory error that we make, a memory bug, is really a product of the fact that in human memory, there’s no distinct process or distinction between storage and retrieval.

So when a computer or a DVD writes something down, it has one laser that’s used to store the memory, and it has another laser to retrieve the memory, and those are very distinct processes.

Now, in human memory, the distinction between storage and retrieval is not very clear. (…)

This should be seen as a consequence of the fact that memory is written down as we experience it. It’s being continuously updated. And the synapses that undergo changes in strength - so as you alluded to earlier, one of the ways the brain writes down information is by making new synapses, making new connections or strengthening existing ones or weakening old ones.

And that process uses these synapses that get strengthened, but the retrieval also uses those same synapses. So that can strengthen that pathway. (…)

The perception of time

When we think of the perception of time, most people think of the subjective sense of time: How long have they been listening to this program, how long are they stuck in traffic? And the brain seems to have multiple different mechanisms. One thing that we’ve learned about how the brain tells time is that, unlike the clocks on our wrists that can be used to tell a few milliseconds or months and years, the brain has fundamentally different mechanisms for telling very short periods of time and very long periods of time.

And that’s a consequence of the evolutionary process, that it came up with redundant solutions and different solutions depending upon the adaptive needs of different animals.

And it turns out that we don’t seem to have a very precise clock. Time is very much distorted when we are anticipating what’s about to happen, when we’re nervous, when we’re stressed and when we have high-adrenaline moments. Our internal clock is not that accurate. (…)

We are living in a time and place we didn’t evolve to live in

"And humans suffer some of the same consequences of living in a time and place we didn’t evolve to live in. (…) And by peering into the brain, we can learn a lot about whywe are good at some things and why we are not very good at others."

"The brain is an incomprehensibly complex biological computer, responsible for every action we have taken and every decision, thought, and feeling we’ve ever had. This is probably a concept that most people do not find comforting. Indeed, the fact that the mind emerges from the brain is something not all brains have come to accept. But our reticence to acknowledge that our humanity derives solely from the physical brain should not come as a surprise. The brain was not designed to understand itself anymore than a calculator was designed to surf the Web.Dean Buonomano, Brain Bugs. How the brain’s flaws shape our lives, W.W. Norton, 2011

Our neuro-operating system, if you will, is the set of rules endowed in our genes that provide instructions on how to build the brain: what it should come preloaded with, the innate biases we should have. Most animals have innate biases to fear predators with big sharp teeth and to fear poisonous spiders and poisonous snakes, because those innate fears increase survival. And you don’t want to have to learn to fear snakes, because you might not have a second chance. So we still carry that genetic baggage within our neuro-operating system.

It’s safe to say that it’s outdated. We currently live in a world in which, in the United States, probably few people a year die or suffer severe consequences due to snake bites. But every year 44,000 people die in car accidents. So in the same way that evolution did not prepare, say, skunks to cope with the dangers of automobiles, evolution did not prepare humans to face the dangers of many of the things that surround us in our modern life, including automobiles, or an excess of food, for example: the problems we deal with due to obesity and too much cholesterol are all things that now have very dramatic effects on our lives, and we weren’t prepared for those things by the evolutionary process. (…)

A lot of our decisions are the product of two different systems interacting within the brain. And very loosely speaking, you can think of one of these as the automatic system, which is very unconscious and associative and emotional; people can think of this as intuition. And then we have the reflective system, which is effortful and requires knowledge and careful deliberation. And people can get a quick feel for these two systems in operation with the following examples. The old trick question: what do cows drink? The part of your brain that just thought of milk was the automatic system. And then the reflective system comes in and says, wait a minute, that’s wrong. The answer is water.

Similarly, if I asked: I’m going to throw four coins up in the air, what’s the probability that two of them will be heads and two of them will be tails? Now, the part of the brain that’s thinking, well, it sounds like it should be 50 percent, because I said half the coins tails, half the coins heads. That’s again basically the automatic system. It would take the reflective system, and some serious reflection, to work out the math and come up with an answer of six-sixteenths.
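
The six-sixteenths the reflective system has to work out is just C(4,2)/2⁴, and it can be checked by brute force:

    from itertools import product

    # Enumerate all 2^4 equally likely sequences of four coin flips and
    # count those with exactly two heads: C(4,2) = 6 of 16, not 50%.
    outcomes = list(product("HT", repeat=4))
    two_heads = [o for o in outcomes if o.count("H") == 2]
    print(len(two_heads), "/", len(outcomes))   # 6 / 16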

Now, in most cases we reach a happy balance between these two systems. And clearly, when we are understanding each other’s speech and making rapid decisions, the automatic system provides a good sense of what the proper answer is. But when we need to engage our reflective system and ask questions, such as the probability question that I just asked, sometimes we are misled because we trust the automatic system too much and the reflective system doesn’t really get through in some situations. And this can lead us astray, in the case, for example, of the temporal discounting situation where I asked if you want $100 today or $120 in the future. So the automatic system, which is biased by immediate gratification, might get the edge in that situation.”

Dean Buonomano, professor in the departments of neurobiology and psychology at the University of California at Los Angeles and an investigator at UCLA’s Brain Research Institute, 'Brain Bugs': Cognitive Flaws That 'Shape Our Lives', NPR, July 14, 2011
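
The interview doesn’t name a model for the $100-versus-$120 example, but temporal discounting is conventionally described with a hyperbolic curve, V = A / (1 + kD). A sketch under that assumption; the impulsivity constant k and the 30-day delay are arbitrary illustrative choices.

    # Hyperbolic discounting: the subjective value of a reward falls off
    # with delay as V = A / (1 + k * delay). With a sufficiently impulsive
    # k, $100 now beats $120 later -- the automatic system gets the edge.
    def discounted_value(amount, delay_days, k=0.02):
        return amount / (1 + k * delay_days)

    now = discounted_value(100, 0)      # $100 today -> 100.0
    later = discounted_value(120, 30)   # $120 in a month -> 75.0
    print("take the $100 now" if now > later else "wait for the $120")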

See also:

On risk perception: What You Don’t Know Can Kill You
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Iain McGilchrist on The Divided Brain and the Making of the Western World
Daniel Kahneman on the riddle of experience vs. memory
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Mind & Brain tag on Lapidarium

Jul
15th
Fri
permalink

How the Internet Affects Our Memories: Cognitive Consequences of Having Information at Our Fingertips
                           image

"Before the printed book, Memory ruled daily life… (…)

The elder Seneca (c. 55 B.C.-A.D. 37), a famous teacher of rhetoric, was said to be able to repeat long passages of speeches he had heard only once many years before. He would impress his students by asking each member of a class of two hundred to recite lines of poetry, and then he would recite all the lines they had quoted—in reverse order, from last to first.”

Daniel Boorstin, American historian, professor, attorney, and writer (1914-2004), The Discoverers, Random House, 1983

Abstract:

"The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”

***

"We investigate whether the Internet has become an external memory system that is primed by the need to acquire information. If asked the question whether there are any countries with only one color in their flag, for example, do we think about flags—or immediately think to go online to find out? Our research then tested if, once information has been accessed, our internal encoding is increased for where the information is to be found rather than for the information itself. (…)

Participants apparently did not make the effort to remember when they thought they could later look up the trivia statements they had read. Since search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up. (…)

The social form of information storage is also reflected in the findings that people forget items they think will be available externally, and remember items they think will not be available. And transactive memory is also evident when people seem better able to remember which computer folder an item has been stored in than the identity of the item itself. These results suggest that processes of human memory are adapting to the advent of new computing and communication technology. Just as we learn through transactive memory who knows what in our families and offices, we are learning what the computer “knows” and when we should attend to where we have stored information in our computer-based memories. We are becoming symbiotic with our computer tools, growing into interconnected systems that remember less by knowing information than by knowing where the information can be found. This gives us the advantage of access to a vast range of information—although the disadvantages of being constantly “wired” are still being debated.

It may be no more than nostalgia at this point, however, to wish we were less dependent on our gadgets. We have become dependent on them to the same degree we are dependent on all the knowledge we gain from our friends and coworkers—and lose if they are out of touch. The experience of losing our Internet connection becomes more and more like losing a friend. We must remain plugged in to know what Google knows.”

— B. Sparrow (Department of Psychology, Columbia University), J. Liu (University of Wisconsin–Madison), D. M. Wegner (Harvard University), ☞ Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips, Science, 5 August 2011: Vol. 333 no. 6043, pp. 776-778

Daniel M. Wegner comment:

"Groups of people commonly depend on one another for memory in this way — not by all knowing the same thing, but by specializing. And now we’ve added our computing devices to the network, depending for memory not just on people but also on a cloud of linked people and specialized information-filled devices.

We have all become a great cybermind. As long as we are connected to our machines through talk and keystrokes, we can all be part of the biggest, smartest mind ever. It is only when we are trapped for a moment without our Internet link that we return to our own humble little personal minds, tumbling back to earth from our flotation devices in the cloud.”

Daniel M. Wegner, professor of psychology at Harvard University, Don’t Fear the Cybermind, The New York Times, Aug 4, 2012 

The Extended Mind

"Why do we engage in reconsolidation? One theory is that reconsolidation helps ensure our memories are kept up to date, interpreted in light of recent experience. The brain has no interest in immaculate recall – it’s only interested in the past to the extent it helps us make sense of the future. By having memories that constantly change, we ensure that the memories stored inside our mental file cabinets are mostly relevant.

Of course, reconsolidation theory poses problems for the fidelity of memory. Although our memories always feel true – like a literal recording of the past – they’re mostly not, since they’re always being edited and bent by what we think now. And now. And now. (See the work of Elizabeth Loftus for more on memory inaccuracy.)

And this is where the internet comes in. One of the virtues of transactive memory is that it acts like a fact-check, helping ensure we don’t all descend into selfish solipsism. By sharing and comparing our memories, we can ensure that we still have some facts in common, that we all haven’t disappeared down the private rabbit hole of our own reconsolidations. In this sense, instinctually wanting to Google information – to not entrust trivia to the fallible brain – is a perfectly healthy impulse. (I’ve used Google to correct my errant memories thousands of times.) I don’t think it’s a sign that technology is rotting our cortex – I think it shows that we’re wise enough to outsource a skill we’re not very good at. Because while the web enables all sorts of other biases – it lets us filter news, for instance, to confirm what we already believe – the use of the web as a vessel of transactive memory is mostly virtuous. We save hard drive space for what matters, while at the same time improving the accuracy of recall.

PS. If you’d like a contrarian take, here’s Nicholas Carr:

If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn’t much matter. But external storage and biological memory are not the same thing. When we form, or “consolidate,” a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but “the cohesion” which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?”

Jonah Lehrer, American journalist who writes on the topics of psychology and neuroscience, Is Google Ruining Your Memory?, Wired.com, July 15, 2011 (Illustration source)

See also:

☞ Amara D. Angelica, Google is destroying your memory, KurzweilAI
Thomas Metzinger on How Has The Internet Changed The Way You Think
Rolf Fobelli: News is to the mind what sugar is to the body, Lapidarium notes
Luciano Floridi on the future development of the information society
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Memory tag on Lapidarium notes

Jul
3rd
Sun
permalink

Why we must remember to delete – and forget – in the digital age

"Human knowledge is based on memory. But does the digital age force us to remember too much?  Victor Mayer-Schönberger argues that we must delete and let go. (…) “Quite literally, Google knows more about us than we can remember ourselves.” (…)

That inability to forget, Mayer-Schönberger argues, limits one’s decision-making ability and ability to form close links with people who remember less. “The effect may be stronger when caused by more comprehensive and easily accessible external digital memory. Too perfect a recall, even when it is benignly intended to aid our decision-making, may prompt us to become caught up in our memories, unable to leave our past behind.” And not being able to leave our past behind makes humans, he argues, more unforgiving in the digital age than ever before. (…) “Digital memory, in reminding us of who she was more than 10 years ago, denied her the chance to evolve and change.” (…)

Harvard cyberlaw expert Jonathan Zittrain’s idea is that we should have a right to declare reputation bankruptcy – i.e. to have certain aspects of one’s digital past erased from the digital memory.”

Jun
16th
Thu
permalink

The Physics of Intelligence


The laws of physics may well prevent the human brain from evolving into an ever more powerful thinking machine.

  • Human intelligence may be close to its evolutionary limit. Various lines of research suggest that most of the tweaks that could make us smarter would hit limits set by the laws of physics.
  • Brain size, for instance, helps up to a point but carries diminishing returns: brains become energy-hungry and slow. Better “wiring” across the brain also would consume energy and take up a disproportionate amount of space.
  • Making wires thinner would hit thermodynamic limitations similar to those that affect transistors in computer chips: communication would get noisy.
  • Humans, however, might still achieve higher intelligence collectively. And technology, from writing to the Internet, enables us to expand our mind outside the confines of our body.

"What few people realize is that the laws of physics place tough constraints on our mental faculties as well. Anthropologists have speculated about anatomic roadblocks to brain expansion—for instance, whether a larger brain could fit through the birth canal of a bipedal human. If we assume, though, that evolution can solve the birth canal problem, then we are led to the cusp of some even more profound questions.

One might think, for example, that evolutionary processes could increase the number of neurons in our brain or boost the rate at which those neurons exchange information and that such changes would make us smarter. But several recent trends of investigation, if taken together and followed to their logical conclusion, seem to suggest that such tweaks would soon run into physical limits. Ultimately those limits trace back to the very nature of neurons and the statistically noisy chemical exchanges by which they communicate. “Information, noise and energy are inextricably linked,” says Simon Laughlin, a theoretical neuroscientist at the University of Cambridge. “That connection exists at the thermodynamic level.”

Do the laws of thermodynamics, then, impose a limit on neuron-based intelligence, one that applies universally, whether in birds, primates, porpoises or praying mantises? This question apparently has never been asked in such broad terms, but the scientists interviewed for this article generally agree that it is a question worth contemplating. “It’s a very interesting point,” says Vijay Balasubramanian, a physicist who studies neural coding of information at the University of Pennsylvania. “I’ve never even seen this point discussed in science fiction.”

Intelligence is of course a loaded word: it is hard to measure and even to define. Still, it seems fair to say that by most metrics, humans are the most intelligent animals on earth. But as our brain has evolved, has it approached a hard limit to its ability to process information? Could there be some physical limit to the evolution of neuron-based intelligence—and not just for humans but for all of life as we know it? (…)

Staying in Touch

Much of the energetic burden of brain size comes from the organ’s communication networks: in the human cortex, communications account for 80 percent of energy consumption. But it appears that as size increases, neuronal connectivity also becomes more challenging for subtler, structural reasons. (…)

A typical neuron has an elongated tail called the axon. At its end, the axon branches out, with the tips of the branches forming synapses, or contact points, with other cells. Axons, like telegraph wires, may connect different parts of the brain or may bundle up into nerves that extend from the central nervous system to the various parts of the body.

In their pioneering efforts, biologists measured the diameter of axons under microscopes and counted the size and density of nerve cells and the number of synapses per cell. They surveyed hundreds, sometimes thousands, of cells per brain in dozens of species. Eager to refine their mathematical curves by extending them to ever larger beasts, they even found ways to extract intact brains from whale carcasses. The five-hour process, meticulously described in the 1880s by biologist Gustav Adolf Guldberg, involved the use of a two-man lumberjack saw, an ax, a chisel and plenty of strength to open the top of the skull like a can of beans.

These studies revealed that as brains expand in size from species to species, several subtle but probably unsustainable changes happen. First, the average size of nerve cells increases. This phenomenon allows the neurons to connect to more and more of their compatriots as the overall number of neurons in the brain increases. But larger cells pack into the cerebral cortex less densely, so the distance between cells increases, as does the length of axons required to connect them. And because longer axons mean longer times for signals to travel between cells, these projections need to become thicker to maintain speed (thicker axons carry signals faster).

Researchers have also found that as brains get bigger from species to species, they are divided into a larger and larger number of distinct areas. You can see those areas if you stain brain tissue and view it under a microscope: patches of the cortex turn different colors. These areas often correspond with specialized functions, say, speech comprehension or face recognition. And as brains get larger, the specialization unfolds in another dimension: equivalent areas in the left and right hemispheres take on separate functions—for example, spatial versus verbal reasoning.

For decades this dividing of the brain into more work cubicles was viewed as a hallmark of intelligence. But it may also reflect a more mundane truth, says Mark Changizi, a theoretical neurobiologist at 2AI Labs in Boise, Idaho: specialization compensates for the connectivity problem that arises as brains get bigger. As you go from a mouse brain to a cow brain with 100 times as many neurons, it is impossible for neurons to expand quickly enough to stay just as well connected. Brains solve this problem by segregating like-functioned neurons into highly interconnected modules, with far fewer long-distance connections between modules. The specialization between right and left hemispheres solves a similar problem; it reduces the amount of information that must flow between the hemispheres, which minimizes the number of long, interhemispheric axons that the brain needs to maintain. “All of these seemingly complex things about bigger brains are just the backbends that the brain has to do to satisfy the connectivity problem” as it gets larger, Changizi argues. “It doesn’t tell us that the brain is smarter.”

Jan Karbowski, a computational neuroscientist at the Polish Academy of Sciences in Warsaw, agrees. “Somehow brains have to optimize several parameters simultaneously, and there must be trade-offs,” he says. “If you want to improve one thing, you screw up something else.” What happens, for example, if you expand the corpus callosum (the bundle of axons connecting right and left hemispheres) quickly enough to maintain constant connectivity as brains expand? And what if you thicken those axons, so the transit delay for signals traveling between hemispheres does not increase as brains expand? The results would not be pretty. The corpus callosum would expand—and push the hemispheres apart—so quickly that any performance improvements would be neutralized.

These trade-offs have been thrown into stark relief by experiments showing the relation between axon width and conduction speed. At the end of the day, Karbowski says, neurons do get larger as brain size increases, but not quite quickly enough to stay equally well connected. And axons do get thicker as brains expand, but not quickly enough to make up for the longer conduction delays.

Keeping axons from thickening too quickly saves not only space but energy as well, Balasubramanian says. Doubling the width of an axon doubles energy expenditure, while increasing the velocity of pulses by just 40 percent or so. Even with all of this corner cutting, the volume of white matter (the axons) still grows more quickly than the volume of gray matter (the main body of neurons containing the cell nucleus) as brains increase in size. To put it another way, as brains get bigger, more of their volume is devoted to wiring rather than to the parts of individual cells that do the actual computing, which again suggests that scaling size up is ultimately unsustainable.
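
Balasubramanian’s “40 percent or so” matches the textbook scaling for unmyelinated axons, in which conduction velocity grows as the square root of diameter while membrane area, channel count and hence energy cost grow roughly in proportion to diameter. A quick check under that assumption:

    import math

    # Doubling axon diameter: energy scales ~linearly with diameter
    # (twice the membrane and channels), velocity only as sqrt(diameter).
    d_ratio = 2.0
    energy_ratio = d_ratio              # ~2x energy expenditure
    speed_ratio = math.sqrt(d_ratio)    # ~1.41x, i.e. "40 percent or so"
    print(f"energy x{energy_ratio:.1f}, speed x{speed_ratio:.2f}")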

The Primacy of Primates

It is easy, with this dire state of affairs, to see why a cow fails to squeeze any more smarts out of its grapefruit-size brain than a mouse does from its blueberry-size brain. But evolution has also achieved impressive workarounds at the level of the brain’s building blocks. When Jon H. Kaas, a neuroscientist at Vanderbilt University, and his colleagues compared the morphology of brain cells across a spectrum of primates in 2007, they stumbled onto a game changer—one that has probably given humans an edge. (…)

Humans pack 100 billion neurons into 1.4 kilograms of brain, but a rodent that had followed its usual neuron-size scaling law to reach that number of neurons would now have to drag around a brain weighing 45 kilograms. And metabolically speaking, all that brain matter would eat the varmint out of house and home. “That may be one of the factors in why the large rodents don’t seem to be [smarter] at all than the small rodents,” Kaas says.

Having smaller, more densely packed neurons does seem to have a real impact on intelligence. In 2005 neurobiologists Gerhard Roth and Ursula Dicke, both at the University of Bremen in Germany, reviewed several traits that predict intelligence across species (as measured, roughly, by behavioral complexity) even more effectively than the encephalization quotient does. “The only tight correlation with intelligence,” Roth says, “is in the number of neurons in the cortex, plus the speed of neuronal activity,” which decreases with the distance between neurons and increases with the degree of myelination of axons. Myelin is fatty insulation that lets axons transmit signals more quickly.

If Roth is right, then primates’ small neurons have a double effect: first, they allow a greater increase in cortical cell number as brains enlarge; and second, they allow faster communication, because the cells pack more closely. Elephants and whales are reasonably smart, but their larger neurons and bigger brains lead to inefficiencies. “The packing density of neurons is much lower,” Roth says, “which means that the distance between neurons is larger and the velocity of nerve impulses is much lower.”

In fact, neuroscientists have recently seen a similar pattern in variations within humans: people with the quickest lines of communication between their brain areas also seem to be the brightest. One study, led in 2009 by Martijn P. van den Heuvel of the University Medical Center Utrecht in the Netherlands, used functional magnetic resonance imaging to measure how directly different brain areas talk to one another—that is, whether they talk via a large or a small number of intermediary areas. Van den Heuvel found that shorter paths between brain areas correlated with higher IQ. Edward Bullmore, an imaging neuroscientist at the University of Cambridge, and his collaborators obtained similar results the same year using a different approach. They compared working memory (the ability to hold several numbers in one’s memory at once) among 29 healthy people. They then used magnetoencephalographic recordings from their subjects’ scalp to estimate how quickly communication flowed between brain areas. People with the most direct communication and the fastest neural chatter had the best working memory.
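
The quantity behind “shorter paths between brain areas” is the average shortest path length of a network. A sketch with networkx, using a Watts–Strogatz small-world graph to show how a handful of long-range shortcuts, loosely analogous to the brain’s rare nonstop connections, slashes that path length; the graph sizes are illustrative, not brain data.

    import networkx as nx

    # A ring lattice with only local connections versus the same lattice
    # with 5% of edges rewired into long-range shortcuts.
    local_only = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.0, seed=1)
    shortcuts = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.05, seed=1)

    print("no shortcuts: ", nx.average_shortest_path_length(local_only))
    print("few shortcuts:", nx.average_shortest_path_length(shortcuts))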

It is a momentous insight. We know that as brains get larger, they save space and energy by limiting the number of direct connections between regions. The large human brain has relatively few of these long-distance connections. But Bullmore and van den Heuvel showed that these rare, nonstop connections have a disproportionate influence on smarts: brains that scrimp on resources by cutting just a few of them do noticeably worse. “You pay a price for intelligence,” Bullmore concludes, “and the price is that you can’t simply minimize wiring.”

Intelligence Design

If communication between neurons, and between brain areas, is really a major bottleneck that limits intelligence, then evolving neurons that are even smaller (and closer together, with faster communication) should yield smarter brains. Similarly, brains might become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses, called ion channels, are inherently unreliable.

Ion channels are tiny valves that open and close through changes in their molecular folding. When they open, they allow ions of sodium, potassium or calcium to flow across cell membranes, producing the electrical signals by which neurons communicate. But being so minuscule, ion channels can get flipped open or closed by mere thermal vibrations. A simple biology experiment lays the defect bare. Isolate a single ion channel on the surface of a nerve cell using a microscopic glass tube, sort of like slipping a glass cup over a single ant on a sidewalk. When you adjust the voltage on the ion channel—a maneuver that causes it to open or close—the ion channel does not flip on and off reliably like your kitchen light does. Instead it flutters on and off randomly. Sometimes it does not open at all; other times it opens when it should not. By changing the voltage, all you do is change the likelihood that it opens.

It sounds like a horrible evolutionary design flaw—but in fact, it is a compromise. “If you make the spring on the channel too loose, then the noise keeps on switching it,” Laughlin says—as happens in the biology experiment described earlier. “If you make the spring on the channel stronger, then you get less noise,” he says, “but now it’s more work to switch it,” which forces neurons to spend more energy to control the ion channel. In other words, neurons save energy by using hair-trigger ion channels, but as a side effect the channels can flip open or close accidentally. The trade-off means that ion channels are reliable only if you use large numbers of them to “vote” on whether or not a neuron will generate an impulse. But voting becomes problematic as neurons get smaller. “When you reduce the size of neurons, you reduce the number of channels that are available to carry the signal,” Laughlin says. “And that increases the noise.”

In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire. The brain’s smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 per second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes.
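
Laughlin’s “voting” argument is easy to simulate: give every channel a small chance of flipping open spontaneously and count how often the open fraction crosses a firing threshold. A Monte Carlo sketch; the flip probability, threshold and channel counts are all illustrative rather than measured values.

    import numpy as np

    # With many channels, thermal flips average out; with few, a single
    # accidental opening is enough to cross threshold and fire a
    # spurious spike.
    rng = np.random.default_rng(42)
    p_flip, threshold, steps = 0.01, 0.1, 10_000

    for n_channels in (1000, 100, 10):
        opens = rng.random((steps, n_channels)) < p_flip
        spurious = int((opens.mean(axis=1) >= threshold).sum())
        print(f"{n_channels:5d} channels -> {spurious} spurious spikes in {steps} steps")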

This fundamental compromise between information, energy and noise is not unique to biology. It applies to everything from optical-fiber communications to ham radios and computer chips. Transistors act as gatekeepers of electrical signals, just like ion channels do. For five decades engineers have shrunk transistors steadily, cramming more and more onto chips to produce ever faster computers. Transistors in the latest chips are 22 nanometers. At those sizes, it becomes very challenging to “dope” silicon uniformly (doping is the addition of small quantities of other elements to adjust a semiconductor’s properties). By the time they reach about 10 nanometers, transistors will be so small that the random presence or absence of a single atom of boron will cause them to behave unpredictably.

Engineers might circumvent the limitations of current transistors by going back to the drawing board and redesigning chips to use entirely new technologies. But evolution cannot start from scratch: it has to work within the scheme and with the parts that have existed for half a billion years, explains Heinrich Reichert, a developmental neurobiologist at the University of Basel in Switzerland—like building a battleship with modified airplane parts.

Moreover, there is another reason to doubt that a major evolutionary leap could lead to smarter brains. Biology may have had a wide range of options when neurons first evolved, but 600 million years later a peculiar thing has happened. The brains of the honeybee, the octopus, the crow and intelligent mammals, Roth points out, look nothing alike at first glance. But if you look at the circuits that underlie tasks such as vision, smell, navigation and episodic memory of event sequences, “very astonishingly they all have absolutely the same basic arrangement.” Such evolutionary convergence usually suggests that a certain anatomical or physiological solution has reached maturity so that there may be little room left for improvement.

Perhaps, then, life has arrived at an optimal neural blueprint. That blueprint is wired up through a step-by-step choreography in which cells in the growing embryo interact through signaling molecules and physical nudging, and it is evolutionarily entrenched.

Bees Do It

So have humans reached the physical limits of how complex our brain can be, given the building blocks that are available to us? Laughlin doubts that there is any hard limit on brain function the way there is one on the speed of light. “It’s more likely you just have a law of diminishing returns,” he says. “It becomes less and less worthwhile the more you invest in it.” Our brain can pack in only so many neurons; our neurons can establish only so many connections among themselves; and those connections can carry only so many electrical impulses per second. Moreover, if our body and brain got much bigger, there would be costs in terms of energy consumption, dissipation of heat and the sheer time it takes for neural impulses to travel from one part of the brain to another.

The human mind, however, may have better ways of expanding without the need for further biological evolution. After all, honeybees and other social insects do it: acting in concert with their hive sisters, they form a collective entity that is smarter than the sum of its parts. Through social interaction we, too, have learned to pool our intelligence with others.

And then there is technology. For millennia written language has enabled us to store information outside our body, beyond the capacity of our brain to memorize. One could argue that the Internet is the ultimate consequence of this trend toward outward expansion of intelligence beyond our body. In a sense, it could be true, as some say, that the Internet makes you stupid: collective human intelligence—culture and computers—may have reduced the impetus for evolving greater individual smarts.

Douglas Fox, a freelance writer living in San Francisco and a frequent contributor to New Scientist, The Limits of Intelligence, Scientific American, July 2011.

See also:
Quantum Approaches to Consciousness, Stanford Encyclopedia of Philosophy
Dunbar’s Number: Why We Can’t Have More Than 150 Friends

Jun
4th
Sat
permalink

Exploring how the brain helps you keep a grip on reality

“How can you be sure that you’re remembering a faithful representation of what happened, as opposed to a fictitious recollection of an event that might have been entirely imagined? In short, how do we determine whether our memories are real? (…)

Reality monitoring" which is vital for maintaining confidence in our memories, and in understanding ourselves as individuals with a past and a future. (…)

One brain area that has emerged as playing a key role in discriminating imagination from reality is anterior prefrontal cortex. (…) It is thought to be among the last areas to achieve myelination, the neurodevelopmental process that continues into adolescence and enables nerve cells to transmit information more rapidly, allowing for more complex cognitive abilities. (…)”

permalink

New evidence for innate knowledge - Why we all share similar perceptions of physical reality

“Do we have innate knowledge? The team working on the Blue Brain Project at EPFL (Ecole Polytechnique Fédérale de Lausanne), led by Professor Henry Markram, is finding proof that this is the case. They’ve discovered that neurons make connections independently of a subject’s experience. (…)

The researchers were able to demonstrate that small clusters of pyramidal neurons in the neocortex interconnect according to a set of immutable and relatively simple rules. (…)

Acquired knowledge, such as memory, would involve combining these elementary building blocks at a higher level of the system. “This could explain why we all share similar perceptions of physical reality, while our memories reflect our individual experience” (…)

The neuronal connectivity must in some way have been programmed in advance. (…) Some of our fundamental representations or basic knowledge is inscribed in our genes.”

May
1st
Sun
permalink

How groups form (and don’t form) memories

"When the other person cannot validate shared memories," said Suparna Rajaram, "they are both robbed of the past."

From this observation came a keen and enduring interest in the social nature of memory, an area of scholarship occupied mostly by philosophers, sociologists, and historians — and, until recently, largely neglected by cognitive psychologists.

So Rajaram, a psychology professor at Stony Brook University, began to specialize in “collaborative memory” — or how people learn and remember in groups. People generally believe that collaboration helps memory — but does it always? “How is memory shaped by being experienced in a social context?” (…)

The collaborative groups remembered more items than any single person would have alone. But they also remembered fewer items than the nominal groups, whose totals were obtained by pooling the unique recollections of members who had worked alone. In other words, the collaborators’ whole was less than the sum of its parts.
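The bookkeeping behind that comparison is easy to sketch (the recall lists below are invented for illustration; this is not Rajaram's data): a nominal group's score is simply the union of the items its members recalled alone, which is why it tends to beat a single shared recall session.

```python
# Invented recall lists, purely illustrative of the nominal-group tally.
alone = {
    "ana":   {"apple", "river", "candle", "wolf"},
    "ben":   {"apple", "candle", "mirror", "train"},
    "chloe": {"river", "train", "anchor"},
}
# One hypothetical session in which the same three people recall together;
# collaborative inhibition typically leaves this between the best
# individual and the pooled total.
together = {"apple", "river", "candle", "train", "mirror"}

nominal = set.union(*alone.values())            # pooled individual recall
best_individual = max(len(items) for items in alone.values())

print("best individual:", best_individual)  # 4
print("collaborative: ", len(together))     # 5: more than any one person
print("nominal group: ", len(nominal))      # 7: more than the collaborators
```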

This so-called "collaborative inhibition" affects recall for all sorts of material, from word pairs to emotionally laden events; it affects strangers and spouses, children and adults alike. It is, in scientific lingo, “robust.”

What explains this? One dynamic is "retrieval disruption": each person remembers in his or her own way and, when compelled to listen to others, cannot use those strategies effectively. Sometimes that effect fades. Sometimes it squashes the memories for good, causing “post-collaborative forgetting.” Then there is “social contagion” of errors, wherein one group member can implant erroneous recollections in another’s memory.

On the other hand, collaborative learning helps — which is why people hold it in high esteem. Individuals recall different information or events; after some time, they can get together, contribute their bits, reeducate one another’s memories, and expand the group’s recall, mitigating the costs of collaboration. People can also correct each other’s erroneous memories, a process Rajaram and her colleagues call "error pruning." Or they can "cross-cue" — bring up recollections that jog memories others have forgotten. (…)

"If a small group can reshape memories, we see how individuals come to hold certain viewpoints or perspectives," she says. "That can serve as a model for how collective identities and histories are shaped."

Psychologists Ask How Well — Or Badly — We Remember Together, ScienceDaily, Apr. 28, 2011.