Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Oct 28th, Mon

Douglas Hofstadter: The Man Who Would Teach Machines to Think

"All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.”

Douglas Hofstadter, 1979, cited in Vinod K. Wadhawan, Complexity Explained: 17. Epilogue, Nirmukta, 04 April 2010.

image: M. C. Escher, Print Gallery. Hofstadter calls this Escher work a “pictorial parable for Gödel’s Incompleteness Theorem.” Why? Look to the center of the painting: is there any logical way to complete it? — source

"On [Douglas] Hofstadter's office wall is a somewhat tattered reproduction of one of his favorite mind-twisting M. C. Escher prints, The Print Gallery.” In it, a boy stands looking at a print depicting a town where a woman looks down from her window at the roof of a gallery in which - yes - the boy stands looking at the print. We appreciate the paradox without being thrown by it, because we are outside looking in. Something like that creates our own unfathomable feelings of self. The self, subject and object, perceiver and perceived, is everywhere in the paradox.

It is a “circling back,” the Tortoise tells Achilles, “of a complex representation of the system together with its representations of all the rest of the world.”

“It is just so hard, emotionally,” Achilles tells the Tortoise, “to acknowledge that a ‘soul’ emerges from so physical a system.” (…)

But philosophers like [Daniel] Dennett believe, with Hofstadter, that scientific answers can be found without cheating by reducing the question to a manageable scale. (…) [T]he danger of looking only at the lowest biological level is in losing sight of the essential humanity that, in Hofstadter’s view, exists in the pattern and in the paradox. “There seems to be no alternative to accepting some sort of incomprehensible quality to existence,” as Hofstadter puts it. “Take your pick.”

James Gleick, on Douglas R. Hofstadter in Exploring the Labyrinth of the Mind, The New York Times, August 21, 1983.

"In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” (…)

“Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes” and a young man’s style as “hipsterish” and on and on ceaselessly throughout your day. That’s what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build “computer models of the fundamental mechanisms of thought.”

“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.

“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.” (…)

[Hofstadter] spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e‑mails there. (Hofstadter spends four hours a day writing e‑mail. “To me,” he has said, “an e‑mail is identical to a letter, every bit as formal, as refined, as carefully written … I rewrite, rewrite, rewrite, rewrite all of my e‑mails, always.”) He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It’s like a museum for his binges, a scene out of a brainy episode of Hoarders.

“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.” (…)

He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study.

For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.” Correct speech isn’t very interesting; it’s like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is “a tip of the rabbit’s ear … a hint of a trap door.”

As the wind tunnel was to the Wright brothers, so the computer is to FARG. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter’s view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. “I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds,” Hofstadter has written, “is by modeling mental processes on computers and learning from the models’ inevitable failures.” (…)

But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.

The modern era of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.

It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”

Machine learning

The “expert systems” that had once been the field’s meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there’s a rule broken.

If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.

The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain’s complexity. So what you do instead is start with a machine so simple, it almost doesn’t work: a machine, say, that randomly spits out French words for the English words it’s given.

Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that shot comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French?

It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you’ve entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn’t quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you’ll get a feel for the knobs, and you’ll be able to produce the correct French equivalent of your English sentence.

By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that. (…)
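
The knob-tuning idea described above can be made concrete with a small sketch. The toy corpus, the one-knob-per-word-pair scoring table, the word-by-word alignment and the update rule below are all simplifying assumptions for illustration; this is not the Candide system or any production translator, just the feed-try-jiggle loop in miniature, written in Python.

```python
# A toy "box of knobs" translator: one knob (a score) per (English word, French word) pair.
# Illustrative sketch only -- the corpus, the alignment and the update rule are invented
# assumptions, not the Candide system described in the article.
import random
from collections import defaultdict

# Tiny invented parallel corpus: (English sentence, known French translation).
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
    ("the dog eats",   "le chien mange"),
]

knobs = defaultdict(float)  # knobs[(english_word, french_word)] = current setting
french_vocab = sorted({w for _, fr in corpus for w in fr.split()})

def translate(english_sentence):
    # For each English word, emit the French word whose knob is currently turned up highest.
    return " ".join(max(french_vocab, key=lambda f: knobs[(e, f)])
                    for e in english_sentence.split())

random.seed(0)
for en, fr in corpus:                       # start with essentially random knob settings
    for e in en.split():
        for f in fr.split():
            knobs[(e, f)] = random.random() * 0.01

for epoch in range(20):                     # feed, try, jiggle; repeat
    for en, fr in corpus:
        if translate(en) != fr:             # wrong guess: nudge knobs toward the known translation
            for e, f in zip(en.split(), fr.split()):
                knobs[(e, f)] += 1.0

print(translate("the cat eats"))            # -> "le chat mange"
```

Real systems replace the hand-waved word-by-word alignment and the blunt +1 update with statistical estimation over millions of sentence pairs, but the calibration idea is the same: you never program the rules of French, you only tune knobs until the output matches the known translations.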

Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a “knowledge graph” that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work. (…)

Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called “Why Is J. D. Salinger’s The Catcher in the Rye a Great Novel?” He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn’t like Holden—they thought he was a whiner—Hofstadter explained that “they may not recognize his vulnerability.” You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. “I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.” (…)

“Ars longa, vita brevis,” Hofstadter likes to say. “I just figure that life is short. I work, I don’t try to publicize. I don’t try to fight.”

There’s an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. “Not a soul,” Hofstadter says. “Einstein was completely alone in his belief in the existence of light as particles—for 18 years.

“That must have been very lonely.”

— James Somers, to read the full article click The Man Who Would Teach Machines to Think, The Atlantic, Oct 23 2013

Douglas Hofstadter is an American professor of cognitive science whose research focuses on the sense of “I”, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979, which won both the Pulitzer Prize for general non-fiction and the National Book Award for Science.

See also:

The Mathematical Art of M.C. Escher, Lapidarium notes

Jul 1st, Mon

Why It’s Good To Be Wrong. David Deutsch on Fallibilism


"That human beings can be mistaken in anything they think or do is a proposition known as fallibilism. (…)

The trouble is that error is a subject where issues such as logical paradox, self-reference, and the inherent limits of reason rear their ugly heads in practical situations, and bite.

Paradoxes seem to appear when one considers the implications of one’s own fallibility: A fallibilist cannot claim to be infallible even about fallibilism itself. And so, one is forced to doubt that fallibilism is universally true. Which is the same as wondering whether one might be somehow infallible—at least about some things. For instance, can it be true that absolutely anything that you think is true, no matter how certain you are, might be false?

What? How might we be mistaken that two plus two is four? Or about other matters of pure logic? That stubbing one’s toe hurts? That there is a force of gravity pulling us to earth? Or that, as the philosopher René Descartes argued, “I think, therefore I am”?

When fallibilism starts to seem paradoxical, the mistakes begin. We are inclined to seek foundations—solid ground in the vast quicksand of human opinion—on which one can try to base everything else. Throughout the ages, the false authority of experience and the false reassurance of probability have been mistaken for such foundations: “No, we’re not always right,” your parents tell you, “just usually.” They have been on earth longer and think they have seen this situation before. But since that is an argument for “therefore you should always do as we say,” it is functionally a claim of infallibility after all. Moreover, look more closely: It claims literal infallibility too. Can anyone be infallibly right about the probability that they are right? (…)

The fact is, there’s nothing infallible about “direct experience” (…). Indeed, experience is never direct. It is a sort of virtual reality, created by our brains using sketchy and flawed sensory clues, given substance only by fallible expectations, explanations, and interpretations. Those can easily be more mistaken than the testimony of the passing hobo. If you doubt this, look at the work of psychologists Christopher Chabris and Daniel Simons, and verify by direct experience the fallibility of your own direct experience. Furthermore, the idea that your reminiscences are infallible is also heresy by the very doctrine that you are faithful to.

I’ll tell you what really happened. You witnessed a dress rehearsal. The real ex cathedra ceremony was on the following day. In order not to make the declaration a day early, they substituted for the real text (which was about some arcane theological issue, not gravity) a lorem-ipsum-type placeholder that they deemed so absurd that any serious listener would immediately realize that that’s what it was. 

And indeed, you did realize this; and as a result, you reinterpreted your “direct experience,” which was identical to that of witnessing an ex cathedra declaration, as not being one. Precisely by reasoning that the content of the declaration was absurd, you concluded that you didn’t have to believe it. Which is also what you would have done if you hadn’t believed the infallibility doctrine.

You remain a believer, serious about giving your faith absolute priority over your own “unaided” reason (as reason is called in these contexts). But that very seriousness has forced you to decide first on the substance of the issue, using reason, and only then whether to defer to the infallible authority. This is neither fluke nor paradox. It is simply that if you take ideas seriously, there is no escape, even in dogma and faith, from the obligation to use reason and to give it priority over dogma, faith, and obedience. (…)

It is hard to contain reason within bounds. If you take your faith sufficiently seriously you may realize that it is not only the printers who are fallible in stating the rules for ex cathedra, but also the committee that wrote down those rules. And then that nothing can infallibly tell you what is infallible, nor what is probable. It is precisely because you are fallible and have no infallible access to the infallible authority, no infallible way of interpreting what the authority means, and no infallible means of identifying an infallible authority in the first place, that infallibility cannot help you before reason has had its say.

A related useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith, in fact believe falsehoods. Hence, faith is insufficient for true belief. As the Nobel-Prize-winning biologist Peter Medawar said: “the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

You know that Medawar’s advice holds for all ideas, not just scientific ones, and, by the same argument, for all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love. (…)

This logic of fallibility, discovered and rediscovered from time to time, has had profound salutary effects in the history of ideas. Whenever anything demands blind obedience, its ideology contains a claim of infallibility somewhere; but wherever someone believes seriously enough in that infallibility, they rediscover the need for reason to identify and correctly interpret the infallible source. Thus the sages of ancient Judaism were led, by the assumption of the Bible’s infallibility, to develop their tradition of critical discussion. And in an apparently remote application of the same logic, the British constitutional doctrine of “parliamentary sovereignty” was used by 20th-century judges such as Lord Denning to develop an institution of judicial review similar to that which, in the United States, had grown out of the opposite doctrine of “separation of powers.”

Fallibilism has practical consequences for the methodology and administration of science, and in government, law, education, and every aspect of public life. The philosopher Karl Popper elaborated on many of these. He wrote:

The question about the sources of our knowledge … has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place. (…)

Popper’s answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is another apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas, by eliminating errors from existing ones.

“Our whole problem,” said the physicist John Wheeler, “is to make the mistakes as fast as possible.” This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should “let our theories die in our place,” as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike “botched;” courts should not call every medical tragedy malpractice, even if it’s true that they “shouldn’t have happened” in the sense that lessons can be learned to prevent them from happening again. “We are all alike,” as Popper remarked, “in our infinite ignorance.” And this is a good and hopeful thing, for it allows for a future of unbounded improvement.

Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond “oh dear, I’ll just have to die, then.” (…)

The theory of knowledge is a tightrope that is the only path from A to B, with a long, hard drop for anyone who steps off on one side into “knowledge is impossible, progress is an illusion” or on the other side into “I must be right, or at least probably right.” Indeed, infallibilism and nihilism are twins. Both fail to understand that mistakes are not only inevitable, they are correctable (fallibly). Which is why they both abhor institutions of substantive criticism and error correction, and denigrate rational thought as useless or fraudulent. They both justify the same tyrannies. They both justify each other.

I must now apologize for trying to trick you earlier: All the ideas that I suggested we might know infallibly are in fact falsehoods. “Two plus two” of course isn’t “four” as you’d discover if you wrote “2+2” in an arithmetic test when asked to add two and two. If we were infallible about matters of pure logic, no one would ever fail a logic test either. Stubbing your toe does not always hurt if you are focused on some overriding priority like rescuing a comrade in battle. And as for knowing that “I” exist because I think—note that your knowledge that you think is only a memory of what you did think, a second or so ago, and that can easily be a false memory. (For discussions of some fascinating experiments demonstrating this, see Daniel Dennett’s book Brainstorms.) Moreover, if you think you are Napoleon, the person you think must exist because you think, doesn’t exist.

And the general theory of relativity denies that gravity exerts a force on falling objects. The pope would actually be on firm ground if he were to concur with that ex cathedra. Now, are you going to defer to my authority as a physicist about that? Or decide that modern physics is a sham? Or are you going to decide according to whether that claim really has survived all rational attempts to refute it?”

David Deutsch, a British physicist at the University of Oxford. He is a non-stipendiary Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford, Why It’s Good To Be Wrong, Nautilus, 2013. (Illustration by Gérard DuBois)

See also:

David Deutsch: A new way to explain explanation, TED, 2009
David Deutsch on knowledge as crafted self-similarity
David Deutsch on Artificial Intelligence

Mar 3rd, Sun

Rolf Dobelli: News is to the mind what sugar is to the body


"We humans seem to be natural-born signal hunters, we’re terrible at regulating our intake of information. We’ll consume a ton of noise if we sense we may discover an added ounce of signal. So our instinct is at war with our capacity for making sense.”

Nicholas Carr, A little more signal, a lot more noise, Rough Type, May 30, 2012.

"When people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix. The world becomes Keats’s “waking dream,” as the writer Kevin Kelly puts it.”

Adam Gopnik on The Information and How the Internet gets inside us, 2011

"Our brains are wired to pay attention to visible, large, scandalous, sensational, shocking, peoplerelated, story-formatted, fast changing, loud, graphic onslaughts of stimuli. Our brains have limited attention to spend on more subtle pieces of intelligence that are small, abstract, ambivalent, complex, slow to develop and quiet, much less silent. News organizations systematically exploit this bias. News media outlets, by and large, focus on the highly visible. They display whatever information they can convey with gripping stories and lurid pictures, and they systematically ignore the subtle and insidious, even if that material is more important. News grabs our attention; that’s how its business model works. Even if the advertising model didn’t exist, we would still soak up news pieces because they are easy to digest and superficially quite tasty. The highly visible misleads us. (…)

  • Terrorism is overrated. Chronic stress is underrated.
  • The collapse of Lehman Brothers is overrated. Fiscal irresponsibility is underrated.
  • Astronauts are overrated. Nurses are underrated.
  • Britney Spears is overrated. IPCC reports are underrated.
  • Airplane crashes are overrated. Resistance to antibiotics is underrated.

(…)

Afraid you will miss “something important”? From my experience, if something really important happens, you will hear about it, even if you live in a cocoon that protects you from the news. Friends and colleagues will tell you about relevant events far more reliably than any news organization. They will fill you in with the added benefit of meta-information, since they know your priorities and you know how they think. You will learn far more about really important events and societal shifts by reading about them in specialized journals, in-depth magazines or good books and by talking to the people who know. (…)

The more “news factoids” you digest, the less of the big picture you will understand. (…)

Thinking requires concentration. Concentration requires uninterrupted time. News items are like free-floating radicals that interfere with clear thinking. News pieces are specifically engineered to interrupt you. They are like viruses that steal attention for their own purposes. (…)

This is about the inability to think clearly because you have opened yourself up to the disruptive factoid stream. News makes us shallow thinkers. But it’s worse than that. News severely affects memory. (…)

News is an interruption system. It seizes your attention only to scramble it. Besides a lack of glucose in your blood stream, news distraction is the biggest barricade to clear thinking. (…)

In the words of Professor Michael Merzenich (University of California, San Francisco), a pioneer in the field of neuroplasticity: “We are training our brains to pay attention to the crap.” (…)

Good professional journalists take time with their stories, authenticate their facts and try to think things through. But like any profession, journalism has some incompetent, unfair practitioners who don’t have the time – or the capacity – for deep analysis. You might not be able to tell the difference between a polished professional report and a rushed, glib, paid-by-the-piece article by a writer with an ax to grind. It all looks like news.

My estimate: fewer than 10% of the news stories are original. Less than 1% are truly investigative. And only once every 50 years do journalists uncover a Watergate.

Many reporters cobble together the rest of the news from other people’s reports, common knowledge, shallow thinking and whatever the journalist can find on the internet. Some reporters copy from each other or refer to old pieces, without necessarily catching up with any interim corrections. The copying and the copying of the copies multiply the flaws in the stories and their irrelevance. (…)

Overwhelming evidence indicates that forecasts by journalists and by experts in finance, social development, global conflicts and technology are almost always completely wrong. So, why consume that junk?

Did the newspapers predict World War I, the Great Depression, the sexual revolution, the fall of the Soviet empire, the rise of the Internet, resistance to antibiotics, the fall of Europe’s birth rate or the explosion in depression cases? Maybe you’d find one or two correct predictions in a sea of millions of mistaken ones. Incorrect forecasts are not only useless, they are harmful.

To increase the accuracy of your predictions, cut out the news and roll the dice or, if you are ready for depth, read books and knowledgeable journals to understand the invisible generators that affect our world. (…)

I have now gone without news for a year, so I can see, feel and report the effects of this freedom first hand: less disruption, more time, less anxiety, deeper thinking, more insights. It’s not easy, but it’s worth it.”

Table of Contents:

No 1 – News misleads us systematically
No 2 – News is irrelevant
No 3 – News limits understanding
No 4 – News is toxic to your body
No 5 – News massively increases cognitive errors
No 6 – News inhibits thinking
No 7 – News changes the structure of your brain
No 8 – News is costly
No 9 – News sunders the relationship between reputation and achievement
No 10 – News is produced by journalists
No 11 – Reported facts are sometimes wrong, forecasts always
No 12 – News is manipulative
No 13 – News makes us passive
No 14 – News gives us the illusion of caring
No 15 – News kills creativity

Rolf Dobelli, Swiss novelist, writer, entrepreneur and curator of zurich.minds, to read full essay click Avoid News. Towards a Healthy News Diet (pdf), 2010. (Illustration: Information Overload by taylorboren)

See also:

The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Nicholas Carr on the evolution of communication technology and our compulsive consumption of information
Does Google Make Us Stupid?
Nicholas Carr on what the internet is doing to our brains?
How the Internet Affects Our Memories: Cognitive Consequences of Having Information at Our Fingertips
☞ Dr Paul Howard-Jones, The impact of digital technologies on human wellbeing (pdf), University of Bristol
William Deresiewicz on multitasking and the value of solitude
Information tag on Lapidarium

Jan 27th, Sun

Daniel C. Dennett on an attempt to understand the mind; autonomic neurons, culture and computational architecture


"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
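
For readers who have not met the McCulloch-Pitts logical neuron Dennett refers to, here is a minimal sketch in Python: binary inputs, each excitatory or inhibitory, summed and compared against a firing threshold. The gate wiring below is a standard textbook illustration of the "compute anything" claim, not something from Dennett's talk.

```python
# Minimal sketch of a McCulloch-Pitts "logical neuron": binary inputs, each either
# excitatory (+1) or inhibitory (-1), summed and compared against a firing threshold.
# The AND/OR/NOT wiring is a conventional textbook illustration, not from the transcript.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Elementary logic gates, each realized by a single neuron:
AND = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    mcp_neuron([a],    weights=[-1],   threshold=0)   # inhibitory input

# Networks of such neurons can compute any Boolean function, e.g. XOR:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

The point of Dennett's complaint is visible here: the "neuron" is nothing but a fixed switch, with no agenda, no autonomy and no life of its own.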

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now, we know why it doesn’t work pretty well. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendent of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals even, even more risky than the brains of chimpanzees, and that this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

Culture creates a whole new biosphere

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there’s opportunities that don’t exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at Society for Neuroscience in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s eye view and establish what role, if any, is there for memes or something like memes and what are the other forces that are operating. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned, historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and repair them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this, very little deliberate intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language if you don’t have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

Moving Naturalism Forward" was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you have a question about the economic situation in Greece, for instance, you need to check it out with somebody whose opinion on that we think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so a lot of people have as their experts on matters of science, they have their pastors. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda [pdf] and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: "Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is a very sobering, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

Daniel C. Dennett is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University, The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: "Natural selection is not gene-centric, nor is biology all about genes; our comprehending minds are a result of our fast evolving culture. Words are memes that can be spoken and words are the best example of memes. Words have a genealogy and it’s easier to trace the evolution of a single word than the evolution of a language." (…)

I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate to each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes

Dec
11th
Tue
permalink

Researchers discover surprising complexities in the way the brain makes mental maps

                     image
Spatial location is closely connected to the formation of new memories. Until now, grid cells were thought to be part of a single unified map system. New findings from the Norwegian University of Science and Technology demonstrate that the grid system is in fact composed of a number of independent grid maps, each with unique properties. Each map displays a particular resolution (mesh size), and responds independently to changes in the environment. A system of several distinct grid maps (illustrated on left) can support a large number of unique combinatorial codes used to associate new memories formed with specific spatial information (illustrated on right).

Your brain has at least four different senses of location – and perhaps as many as 10. And each is different, according to new research from the Kavli Institute for Systems Neuroscience, at the Norwegian University of Science and Technology. (…)

The findings, published in the 6 December 2012 issue of Nature, show that rather than just a single sense of location, the brain has a number of “modules” dedicated to self-location. Each module contains its own internal GPS-like mapping system that keeps track of movement, and has other characteristics that also distinguish one from another.

"We have at least four senses of location," says Edvard Moser, director of the Kavli Institute. "Each has its own scale for representing the external environment, ranging from very fine to very coarse. The different modules react differently to changes in the environment. Some may scale the brain’s inner map to the surroundings, others do not. And they operate independently of each other in several ways."

This is also the first time that researchers have been able to show that a part of the brain that does not directly respond to sensory input, called the association cortex, is organized into modules. The research was conducted using rats. (…)

Technical breakthroughs

A rat’s brain is the size of a grape, while the area that keeps track of the sense of location and memory is comparable in size to a small grape seed. This tiny area holds millions of nerve cells.

A research team of six people worked for more than four years to acquire extensive electrophysiological measurements in this seed-sized region of the brain. New measurement techniques and a technical breakthrough made it possible for Hanne Stensola and her colleagues to measure the activity in as many as 186 grid cells of the same rat brain. A grid cell is a specialized cell named for its characteristic of creating hexagonal grids in the brain’s mental map of its surroundings.

“We knew that the ‘grid maps’ in this area of the brain had resolutions covering different scales, but we did not know how independent the scales were of each other," Stensola said. "We then discovered that the maps were organized in four to five modules with different scales, and that each of these modules reacted slightly differently to changes in their environment. This independence can be used by the brain to create new combinations - many combinations - which is a very useful tool for memory formation.”
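A rough back-of-the-envelope illustration of why independent modules buy so much coding capacity (illustrative numbers, not figures from the study): if there are M independent grid modules and each can adopt one of roughly K distinguishable configurations in a given environment, the joint codes multiply across modules:

```latex
% Illustrative only: M (number of independent modules) and K (distinguishable
% configurations per module) are hypothetical placeholders, not measured values.
\[
N_{\text{codes}} = K^{M}, \qquad \text{e.g. } K = 10,\; M = 4 \;\Rightarrow\; N_{\text{codes}} = 10^{4}.
\]
```

Exponential growth in the number of joint patterns is what makes even four or five independently responding maps a plausible substrate for tagging very many distinct memories.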

After analysing the activity of nearly 1000 grid cells, researchers were able to conclude that the brain has not just one way of making an internal map of its location, but several. Perhaps 10 different senses of location.

Perhaps 10 different senses of location

image
The entorhinal cortex is a part of the neocortex that represents space by way of brain cells that have GPS-like properties. Each cell describes the environment as a hexagonal grid mesh, earning them the name ‘grid cells’. The panels show a bird’s-eye view of a rat’s recorded movements (grey trace) in a 2.2x2.2 m box. Each panel shows the activity of one grid cell (blue dots) with a particular map resolution as the animal moved through the environment. Credit: Kavli Institute for Systems Neuroscience, NTNU

Institute director Moser says that while researchers are able to state with confidence that there are at least four different location modules, and have seen clear evidence of a fifth, there may be as many as 10 different modules.

He says, however, that researchers need to conduct more measurements before they will have covered the entire grid-cell area. “At this point we have measured less than half of the area,” he says.

Aside from the time and challenges involved in making these kinds of measurements, there is another good reason why researchers have not yet completed this task. The lower region of the sense of location area, the entorhinal cortex, has a resolution that is so coarse or large that it is virtually impossible to measure it.

"The thinking is that the coordinate points for some of these maps are as much as ten metres apart," explains Moser. "To measure this we would need to have a lab that is quite a lot larger and we would need time to test activity over the entire area. We work with rats, which run around while we make measurements from their brain. Just think how long it would take to record the activity in a rat if it was running back and forth exploring every nook and cranny of a football field. So you can see that we have some challenges here in scaling up our experiments."

New way to organize

Part of what makes the discovery of the grid modules so special is that it completely changes our understanding of how the brain physically organizes abstract functions. Previously, researchers have shown that brain cells in sensory systems that are directly adjacent to each other tend to have the same response pattern. This is how they have been able to create detailed maps of which parts of the sensory brain do what.

The new research shows that a modular organization is also found in the highest parts of the cortex, far away from areas devoted to senses or motor outputs. But these maps are different in the sense that they overlap or infiltrate each other. It is thus not possible to locate the different modules with a microscope, because the cells that work together are intermingled with other modules in the same area.

“The various components of the grid map are not organized side by side,” explains Moser. “The various components overlap. This is the first time a brain function has been shown to be organized in this way at separate scales. We have uncovered a new way for neural network function to be distributed.”

A map and a constant

The researchers were surprised, however, when they started calculating the difference between the scales. They may have discovered an ingenious mathematical coding system, along with a number, a constant. (Anyone who has read or seen “The Hitchhiker’s Guide to the Galaxy” may enjoy this.) The scale for each sense of location is actually 42% larger than the previous one.

“We may not be able to say with certainty that we have found a mathematical constant for the way the brain calculates the scales for each sense of location, but it’s very funny that we have to multiply each measurement by 1.42 to get the next one. That is approximately equal to the square root of the number two,” says Moser.
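Written out as a relation (a restatement of the numbers quoted above, not the paper’s own notation): if s_m denotes the grid spacing of module m, then

```latex
% s_m = grid spacing (scale) of module m; the 1.42 ratio is the figure quoted above.
\[
\frac{s_{m+1}}{s_{m}} \approx 1.42 \approx \sqrt{2}
\quad\Longrightarrow\quad
s_{m+k} \approx 2^{k/2}\, s_{m},
\]
```

so the spacing roughly doubles with every second module.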

Maps are genetically encoded

Moser thinks it is striking that the relationship between the various functional modules is so orderly. He believes this orderliness shows that the way the grid map is organized is genetically built in, and not primarily the result of experience and interaction with the environment.

So why has evolution equipped us with four or more senses of location?

Moser believes the ability to make a mental map of the environment arose very early in evolution. He explains that all species need to navigate, and that some types of memory may have arisen from brain systems that were actually developed for the brain’s sense of location.

“We see that the grid cells that are in each of the modules send signals to the same cells in the hippocampus, which is a very important component of memory,” explains Moser. “This is, in a way, the next step in the line of signals in the brain. In practice this means that the location cells send a different code into the hippocampus at the slightest change in the environment in the form of a new pattern of activity. So every tiny change results in a new combination of activity that can be used to encode a new memory, and, with input from the environment, becomes what we call memories.”

Researchers discover surprising complexities in the way the brain makes mental maps, Medical press, Dec 5, 2012.

The article is a part of doctoral research conducted by Hanne and Tor Stensola, and has been funded through an Advanced Investigator Grant that Edvard Moser was awarded by the European Research Council (ERC).

See also:

☞ Hanne Stensola, Tor Stensola, Trygve Solstad, Kristian Frøland, May-Britt Moser & Edvard I. Moser, The entorhinal grid map is discretized, Nature, 5 Dec 2012.
Mind & brain tag on Lapidarium notes

Aug
11th
Sat
permalink

Is there any redundancy in human memory?


Are there two physical copies of the same memory in the brain, such that if some cells storing a particular memory die, the memory is still not lost?

Yes. “Memories are stored using a “distributed representation,” which means that each memory is stored across thousands of synapses and neurons. And each neuron or synapse is involved in thousands of memories.

So if a single neuron fails, there are still 999 (for example) other neurons collaborating in the representation of that memory. With the failure of each neuron, thousands of memories get imperceptibly weaker, a property called “graceful degradation.”

Some people like to use the metaphor of a hologram. In a hologram, the 3D image is spread across the sheet of glass, and if the glass is broken, the full image can be seen in each separate shard. This is not exactly how memory works in the brain, but it is not a bad metaphor.

In some ways the brain is like a RAID array of disks, except instead of 3 hard disks, there are millions (or billions) of neurons sharing the representation of memories. (…)

Figure: Structural comparison of a RAID disk array and the type of hierarchical distributed memory network used by the brain.

Memory in the brain is resilient against catastrophic failure. Many memories get weaker with each neuron failure, but there is no point at which the failure of “one-too-many neurons” causes a memory to suddenly disappear. And the process of recalling a memory can strengthen it by recruiting more neurons and synapses to its representation. Also, memory in the brain is not perfect. Memory recall is more similar to reconstructing an earlier brain state than retrieving stored data. The recalled memory is never exactly the same as what was stored. And more typically, memories are not recalled but instead put to use. When you turn left at a familiar intersection, you are using knowledge, not recalling it.
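As a toy illustration of this kind of “graceful degradation” (a minimal sketch using a classic Hopfield-style associative memory, not the actual circuitry described above; all parameters below are invented):

```python
# Toy Hopfield-style associative memory: every memory is spread across the whole
# weight matrix, so silencing neurons does not erase any single memory outright.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_memories = 500, 10

# Each "memory" is a random +1/-1 pattern over all neurons.
patterns = rng.choice([-1, 1], size=(n_memories, n_neurons))

# Hebbian weights: every memory contributes a little to every connection.
W = (patterns.T @ patterns) / n_neurons
np.fill_diagonal(W, 0)

def recall(cue, weights, alive, steps=5):
    """Settle the network from a noisy cue, using only the surviving neurons."""
    state = cue * alive
    for _ in range(steps):
        h = weights @ state
        state = np.where(h >= 0, 1, -1) * alive  # dead neurons stay silent
    return state

target = patterns[0]
for frac_dead in (0.0, 0.2, 0.4, 0.6):
    alive = (rng.random(n_neurons) >= frac_dead).astype(int)
    # Start from a cue with about 10% of the bits flipped.
    cue = target * np.where(rng.random(n_neurons) < 0.1, -1, 1)
    out = recall(cue, W, alive)
    overlap = (out * target)[alive == 1].mean()  # 1.0 = perfect recall
    print(f"{frac_dead:.0%} of neurons lost -> recall overlap {overlap:.2f}")
```

The point of the sketch is the failure mode: no single neuron’s loss erases a memory, and recall quality only erodes once a substantial fraction of the population is gone.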

One extreme example of the brain’s resiliency to catastrophic failure is stroke. A stroke is an event (often a burst blood vessel) that kills possibly hundreds of millions of brain cells within a very short time-frame (hours). Even the brain’s enormous redundancy cannot prevent memory loss under these circumstances. And yet the ability of stroke victims to recover language and skills shows that the brain can reorganize itself and somehow recover and quickly relearn knowledge that should have been destroyed.

In Alzheimer’s disease, brain cells die at an accelerating rate. At some point, the reduction in brain cells overtakes the memory redundancy of the brain and memories do become permanently lost.

There is still a lot that is not known about how exactly memories are organized and represented in the brain. Neuron-level mechanisms have been figured out, and quite a lot of information has been gathered about the brain region specialized for coding and managing memory storage (the hippocampus). But the exact structure and coding scheme of memories has yet to be determined.”

Related on Quora:
Are forgotten life events still visible in the brain and if so how?
Why is forgetting different for driving and calculus?
How is it that a chip of a hologram still holds the entire image?
Neuroscience: Are there any connections (axons) in the brain that are redundant?


Paul King, Computational Neuroscientist, currently a visiting scholar at the Redwood Center for Theoretical Neuroscience at UC Berkeley, Is there any redundancy in human memory?, Quora, Aug 2012. (Illustration source)

See also:

Memory tag on Lapidarium notes

Jul
21st
Sat
permalink

What Neuroscience Tells Us About Morality: 'Morality is a form of decision-making, and is based on emotions, not logic'


“Morality is not the product of a mythical pure reason divorced from natural selection and the neural wiring that motivates the animal to sociability. It emerges from the human brain and its responses to real human needs, desires, and social experience; it depends on innate emotional responses, on reward circuitry that allows pleasure and fear to be associated with certain conditions, on cortical networks, hormones and neuropeptides. Its cognitive underpinnings owe more to case-based reasoning than to conformity to rules.”

Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in John Bickle, The Oxford Handbook of Philosophy and Neuroscience, Chapter 16 "Inference to the best decision", Oxford Handbooks, 2009, p.419.

"Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind."

Patricia Smith Churchland, introductory message at her homepage at the University of California, San Diego.

"Morality is a form of decision-making, and is based on emotions, not logic."

Jonah Lehrer, cited in delancey place, 2009

"Philosophers must take account of neuroscience in their investigations.

While [Patricia S.] Churchland's intellectual opponents over the years have suggested that you can understand the “software” of thinking, independently of the “hardware”—the brain structure and neuronal firings—that produced it, she has responded that this metaphor doesn't work with the brain: Hardware and software are intertwined to such an extent that all philosophy must be “neurophilosophy.” There’s no other way.

Churchland, professor emerita of philosophy at the University of California at San Diego, has been best known for her work on the nature of consciousness. But now, with a new book, Braintrust: What Neuroscience Tells Us About Morality (Princeton University Press), she is taking her perspective into fresh terrain: ethics. And the story she tells about morality is, as you’d expect, heavily biological, emphasizing the role of the peptide oxytocin, as well as related neurochemicals.

Oxytocin’s primary purpose appears to be in solidifying the bond between mother and infant, but Churchland argues—drawing on the work of biologists—that there are significant spillover effects: Bonds of empathy lubricated by oxytocin expand to include, first, more distant kin and then other members of one’s in-group. (Another neurochemical, arginine vasopressin, plays a related role, as do endogenous opiates, which reinforce the appeal of cooperation by making it feel good.)

The biological picture contains other elements, of course, notably our large prefrontal cortexes, which help us to take stock of situations in ways that lower animals, driven by “fight or flight” impulses, cannot. But oxytocin and its cousin-compounds ground the human capacity for empathy. (When she learned of oxytocin’s power, Churchland writes in Braintrust, she thought: “This, perhaps, Hume might accept as the germ of ‘moral sentiment.’”)

From there, culture and society begin to make their presence felt, shaping larger moral systems: tit-for-tat retaliation helps keep freeloaders and abusers of empathic understanding in line. Adults pass along the rules for acceptable behavior—which is not to say “just” behavior, in any transcendent sense—to their children. Institutional structures arise to enforce norms among strangers within a culture, who can’t be expected to automatically trust each other.

These rules and institutions, crucially, will vary from place to place, and over time. “Some cultures accept infanticide for the disabled or unwanted,” she writes, without judgment. “Others consider it morally abhorrent; some consider a mouthful of the killed enemy’s flesh a requirement for a courageous warrior, others consider it barbaric.”

Hers is a bottom-up, biological story, but, in her telling, it also has implications for ethical theory. Morality turns out to be not a quest for overarching principles but rather a process and practice not very different from negotiating our way through day-to-day social life. Brain scans, she points out, show little to no difference between how the brain works when solving social problems and how it works when solving ethical dilemmas. (…)

[Churchland’s biological account meshes], she thinks, with Aristotle’s argument that morality is not about rule-making but instead about the cultivation of moral sentiment through experience, training, and the following of role models. The biological story also confirms, she thinks, David Hume’s assertion that reason and the emotions cannot be disentangled. This view stands in sharp contrast to those philosophers who argue that instinctual reactions must be scrutinized by reason. The villains of her books are philosophical system-builders—whether that means Jeremy Bentham, with his ideas about maximizing aggregate utility (“the greatest good for the greatest number”), or Immanuel Kant, with his categorical imperatives (never lie!), or John Rawls, erector of A Theory of Justice.

Churchland thinks the search for what she invariably calls “exceptionless rules” has deformed modern moral philosophy. “There have been a lot of interesting attempts, and interesting insights, but the target is like perpetual youth or a perpetual-motion machine. You’re not going to find an exceptionless rule,” she says. “What seems more likely is that there is a basic platform that people share and that things shape themselves based on that platform, and based on ecology, and on certain needs and certain traditions.”

The upshot of that approach? “Sometimes there isn’t an answer in the moral domain, and sometimes we have to agree to disagree, and come together and arrive at a good solution about what we will live with.”

Owen Flanagan Jr., a professor of philosophy and neurobiology at Duke University and a friend of Churchland’s, adds, “There’s a long tradition in philosophy that morality is based on rule-following, or on intuitions that only specially positioned people can have. One of her main points is that that is just a completely wrong picture of the genealogical or descriptive story. The first thing to do is to emphasize our continuity with the animals.” In fact, Churchland believes that primates and even some birds have a moral sense, as she defines it, because they, too, are social problem-solvers.

Recognizing our continuity with a specific species of animal was a turning point in her thinking about morality, in recognizing that it could be tied to the hard and fast. “It all changed when I learned about the prairie voles,” she says—surely not a phrase John Rawls ever uttered.

She told the story at the natural-history museum, in late March. Montane voles and prairie voles are so similar “that naifs like me can’t tell them apart,” she told a standing-room-only audience (younger and hipper than the museum’s usual patrons—the word “neuroscience” these days is like catnip). But prairie voles mate for life, and montane voles do not. Among prairie voles, the males not only share parenting duties, they will even lick and nurture pups that aren’t their own. By contrast, male montane voles do not actively parent even their own offspring. What accounts for the difference? Researchers have found that the prairie voles, the sociable ones, have greater numbers of oxytocin receptors in certain regions of the brain. (And prairie voles that have had their oxytocin receptors blocked will not pair-bond.)

"As a philosopher, I was stunned," Churchland said, archly. "I thought that monogamous pair-bonding was something one determined for oneself, with a high level of consideration and maybe some Kantian reasoning thrown in. It turns out it is mediated by biology in a very real way.”

The biologist Sue Carter, now at the University of Illinois at Chicago, did some of the seminal work on voles, but oxytocin research on humans is now extensive as well. In a study of subjects playing a lab-based cooperative game in which the greatest benefits to two players would come if the first (the “investor”) gave a significant amount of money to the second (the “trustee”), subjects who had oxytocin sprayed into their noses donated more than twice as often as a control group, giving nearly one-fifth more each time.

Paul Zak, an economist at Claremont Graduate University, was an author of that study, as well as others that Churchland cites. He is working on a book called “The Moral Molecule” and describes himself as “in exactly the same camp” as Churchland.

“Oxytocin works on the level of emotion,” he says. “You just get the feeling of right and wrong. It is less precise than a Kantian system, but it’s consistent with our evolved physiology as social creatures.”

The City University of New York Graduate Center philosopher Jesse Prinz, who appeared with Churchland at a Columbia University event the night after her museum lecture, has mostly praise for Churchland’s latest offering. “If you look at a lot of the work that’s been done on scientific approaches to morality—books written for a lay audience—it’s been about evolutionary psychology. And what we get again and again is a story about the importance of evolved tendencies to be altruistic. That’s a report on a particular pattern of behavior, and an evolutionary story to explain the behavior. But it’s not an account of the underlying mechanism. The idea that science has moved to a point where we can see two animals working together toward a collective end and know the brain mechanism that allows that is an extraordinary achievement.”

Nevertheless, he says, how to move from the possibility of collective action to “the specific human institution of moral rules is a bit of connective tissue that she isn’t giving us.”

Indeed, that’s one of the most striking aspects of Braintrust. After Churchland establishes the existence of a platform for moral decision-making, she describes the process through which moral decisions come to be made, but she says little about their content—why one path might be better than another. She offers the following description of a typical “moral” scenario. A farmer sees a deer breaching his neighbor’s fence and eating his apples while the neighbor is away. The farmer will not consult a Kantian rule book before deciding whether to help, she writes, but instead will weigh an array of factors: Would I want my neighbor to help me? Does my culture find such assistance praiseworthy or condescending? Am I faced with any pressing emergencies on my own farm? Churchland describes this process of moral decision-making as being driven by “constraint satisfaction.”

"What exactly constraint satisfaction is in neurobiological terms we do not yet understand,” she writes, “but roughly speaking it involves various factors with various weights and probabilities interacting so as to produce a suitable solution to a question.”

"Various" factors with "various" weights? Is that not a little vague? But Duke’s Owen Flanagan Jr. defends this highly pragmatic view of morality. "Where we get a lot of pushback from philosophers is that they’ll say, ‘If you go this naturalistic route that Flanagan and Churchland go, then you make ethics merely a theory of prudence.’ And the answer is, Yeah, you kind of do that. Morality doesn’t become any different than deciding what kind of bridge to build across a river. The reason we both think it makes sense is that the other stories”—that morality comes from God, or from philosophical intuition—”are just so implausible.”

Flanagan also thinks Churchland’s approach leads to a “more democratic” morality. "It’s ordinary people discussing the best thing to do in a given situation, given all the best information available at the moment." Churchland herself often underscores that democratic impulse, drawing on her own biography. She grew up on a farm, in the Okanagan Valley, in British Columbia. Speaking of her onetime neighbors, she says: "I got as much wisdom from some of those old farmers as I ever got from a seminar on moral philosophy.”

If building a bridge is the topic up for discussion, however, one can assume that most people think getting across the water is a sound idea. Yet mainstream philosophers object that such a sense of shared purpose cannot always be assumed in moral questions—and that therefore the analogy fails. (…)

[The Oxford philosopher Guy] Kahane says the complexity of human life demands a more intense and systematic analysis of moral questions than the average citizen might be capable of, at least if she’s limited to the basic tool kit of social skills.

Peter Railton, a philosophy professor at the University of Michigan at Ann Arbor, agrees. Our intuitions about how to get along with other people may have been shaped by our interactions within small groups (and between small groups). But we don’t live in small groups anymore, so we need some procedures through which we leverage our social skills into uncharted areas—and that is what the traditional academic philosophers, whom Churchland mostly rejects, work on. What are our obligations to future generations (concerning climate change, say)? What do we owe poor people on the other side of the globe (whom we might never have heard of, in our evolutionary past)?

For a more rudimentary example, consider that evolution quite likely trained us to treat “out groups” as our enemy. Philosophical argument, Railton says, can give reasons why members of the out-group are not, in fact, the malign and unusual creatures that we might instinctively think they are; we can thereby expand our circle of empathy.

Churchland’s response is that someone is indeed likely to have the insight that constant war against the out-group hurts both sides’ interests, but she thinks a politician, an economist, or a farmer-citizen is as likely to have that insight as a professional philosopher. (…)

But isn’t she, right there, sneaking in some moral principles that have nothing to do with oxytocin, namely the primacy of liberty over equality? In our interviews, she described [Peter] Singer’s worldview as, in an important sense, unnatural. Applying the same standard to distant foreigners as we do to our own kith and kin runs counter to our most fundamental biological impulses.

But Oxford’s Kahane offers a counterargument: “‘Are humans capable of utilitarianism?’ is not a question that is answered by neuroscience,” he says. “We just need to test if people are able to live like that. Science may explain whether it is common for us to do, but that’s very different from saying what our limits are.”

Indeed, Peter Singer lives (more or less) the way he preaches, and chapters of an organization called Giving What We Can, whose members pledge to give a large portion of their earnings to charity, have popped up on several campuses. “If I can prevent hundreds of people from dying while still having the things that make life meaningful to me, that strikes me as a good idea that doesn’t go against ‘paradigmatically good sense’ or anything,” says Nick Beckstead, a fourth-year graduate student in philosophy and a founder of the group’s Rutgers chapter.

Another target in Churchland’s book is Jonathan Haidt, the University of Virginia psychologist who thinks he has identified several universal “foundations” of moral thought: protection of society’s vulnerable; fairness; loyalty to the in-group; respect for authority; and the importance of purity (a sanitary concern that evolves into the cultural ideal of sanctity). That strikes her as a nice list, but no more—a random collection of moral qualities that isn’t at all rooted in biology. During her museum talk, she described Haidt’s theory as a classic just-so story. “Maybe in the 70s, when evolutionary psychology was just becoming a thing, you could get away with saying”—here she adopted a flighty, sing-song voice—’It could have been, out there on the veldt, in Africa, 250,000 years ago that these were traits that were selected,’” she said. “But today you need evidence, actually.” (…)

The element of cultural relativism also remains somewhat mysterious in Churchland’s writings on morality. In some ways, her project dovetails with that of Sam Harris, the “New Atheist” (and neuroscience Ph.D.) who believes reason and neuroscience can replace woolly armchair philosophy and religion as guides to morality. But her defense of some practices of primitive tribes, including infanticide (in the context of scarcity)—as well as the seizing of enemy women, in raids, to keep up the stock of mates—as “moral” within their own context, seems the opposite of his approach.

I reminded Churchland, who has served on panels with Harris, that he likes to put academics on the spot by asking if they think such practices as the early 19th-century Hindu tradition of burning widows on their husbands’ funeral pyres was objectively wrong.

So did she think so? First, she got irritated: “I don’t know why you’re asking that.” But, yes, she finally said, she does think that practice objectively wrong. “But frankly I don’t know enough about their values, and why they have that tradition, and I’m betting that Sam doesn’t either.”

"The example I like to use," she said, "rather than using an example from some other culture and just laughing at it, is the example from our own country, where it seems to me that the right to buy assault weapons really does not work for the well-being of most people. And I think that’s an objective matter."

At times, Churchland seems just to want to retreat from moral philosophical debate back to the pure science. “Really,” she said, “what I’m interested in is the biological platform. Then it’s an open question how we attack more complex problems of social life.”

— Christopher Shea writing about Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in Rule Breaker, The Chronicle of Higher Education, June 12, 2011. (Illustration: attributed to xkcd)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Sam Harris on the ‘selfish gene’ and moral behavior
Sam Harris on the moral formula: How facts inform our ethics
Morality tag on Lapidarium

Jun
20th
Wed
permalink

The crayola-fication of the world: How we gave colors names, and it messed with our brains


"We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language (…) all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar”

Benjamin Whorf, Science and linguistics, first published in 1940 in MIT Technology Review [See also: linguistic relativity]

"The implication is that language may affect how we see the world. Somehow, the linguistic distinction between blue and green may heighten the perceived difference between them. (…)

If you have a word to distinguish two colors, does that make you any better at telling them apart? More generally, does the linguistic baggage that we carry affect how we perceive the world? This study was designed to address Whorf’s idea head on.

As it happens, Whorf was right. Or rather, he was half right.

The researchers found that there is a real, measurable difference in how we perform on these two tasks. In general, it takes less time to identify that odd blue square compared to the odd green. This makes sense to anyone who’s ever tried looking for a tennis ball in the grass. It’s not that hard, but I’d rather the ball be blue. In one case you are jumping categories (blue versus green), and in the other, staying within a category (green versus green).

However, and this is where things start to get a bit odd, this result only holds if the differently colored square was in the right half of the circle. If it was in the left half (…), then there’s no difference in reaction times – it takes just as long to spot the odd blue as the odd green.  It seems that color categories only matter in the right half of your visual field! (…)

It’s easier to tell apart colors with different names, but only if they are to your right. Keep in mind that this is a very subtle effect, the difference in reaction time is a few hundredths of a second.
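For concreteness, here is a toy mock-up of the trial structure being described (a sketch under simplifying assumptions of my own, not the researchers’ stimulus code): each trial shows a ring of twelve squares with one odd-coloured square, the odd colour is either from the same named category or a different one, and its position determines whether it falls in the left or right visual field.

```python
# Toy mock-up of the lateralized visual-search paradigm (illustrative only).
import random

POSITIONS = list(range(12))          # 12 squares arranged in a circle
CATEGORIES = {"green_A": "green", "green_B": "green", "blue": "blue"}

def make_trial(cross_category: bool):
    """Build one trial: a ring of identical squares with a single odd one."""
    base = "green_A"
    odd = "blue" if cross_category else "green_B"
    odd_pos = random.choice(POSITIONS)
    ring = [base] * 12
    ring[odd_pos] = odd
    # Labelling convention chosen here for illustration: positions 0-5 sit in
    # the right half of the display (left hemisphere), 6-11 in the left half.
    visual_field = "right" if odd_pos < 6 else "left"
    same_name = CATEGORIES[base] == CATEGORIES[odd]
    return {"ring": ring, "odd_pos": odd_pos,
            "visual_field": visual_field, "same_colour_name": same_name}

if __name__ == "__main__":
    for _ in range(3):
        print(make_trial(cross_category=random.random() < 0.5))
```

The reported finding is then a statement about reaction times across these conditions: the cross-category trials are detected a few hundredths of a second faster, but only when the odd square falls in the right visual field, the half seen first by the left, language-dominant hemisphere.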

So what’s causing this lopsidedness?  Well, if you know something about how the brain works, you might have already guessed. The crucial point is that everything that we see in the right half of our vision is processed in the left hemisphere of our brain, and everything we see in the left half is processed by the right hemisphere. And for most of us, the left brain is stronger at processing language. So perhaps the language savvy half of our brain is helping us out.


It’s not just English speakers that show this asymmetry. Koreans are familiar with the colors yeondu and chorok. An English speaker would call them both green (yeondu perhaps being a more yellowish green). But in Korean it’s not a matter of shade, they are both basic colors. There is no word for green that includes both yeondu and chorok.

To the left of the dotted line is yeondu, and to the right chorok. Is it still as easy to spot the odd square in the circle?

And so imagine taking the same color ID test, but this time with yeondu and chorok instead of blue and green. A group of researchers ran this experiment. They discovered that among those who were the fastest at identifying the odd color, English speakers showed no left brain / right brain distinction, whereas Korean speakers did. It’s plausible that their left brain was attuned to the distinction between yeondu and chorok.

But how do we know that language is the key here? Back to the previous study. The researchers repeated the color circle experiment, but this time threw in a verbal distraction. The subjects were asked to memorize a word before each color test. The idea was to keep their language circuits distracted. And at the same time, other subjects were shown an image to memorize, not a word. In this case, it’s a visual distraction, and the language part of the brain needn’t be disturbed.

They found that when you’re verbally distracted, it suddenly becomes harder to separate blue from green (you’re slower at straddling color categories). In fact the results showed that people found this more difficult than separating two shades of green. However, if the distraction is visual, not verbal, things are different. It’s easy to spot the blue among green, so you’re faster at straddling categories.

All of this is only true for your left brain. Meanwhile, your right brain is rather oblivious to these categories (until, of course, the left brain bothers to inform it). The conclusion is that language is somehow enhancing your left brain’s ability to discern different colors with different names. Cultural forces alter our perception in ever so subtle a way, by gently tugging our visual leanings in different directions. Oddly enough, Whorf was right, but only when it comes to half your brain.

Imagine a world without color names. You lived in such a world once, when you were an infant. Do you remember what it was like? Anna Franklin is a psychologist who is particularly interested in where color categories come from. She studies color recognition in infants, as a window into how the brain organizes color.

Here she is discussing her work in this incredible clip from a BBC Horizon documentary called ‘Do you see what I see?‘. (…) It starts off with infants, and then cuts to the Himba tribe who have a highly unusual color naming system. You’ll see them taking the color wheel test, with very surprising results.

Surprisingly, many children take a remarkably long time to learn their color names. By the time they can name dozens of objects, they still struggle with basic colors. A two year old may know that a banana is yellow or an apple is red, but if you show them a blue cup, odds are even that they’ll call it red. And this confusion can persist even after encountering hundreds of examples, until as late as the age of four. There have been studies that show that very young sighted children are as likely to identify a color correctly as blind children of the same age. They rely on their experience, rather than recognize the color outright. (…)

The big question is when children learn their color words, does their perception of the world change? Anna Franklin (who we met in the video above) and colleagues took on this question. Working with toddlers aged two to four, they split them into two groups. There were the namers, who could reliably distinguish blue from green, and the politely-named learners, who couldn’t. The researchers repeated the color circle experiment on these children. Rather than have them press a button (probably not a good idea), they tracked the infants’ eyes to see how long it took them to spot the odd square. (…)

As toddlers learn the names of colors, a remarkable transformation is taking place inside their heads. Before they learn their color names, they are better at distinguishing color categories in their right brain (Left Visual Field). In a sense, their right brain understands the difference between blue and green, even before they have the words for it. But once they acquire words for blue and green, this ability jumps over to the left brain (Right Visual Field).

Think about what that means. As infant brains are rewiring themselves to absorb our visual language, the seat of categorical processing jumps hemispheres from the right brain to the left. And it stays there throughout adulthood. Their brains are furiously re-categorizing the world, until mysteriously, something finally clicks into place. So the next time you see a toddler struggling with their colors, don’t be like Darwin, and cut them some slack. They’re going through a lot.”

Aatish Bhatia, Ph.D. at Rutgers University, The crayola-fication of the world: How we gave colors names, and it messed with our brains (part II), Empirical Zeal, June 11, 2012. (Illustration by Scott Campbell).

See also:

☞ Regier, T., & Kay, P. (2009). Language, thought, and color: Whorf was half right, Trends in Cognitive Sciences, 13 (10), 439-446
☞ Gilbert AL, Regier T, Kay P, & Ivry RB (2006), Whorf hypothesis is supported in the right visual field but not the left, Proceedings of the National Academy of Sciences of the United States of America, 103 (2), 489-94
Aatish Bhatia, The crayola-fication of the world: How we gave colors names, and it messed with our brains (part I)

"Why is the color getting lost in translation? This visual conundrum has its roots in the history of language.  (…) What really is a color? Just like the crayons, we’re taking something that has no natural boundaries – the frequencies of visible light – and dividing into convenient packages that we give a name. (…) Languages have differing numbers of color words, ranging from two to about eleven. Yet after looking at 98 different languages, they saw a pattern. It was a pretty radical idea, that there is a certain fixed order in which these color names arise. This was a common path that languages seem to follow, a road towards increasing visual diversity. (…)

Cultures are quite different in how their words paint the world. (…) For the 110 cultures, you can see how many basic words they use for colors. To the Dani people who live in the highlands of New Guinea, objects come in just two shades. There’s mili for the cooler shades, from blues and greens to black, and mola for the lighter shades, like reds, yellows and white. Some languages have just three basic colors, others have 4, 5, 6, and so on. (…)

If you were a mantis shrimp, your rainbow would be unimaginably rich, with thousands, maybe tens of thousands of colors that blend together, stretching from deep reds all the way to the ultraviolet. To a mantis shrimp, our visual world is unbearably dull. (Another Radiolab plug: in their episode on Color, they use a choir to convey this idea through sound. A visual spectrum becomes a musical one. It’s one of those little touches that makes this show genius.)”

Color words in different languages, Fathom, Nov 8, 2012.

May
17th
Thu
permalink

The Self Illusion: How the Brain Creates Identity


'The Self'

"For the majority of us the self is a very compulsive experience. I happen to think it’s an illusion and certainly the neuroscience seems to support that contention. Simply from the logical positions that it’s very difficult to, without avoiding some degree of infinite regress, to say a starting point, the trail of thought, just the fractionation of the mind, when we see this happening in neurological conditions. The famous split-brain studies showing that actually we’re not integrated entities inside our head, rather we’re the output of a multitude of unconscious processes.

I happen to think the self is a narrative, and I use the self and the division that was drawn by William James, which is the “I” (the experience of conscious self) and the “me” (which is personal identity, how you would describe yourself in terms of where are you from and everything that makes you up in your predilections and your wishes for the future). Both the “I”, who is sentient of the “me”, and the “me”, which is a story of who you are, I think are stories. They’re constructs and narratives. I mean that in a sense that a story is a reduction or at least it’s a coherent framework that has some causal kind of coherence.

When I go out and give public lectures I like to illustrate the weaknesses of the “I” by using visual illusions as the most common examples. But there are other kinds of illusions that you can introduce which just reveal to people how their conscious experience is actually really just a fraction of what’s really going on. It certainly is not a true reflection of all the mechanisms that are generating it. Visual illusions are very obvious in that. The thing about the visual illusion effects is that even when they’re explained to you, you can’t help but see them, so that’s interesting. You can’t divorce yourself from the mechanisms that are creating the illusion and the mind that’s experiencing the illusion.

The sense of personal identity, this is where we’ve been doing experimental work showing the importance that we place upon episodic memories, autobiographical memories. In our duplication studies for example, children are quite willing to accept that you could copy a hamster with all its physical properties that you can’t necessarily see, but what you can’t copy very easily are the episodic memories that one hamster has had.

This actually resonates with the ideas of John Locke, the philosopher, who also argued that personal identity was really dependent on the autobiographical or episodic memories, and you are the sum of your memories, which, of course, is something that fractionates and fragments in various forms of dementia. As the person loses the capacity to retrieve memories, or these memories become distorted, then the identity of the person, the personality, can be changed, amongst other things. But certainly the memories are very important.

As we all know, memory is notoriously fallible. It’s not cast in stone. It’s not something that is stable. It’s constantly reshaping itself. So the fact that we have a multitude of unconscious processes which are generating this coherence of consciousness, which is the I experience, and the truth that our memories are very selective and ultimately corruptible, we tend to remember things which fit with our general characterization of what our self is. We tend to ignore all the information that is inconsistent. We have all these attribution biases. We have cognitive dissonance. The very thing psychology keeps telling us, that we have all these unconscious mechanisms that reframe information, to fit with a coherent story, then both the “I” and the “me”, to all intents and purposes, are generated narratives.

The illusions I talk about often are this sense that there is an integrated individual, with a veridical notion of past. And there’s nothing at the center. We’re the product of the emergent property, I would argue, of the multitude of these processes that generate us.       

I use the word illusion as opposed to delusion. Delusion implies mental illness, to some extent, and illusion, we’re quite happy to accept that we’re experiencing illusions, and for me the word illusion really does mean that it’s an experience that is not what it seems. I’m not denying that there is an experience. We all have this experience, and what’s more, you can’t escape it easily. I think it’s more acceptable to call it an illusion whereas there’s a derogatory nature of calling something a delusion. I suspect there’s probably a technical difference which has to do with mental illness, but no, I think we all, perfectly normally, experience this illusion.

Oliver Sacks has famously written about various case studies of patients which seem so bizarre, people who have various forms of perceptual anomalies, they mistake their wife for a hat, or there are patients who can’t help but copy everything they see. I think that in many instances, because the self is so core to our normal behavior having an understanding that self is this constructive process, I think if this was something that clinicians were familiar with, then I think that would make a lot of sense.

Neuroethics

In fact, it’s not only in clinical practice, I think in a lot of things. I think neuroethics is a very interesting field. I’ve got another colleague, David Eagleman, he’s very interested in these ideas. The culpability, responsibility. We premise our legal systems on this notion there is an individual who is to be held accountable. Now, I’m not suggesting that we abandon that, and I’m not sure what you would put in its place, but I think we can all recognize that there are certain situations where we find it very difficult to attribute blame to someone. For example, famously, Charles Whitman, the Texan sniper, when they had the autopsy, they discovered a very sizeable tumor in a region of the brain which could have very much influenced his ability to control his rage. I’m not suggesting every mass murderer has inoperable tumors in their brain, but it’s conceivable, with our increasing knowledge of how the brain operates and our ability to understand it, that there will be more situations where the lawyers will be looking to put the blame on some biological abnormality.

Where is the line to be drawn? I think that’s a very tough one to deal with. It’s a problem that’s not going to go away. It’s something that we’re going to continually face as we start to learn more about the genetics of aggression.

There’s a lot of interest in this thing called the warrior gene. To what extent is this a gene which predisposes you to violence? Or do you need the interaction between the gene and the abusive childhood in order to get this kind of profile? So it’s not just clinicians, it’s actually just about every realm of human activity where you posit the existence of a self and individuals, and responsibility. Then it will reframe the way you think about things. Just the way that we heap blame and praise, the flip side of blaming people is that we praise individuals. But it could be, in a sense, a multitude of factors that have led them to be successful. I think that it’s a pervasive notion. Whether or not we actually change the way we do anything, I’m not so sure, because I think it would be really hard to live our lives dealing with non-individuals, trying to deal with multitude and the history that everyone brings to the table. There’s a good reason why we have this experience of the self. It’s a very sort of succinct and economical way of interacting with each other. We deal with individuals. We fall in love with individuals, not multitudes of past experiences and aspects of hidden agendas, we just pick them out. (…)

The objects are part of the extended sense of self

I keep tying this back to my issues about why certain objects are overvalued, and I happen to believe, like James again, that objects are part of the extended sense of self. We surround ourselves with objects. We place a lot of value on objects that we think are representative of our self.  (…)

We’re the only species on this planet that invests a lot of time and value in our objects, and this is something that has been with us for a very, very long time.

Think of some of the early artifacts. The difficulty of making these artifacts, the time invested in them, means that this goes back to a very early point in our civilization, or before civilization. I think the earliest pieces are probably about 90,000 years old. There are certainly older things that are tools, but pieces of artwork go back about 90,000 years. So it’s been with us a long time. And yes, some of them are obviously sacred objects, with power or religious purposes and so forth. But outside of that, there’s still this sense of having materials or things that we value, and that intrigues me in so many ways. And I don’t think it’s necessarily universal either. It’s been around a lot, but the endowment effect, for example, is not found everywhere. There’s some intriguing work coming out of Africa.

The endowment effect is this rather intriguing idea that we will spontaneously overvalue an object as soon as we believe it’s in our possession. We don’t actually have to have it physically; just bidding on something, as soon as you make your connection to an object, you value it more. You’ll actually remember more about it: you’ll remember objects which you think are in your possession better than those belonging to someone else. The object acquires a whole sense of attribution and value, which is one of the reasons why people never get the asking price for the things that they’re trying to sell; they always think their objects are worth more than other people are willing to pay for them.

The first experimental demonstration, by Richard Thaler and Danny Kahneman in the early days of behavioral economics, was that if you just give people, students, coffee cups and then ask them to sell them, they always ask more than what someone’s willing to pay. It turns out it’s not just coffee cups; it’s wine, it’s chocolate, it’s anything, basically. There’s been quite a bit of work done on the endowment effect now. As I say, it’s been looked at in different species, and at the brain mechanisms: having to sell something at a lower price, like loss aversion, is seen as quite painful and triggers the same pain centers, if you think you’re going to lose out on a deal.

What is it about objects that gives us this self-evaluated sense? Well, I think James spoke of this; again, William James commented on the way that we use objects to extend our self. Russell Belk, a marketing psychologist, has also talked about the extended self in terms of objects. As I say, this is something that I think marketers know, in that they create certain quality brands that are perceived to signal to others how good your social status is.

It’s something in us, but it may not be universal, because there are some recent reports of nomadic tribes in central Africa who don’t seem to have this sense of ownership. It might be a reflection more of the fact that a lot of this work has been done in the West, where we’re very individualistic, and of course individualism almost creates a lot of endowment ideas and certainly supports the endowment and materialism that we see. But this is an area I’d like to do more work on, because we have not found any evidence of the endowment effect in children below five or six years of age. I’m interested: is this something that just emerges spontaneously? I suspect not. I suspect this is something that culture is definitely shaping. That’s my hunch, so that’s an empirical question I need to pick apart.

The irrational superstitious behaviors

Another line of research I’ve been working on in the past five years … this was a little bit like putting the cart before the horse: I put forward an idea that wasn’t entirely original. It was a combination of ideas of others, most notably Pascal Boyer. Paul Bloom, to some extent, had been thinking something similar. A bunch of us were interested in why religion was around. I didn’t want to specifically focus on religion. I wanted to get to the more general point about belief, because it was my hunch that even a lot of atheists or self-stated atheists or agnostics still nevertheless entertained beliefs which were pretty irrational. I wasn’t meaning irrational in a kind of behavioral economics type of way. I meant irrational in that there were these implicit views that would violate the natural laws as we thought about them. Violations of the natural laws I see as being supernatural. That’s what makes them supernatural. I felt that this was an area worth looking at. It had been looked at 50, 60 years ago, very much in the behaviorist association tradition.

B. F. Skinner famously wrote a paper on the superstitious behavior of pigeons, and he argued that if you simply set up a reinforcement schedule at a random kind of interval, pigeons will adopt typical patterns of behavior that they think are somehow related to the reward, and that you could thereby shape irrational superstitious behaviors. Now that work has turned out to be a bit dubious, and I’m not sure it has stood the test of time. But in terms of people’s rituals and routines, it’s quite clear, and I know them in myself. There are these things that we do which are familiar, and we get a little bit irritated if we don’t get to do them, so most of us do entertain some degree of superstitious behavior.

At the time there was a lot of interest in religion and a lot of hoo-ha about The God Delusion, and I felt that maybe we just needed to redress this idea that it’s all to do with indoctrination, because I couldn’t believe the whole edifice of this kind of belief system was purely indoctrination. I’m not saying there’s no indoctrination, and clearly religions are culturally transmitted. You’re not born to be Jewish or born to be Christian. But what I think religions do is capitalize on a lot of inclinations that children have. Then I embarked on a series of studies, and my particular interest was this idea of essentialism and sacred objects and moral contamination.

We took a lot of the work that Paul Rozin had done, talking about things like killers’ cardigans, and we started to see if there were any empirical measures of transfer. For example, would you find yourself wanting to wash your hands more? Would you find priming effects for words related to good and evil, based on whether you had touched the object or not? For me there had to be this issue of physical contact. It struck me that this was why it wasn’t a pure association mechanism. It was actually something to do with the belief, a naïve belief that there is some biological entity through which moral contamination can somehow transfer.

We started to look, actually not at children now, but at adults, because doing this sort of work with children is very difficult and probably somewhat controversial. But the whole area of research is premised on this idea that there are intuitive ways of seeing the world. Sometimes this is referred to as System One and System Two, or automatic and controlled. It reappears in a variety of psychological contexts. I just think about it as these unconscious, rapid systems which are triggered automatically. I think their origins are in children. Whilst you can educate people with a kind of slower System Two, if you like, you never eradicate the intuitive ways of seeing the world, because they were never taught in the first place. They’re always there. I suppose if you want to ask me whether there is any kind of theory I hold that I haven’t yet proven, it’s this: I don’t think you ever throw away any belief system or any ideas that have been derived through these unconscious intuitive processes. You can supersede them, you can overwrite them, but they never go away, and they will reemerge under the right contexts. If you put people through stressful situations or you overload them, you can see the reemergence of these kinds of ways of thinking. The empirical evidence seems to be supporting that. They’ve got wrinkles in their brains. They’re never going to go away. You can try and override them, but they’re always there and they will reappear under the right circumstances, which is why you see the reemergence under stress of a lot of irrational thinking.

For example, teleological explanations, the idea that everything is made for a purpose or a function, is a natural way to see the world. This is Deb Kelemen's work. You will find that people who considered themselves fairly rational and well educated will, nevertheless, default back to teleological explanations if you put them under a stressful timed kind of situation. So it’s a way of seeing the world that is never eradicated. I think that’s going to be a general principle, in the same way that a reflex, if you think about reflexes, that’s an unlearned behavioral response. You’re born with a whole set of reflexes. Many of them disappear, but they never entirely go away. They become typically reintegrated into more complex behaviors, but if someone goes into a coma, you can see the reflexes reemerging.

What we think is going on is that in the course of development, these very automatic behaviors become controlled by top-down processes from the cortex, all these higher order systems which are regulating and controlling and suppressing, trying to keep these things under wraps. But when the cortex is put out of action through a coma or head injury, then you can see many of these things reemerging again. I don’t see why there should be any point of departure from a motor system to a perceptual system, to a cognitive system, because they’re all basically patterns of neural firing in the brain, and so I don’t see why it can’t be the case that if concepts are derived through these processes, they could remain dormant and latent as well.

The hierarchy of representations in the brain

One of the things that has been fascinating me is the extent to which we can talk about the hierarchy of representations in the brain. Representations are literally re-presentations. That’s the language of the brain, that’s the mode of thinking in the brain: it’s representation. It’s more than likely, in fact it’s most likely, that there is already representation wired into the brain. If you think about the sensory systems, the array of the eye, for example, is already laid out in a topographical representation of the external world, to which it has not yet been exposed. What happens is that this general layout, these arrangements, become fine-tuned. We know of a lot of work to show that the arrangements of the sensory mechanisms do have a spatial arrangement, so that’s not learned in any sense. But these can become changed through experiences, and that’s why the early work of Hubel and Wiesel on the effects of abnormal environments showed that the general pattern could be distorted, but the pattern was already in place in the first place.

When you start to move beyond sensory into perceptual systems and then into cognitive systems, that’s when you get into theoretical arguments and the gloves come off. There are some people who argue that it has to be the case that there are certain primitives built into the conceptual systems. I’m talking about the work of, most notably, Elizabeth Spelke.  

There certainly seems to be a lot of perceptual ability in newborns in terms of constancies, noticing invariant aspects of the physical world. I don’t think I have a problem with any of that, but I suppose this is where the debates go. (…)

Shame in the East is something that is at least recognized as a major factor of identity

I’ve been to Japan a couple of times. I’m not an expert in the cultural variation of cognition, but clearly shame, or the avoidance of shame, is a major factor in motivation in eastern cultures. I think it reflects the sense of self-worth and value in eastern culture. It is very much a collective notion: they place a lot of emphasis on not letting the team down. I believe they even have a special word for that aspect or experience of shame that we don’t have. That doesn’t mean that it’s a concept that we can never entertain, but it does suggest that in the East this is something that is at least recognized as a major factor of identity.

Children don’t necessarily feel shame. I don’t think they’ve got a sense of self until well into their second year. They have the “I”, they have the notion of being, of having control. They will experience the will to move their arms, and I’m sure they make that connection very quickly, so they have this sense of self, in that “I” notion, but I don’t think they’ve got personal identity, and that’s one of the reasons that very few of us have much memory of our earliest years. Our episodic memories are very fragmented, sensory events. But from about two to three years on they start to get a sense of who they are. Knowing who you are means becoming integrated into your social environment, and part of becoming integrated into your social environment means acquiring a sense of shame. Below two or three years of age, I don’t think many children have a notion of shame. But from then on, as they become members of the social tribe, they have to be made aware of the consequences of being antisocial or of doing things that are not expected of them. I think that’s probably late in the acquisition.”

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, Essentialism, Edge, May 17, 2012. (Illustration source)

The Illusion of the Self

"For me, an illusion is a subjective experience that is not what it seems. Illusions are experiences in the mind, but they are not out there in nature. Rather, they are events generated by the brain. Most of us have an experience of a self. I certainly have one, and I do not doubt that others do as well – an autonomous individual with a coherent identity and sense of free will. But that experience is an illusion – it does not exist independently of the person having the experience, and it is certainly not what it seems. That’s not to say that the illusion is pointless. Experiencing a self illusion may have tangible functional benefits in the way we think and act, but that does not mean that it exists as an entity. (…)

For most of us, the sense of our self is as an integrated individual inhabiting a body. I think it is helpful to distinguish between the two ways of thinking about the self that William James talked about. There is conscious awareness of the present moment that he called the “I,” but there is also a self that reflects upon who we are in terms of our history, our current activities and our future plans. James called this aspect of the self, “me” which most of us would recognize as our personal identity—who we think we are. However, I think that both the “I” and the “me” are actually ever-changing narratives generated by our brain to provide a coherent framework to organize the output of all the factors that contribute to our thoughts and behaviors.

I think it helps to compare the experience of self to subjective contours – illusions such as the Kanizsa pattern where you see an invisible shape that is really defined entirely by the surrounding context. People understand that it is a trick of the mind but what they may not appreciate is that the brain is actually generating the neural activation as if the illusory shape was really there. In other words, the brain is hallucinating the experience. There are now many studies revealing that illusions generate brain activity as if they existed. They are not real but the brain treats them as if they were.

Now that line of reasoning could be applied to all perception except that not all perception is an illusion. There are real shapes out there in the world and other physical regularities that generate reliable states in the minds of others. The reason that the status of reality cannot be applied to the self is that it does not exist independently of the brain that is having the experience. It may appear to have a consistency of regularity and stability that makes it seem real, but those properties alone do not make it so.

Similar ideas about the self can be found in Buddhism and the writings of Hume and Spinoza. The difference is that there is now good psychological and physiological evidence to support these ideas that I cover in the book. (…)

There are not many cognitive scientists who would doubt that the experience of I is constructed from a multitude of unconscious mechanisms and processes. Me is similarly constructed, though we may be more aware of the events that have shaped it over our lifetime. But neither is cast in stone and both are open to all manner of reinterpretation. As artists, illusionists, movie makers, and more recently experimental psychologists have repeatedly shown, conscious experience is highly manipulatable and context dependent. Our memories are also largely abstracted reinterpretations of events – we all hold distorted memories of past experiences. (…)

In the book I also describe the developmental processes that shape our brains from infancy onwards to create our identities, as well as the systematic biases that distort the content of our identity to form a consistent narrative. I believe much of that distortion and bias is socially relevant in terms of how we would like to be seen by others. We all think we would act and behave in a certain way, but the reality is that we are often mistaken. (…)

Q: What role do you think childhood plays in shaping the self?

Just about everything we value in life has something to do with other people. Much of that influence occurs early in our development, which is one reason why human childhoods are so prolonged in comparison to other species. We invest so much effort and time into our children to pass on as much knowledge and experience as possible. It is worth noting that other species that have long periods of rearing also tend to be more social and intelligent in terms of flexible, adaptive behaviors. Babies are born social from the start but they develop their sense of self throughout childhood as they move to become independent adults that eventually reproduce. I would contend that the self continues to develop throughout a lifetime, especially as our roles change to accommodate others. (…)

The role of social networking in the way we portray our self

There are some interesting phenomena emerging. There is evidence of homophily – the grouping together of individuals who share a common perspective, which is not too surprising. More interesting is evidence of polarization. Rather than opening up and exposing us to different perspectives, social networking on the Internet can foster more radicalization as we seek out others who share our positions. The more others validate our opinions, the more extreme we become. I don’t think we need to be fearful, and I am less concerned than the prophets of doom who predict the downfall of human civilization, but I believe it is true that the way we create the narrative of the self is changing.

Q: If the self is an illusion, what is your position on free will?

Free will is certainly a major component of the self illusion, but it is not synonymous. Both are illusions, but the self illusion extends beyond the issues of choice and culpability to other realms of human experience. From what I understand, I think you and I share the same basic position about the logical impossibility of free will. I also think that compatibilism (that determinism and free will can co-exist) is incoherent. We certainly have more choices today to do things that are not in accord with our biology, and it may be true that we should talk about free will in a meaningful way, as Dennett has argued, but that seems irrelevant to the central problem of positing an entity that can make choices independently of the multitude of factors that control a decision. To me, the problem of free will is a logical impasse – we cannot choose the factors that ultimately influence what we do and think. That does not mean that we throw away the social, moral, and legal rulebooks, but we need to be vigilant about the way our attitudes about individuals will be challenged as we come to understand the factors (both material and psychological) that control our behaviors when it comes to attributing praise and blame. I believe this is somewhat akin to your position. (…)

The self illusion explains so many aspects of human behavior as well as our attitudes toward others. When we judge others, we consider them responsible for their actions. But was Mary Bale, the bank worker from Coventry who was caught on video dropping a cat into a garbage can, being true to her self? Was Mel Gibson, in his drunken anti-Semitic rant, being himself or under the influence of someone else? What motivated Congressman Weiner to text naked pictures of himself to women he did not know? In the book, I consider some of the extremes of human behavior from mass murderers with brain tumors that may have made them kill, to rising politicians who self-destruct. By rejecting the notion of a core self and considering how we are a multitude of competing urges and impulses, I think it is easier to understand why we suddenly go off the rails. It explains why we act, often unconsciously, in a way that is inconsistent with our self image – or the image of our self as we believe others see us.

That said, the self illusion is probably an inescapable experience we need for interacting with others and the world, and indeed we cannot readily abandon or ignore its influence, but we should be skeptical that each of us is the coherent, integrated entity we assume we are.

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, interviewed by Sam Harris, The Illusion of the Self, Sam Harris blog, May 22, 2012.

See also:

Existence: What is the self?, Lapidarium notes
Paul King on what is the best explanation for identity
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking

Apr
29th
Sun
permalink

The time machine in our mind. The imagistic mental machinery that allows us to travel through time


“Our ability to close our eyes and imagine the pleasures of Super Bowl Sunday or remember the excesses of New Year’s Eve is a fairly recent evolutionary development, and our talent for doing this is unparalleled in the animal kingdom. We are a race of time travelers, unfettered by chronology and capable of visiting the future or revisiting the past whenever we wish. If our neural time machines are damaged by illness, age or accident, we may become trapped in the present. (…)

Why did evolution design our brains to go wandering in time? Perhaps it’s because an experience is a terrible thing to waste. Moving around in the world exposes organisms to danger, so as a rule they should have as few experiences as possible and learn as much from each as they can. (…)

Time travel allows us to pay for an experience once and then have it again and again at no additional charge, learning new lessons with each repetition. When we are busy having experiences—herding children, signing checks, battling traffic—the dark network is silent, but as soon as those experiences are over, the network is awakened, and we begin moving across the landscape of our history to see what we can learn—for free.

Animals learn by trial and error, and the smarter they are, the fewer trials they need. Traveling backward buys us many trials for the price of one, but traveling forward allows us to dispense with trials entirely. Just as pilots practice flying in flight simulators, the rest of us practice living in life simulators, and our ability to simulate future courses of action and preview their consequences enables us to learn from mistakes without making them.

We don’t need to bake a liver cupcake to find out that it is a stunningly bad idea; simply imagining it is punishment enough. The same is true for insulting the boss and misplacing the children. We may not heed the warnings that prospection provides, but at least we aren’t surprised when we wake up with a hangover or when our waists and our inseams swap sizes. (…)

Perhaps the most startling fact about the dark network isn’t what it does but how often it does it. Neuroscientists refer to it as the brain’s default mode, which is to say that we spend more of our time away from the present than in it. People typically overestimate how often they are in the moment because they rarely take notice when they take leave. It is only when the environment demands our attention—a dog barks, a child cries, a telephone rings—that our mental time machines switch themselves off and deposit us with a bump in the here and now. We stay just long enough to take a message and then we slip off again to the land of Elsewhen, our dark networks awash in light.”

Daniel Gilbert, Professor of Psychology at Harvard University, Essay: The Brain: Time Travel in the Brain, TIME, Jan. 29, 2007. (Illustration for TIME by Jeffery Fischer).

Kurt Stocker: The time machine in our mind (2012)

                                          (Click image to open research paper in pdf)

Abstract:

"This article provides the first comprehensive conceptual account for the imagistic mental machinery that allows us to travel through time—for the time machine in our mind. It is argued that language reveals this imagistic machine and how we use it. Findings from a range of cognitive fields are theoretically unified and a recent proposal about spatialized mental time travel is elaborated on. The following novel distinctions are offered: external vs. internal viewing of time; “watching” time vs. projective “travel” through time; optional vs. obligatory mental time travel; mental time travel into anteriority or posteriority vs. mental time travel into the past or future; single mental time travel vs. nested dual mental time travel; mental time travel in episodic memory vs. mental time travel in semantic memory; and “seeing” vs. “sensing” mental imagery. Theoretical, empirical, and applied implications are discussed.”

"The theoretical strategy I adopt is to use language as an entree to a conceptual level that seems deeper than language itself (Pinker, 2007; Talmy, 2000). The logic of this strategy is in accordance with recent findings that many conceptualizations observed in language have also been found to exist in mental representations that are more basic than language itself. (…)

It is proposed that this strategy helps to uncover an imagistic mental machinery that allows us to travel through time—that this strategy helps us to uncover the time machine in our mind.

A central term used in this article is “the imagery structuring of time.” By this I refer to an invisible spatial scaffolding in our mental imagery across which temporal material can be splayed, the existence of which will be proposed in this article. At times it will be quite natural to assume that a space-to-time mapping in the sense of conceptual metaphor theory is involved in the structuring of this invisible scaffolding. (…)

It is thus for the present investigation more coherent to assume that mental time is basically constructed out of “spatialized” mental imagery—“spatialized” is another central term that I use in this article. I use it in the sense that it is neutral as to whether some of the imagery might be transferred via space-to-time mappings or whether some of the imagery might relate to space-to-time mappings only in an etymological sense. An example of temporal constructions that are readily characterized in terms of spatialized temporal imagery structuring are the conceptualizations underlying the use of before and after, conceptualizations that are often treated as having autonomous temporal status and as relating only etymologically to space.

The current investigation can refine this view somewhat, by postulating that spatialized temporal structures still play a very vital role in the imagery structuring underlying before and after. (…)

The theoretical strategy, to use linguistic expressions about time as an entree to conceptual structures about time that seem deeper than language itself, has been applied quite fruitfully, since it has allowed for the development of a rather comprehensive and precise conceptual account of the time machine in our mind. The theory is not an ad-hoc theory, since linguistic conceptualizations cannot be interpreted in a totally arbitrary way—for example language does not allow us to assume that a sentence such as I shopped at the store before I went home means that first the going home took place and then the shopping. In this respect the theory is to some degree already a data-guided theory, since linguistic expressions are data. However, the proposal of the theory that language has helped us to uncover a specific system of spatialized imagery structuring of time can only be evaluated by carrying out corresponding psychological (cognitive and neurocognitive) experiments and some ideas for such experiments have been presented. Since the time machine in our mind is a deeply fascinating apparatus, I am confident that theoretical and empirical investigations will continue to explore it.”

— Kurt Stocker, The time machine in our mind (pdf), Institute of Cognitive and Brain Sciences, University of California, Berkeley, CA, USA, 2012

See also:

☞ T. Suddendorf, D. Rose Addis and M C. Corballis, Mental time travel and the shaping of the human mind (pdf), The Royal Society, 2009.

Abstract: “Episodic memory, enabling conscious recollection of past episodes, can be distinguished from semantic memory, which stores enduring facts about the world. Episodic memory shares a core neural network with the simulation of future episodes, enabling mental time travel into both the past and the future. The notion that there might be something distinctly human about mental time travel has provoked ingenious attempts to demonstrate episodic memory or future simulation in nonhuman animals, but we argue that they have not yet established a capacity comparable to the human faculty. The evolution of the capacity to simulate possible future events, based on episodic memory, enhanced fitness by enabling action in preparation of different possible scenarios that increased present or future survival and reproduction chances. Human language may have evolved in the first instance for the sharing of past and planned future events, and, indeed, fictional ones, further enhancing fitness in social settings.”

☞ George Lakoff, Mark Johnson, Conceptual Metaphor in Everyday Language (pdf), The Journal of Philosophy, Vol 77, 1980.
Our sense of time is deeply entangled with memory
Time tag on Lapidarium notes

Apr
25th
Wed
permalink

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism



Waking Life is an American animated film (rotoscoped based on live action), directed by Richard Linklater and released in 2001. The entire film was shot using digital video and then a team of artists using computers drew stylized lines and colors over each frame.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
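
The arithmetic behind those figures is simple compounding. Below is a rough back-of-the-envelope sketch in Python, assuming only that the rate of progress doubles every decade (a simplification of Kurzweil’s model, in which the doubling time itself keeps shrinking, so his own figures come out larger). It reproduces the order of magnitude he cites: thousands of “today-rate” years of progress for the 21st century versus roughly ten for the 20th.

    # Sketch of the "accelerating returns" arithmetic (illustrative assumption:
    # the rate of progress doubles each decade; 1.0 = one year of progress per
    # calendar year at today's rate).

    def equivalent_years(decades):
        """Progress accumulated over `decades` ten-year blocks, doubling the rate each block."""
        total, rate = 0.0, 1.0
        for _ in range(decades):
            total += 10 * rate  # ten calendar years at the current rate
            rate *= 2           # the rate doubles each decade
        return total

    # 21st century, starting at today's rate and accelerating:
    print(equivalent_years(10))                      # ~10,230 equivalent years

    # 20th century in reverse: each earlier decade ran at half the rate.
    print(sum(10 * 0.5 ** k for k in range(1, 11)))  # ~10 equivalent years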

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to agricultural, when you get to scientific revolution and industrial revolution, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo in a way that could be imagined as an enormous instantaneous fulfillment of human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, the manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution.” Eamonn Healy speaks about telescopic evolution and the future of humanity from Brandon Sergent, Transcript

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Apr
15th
Sun
permalink

How liberal and conservative brains are wired differently. Liberals and conservatives don’t just vote differently, they think differently


"There’s now a large body of evidence showing that those who opt for the political left and those who opt for the political right tend to process information in divergent ways and to differ on any number of psychological traits.

Perhaps most important, liberals consistently score higher on a personality measure called “openness to experience,” one of the “Big Five” personality traits, which are easily assessed through standard questionnaires. That means liberals tend to be the kind of people who want to try new things, including new music, books, restaurants and vacation spots — and new ideas.

“Open people everywhere tend to have more liberal values,” said psychologist Robert McCrae, who conducted voluminous studies on personality while at the National Institute on Aging at the National Institutes of Health.

Conservatives, in contrast, tend to be less open — less exploratory, less in need of change — and more “conscientious,” a trait that indicates they appreciate order and structure in their lives. This gels nicely with the standard definition of conservatism as resistance to change — in the famous words of William F. Buckley Jr., a desire to stand “athwart history, yelling ‘Stop!’ ” (…)

We see the consequences of liberal openness and conservative conscientiousness everywhere — and especially in the political battle over facts. (…)

Compare this with a different irrationality: refusing to admit that humans are a product of evolution, a chief point of denial for the religious right. In a recent poll, just 43 percent of tea party adherents accepted the established science here. Yet unlike the vaccine issue, this denial is anything but new and trendy; it is well over 100 years old. The state of Tennessee is even hearkening back to the days of the Scopes “Monkey” Trial, more than 85 years ago. It just passed a bill that will weaken the teaching of evolution.

Such are some of the probable consequences of openness, or the lack thereof. (…)

Now consider another related trait implicated in our divide over reality: the “need for cognitive closure.” This describes discomfort with uncertainty and a desire to resolve it into a firm belief. Someone with a high need for closure tends to seize on a piece of information that dispels doubt or ambiguity, and then freeze, refusing to consider new information. Those who have this trait can also be expected to spend less time processing information than those who are driven by different motivations, such as achieving accuracy.

A number of studies show that conservatives tend to have a greater need for closure than do liberals, which is precisely what you would expect in light of the strong relationship between liberalism and openness. “The finding is very robust,” explained Arie Kruglanski, a University of Maryland psychologist who has pioneered research in this area and worked to develop a scale for measuring the need for closure.

The trait is assessed based on responses to survey statements such as “I dislike questions which could be answered in many different ways” and “In most social conflicts, I can easily see which side is right and which is wrong.” (…)

Anti-evolutionists have been found to score higher on the need for closure. And in the global-warming debate, tea party followers not only strongly deny the science but also tend to say that they “do not need any more information” about the issue.

I’m not saying that liberals have a monopoly on truth. Of course not. They aren’t always right; but when they’re wrong, they are wrong differently.

When you combine key psychological traits with divergent streams of information from the left and the right, you get a world where there is no truth that we all agree upon. We wield different facts, and hold them close, because we truly experience things differently. (…)”

Chris Mooney, science and political journalist, author of four books, including the New York Times bestselling The Republican War on Science and the forthcoming The Republican Brain: The Science of Why They Deny Science and Reality (April 2012), Liberals and conservatives don’t just vote differently. They think differently, The Washington Post, April 13, 2012. (Illustration: Koren Shadmi for The Washington Post)

See also:

Political science: why rejecting expertise has become a campaign strategy, Lapidarium notes
Cognitive and Social Consequences of the Need for Cognitive Closure, European Review of Social Psychology
☞ Antonio Chirumbolo, The relationship between need for cognitive closure and political orientation: the mediating role of authoritarianism, Department of Social and Developmental Psychology, University of Rome ‘La Sapienza’
Paul Nurse, Stamp out anti-science in US politics, New Scientist, 14 Sept 2011
☞ Chris Mooney, Why Republicans Deny Science: The Quest for a Scientific Explanation, The Huffington Post, Jan 11, 2012
☞ John Allen Paulos, Why Don’t Americans Elect Scientists?, NYTimes, Feb 13, 2012.
Study: Conservatives’ Trust in Science Has Fallen Dramatically Since Mid-1970s, American Sociological Association, March 29, 2012.
Why people believe in strange things, Lapidarium notes

Mar
21st
Wed
permalink

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe


Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED magazine said,

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness”.. Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences."  Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone.

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change, and was part of evolution’s epic quest for self awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness as well, through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.


I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of the evolution of information technology, the very character of our minds shifts. McLuhan's Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on "Tune in, Turn on, Drop Out" and was very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so…Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language and music etc. on our consciousness, and thereby offer an awareness of our ability to affect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes" is that it is often highly reductionistic - it often forgets that ideas have an ecology too; they must be "cultured." Here I would argue, drawing on Lawrence Lessig's work on the commons, that the “brain” is a necessary but insufficient “spawning” ground for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann's 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins' definition of a meme is quite powerful: 

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind.” [the replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.”  

This notion that “the vector of transmission is language” is very compelling. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change. Kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy that a mind feels upon having a transcendent realization that feels as if it advances evolution: 

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as "pattern integrities", Ray Kurzweil says we are "patterns of information". James Gleick's new book, The Information, says that “information may be more primary than matter”..  what do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely? 

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” – you can’t predict the outcome of even many trivial programs without running the program. So to say that “information may be more primary than matter” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.
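
A minimal sketch of that point, using Wolfram’s Rule 30 elementary cellular automaton as the “trivial program” (my own illustration, not code from Doyle or Wolfram): the update rule fits in a line, yet in general the only way to know what the pattern looks like at step n is to actually run it.

    # Rule 30 elementary cellular automaton: a one-line update rule whose
    # long-run behavior you can, in general, only discover by running it.

    RULE = 30  # Wolfram's rule number

    def step(cells):
        """One synchronous update with periodic (wrap-around) boundaries."""
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single "on" cell and watch the pattern unfold.
    cells = [0] * 31
    cells[15] = 1
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)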

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a fluorescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self transforming information” is just as accurate a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that ”What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions, […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as abstraction and take it out of its context – like a quotation or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, dwindling the resources it takes to store or process it. But “Information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I actually think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore the beginning of the “informatic” tradition of biology that Dawkins speaks in here, knew this very well and insisted on the importance of first person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together: Again, John 1:1 is closer to Dawkin’s position here than he may be comfortable with: “In the Beginning was the word, and the word was god, and the word was with god” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that they are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure…. what is the connection? what is the pattern that connects here? 

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it – can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we therefore don’t know that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2,400-year-old organism that is mostly invisible to a very short-lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of a plant planet as well as a “mycelium” archetype. One powerful aspect of treating the mycelium as humanity’s archetype is that it is “distributed” – it does not operate via a center of control but through cross connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger-scale reality of our dense ecological interconnection. In the book I point to the Upanishads’ “Tat Tvam Asi” as a way of comprehending how we can both be totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality… These notions echo some of Terence McKenna's ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality?

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological storyteller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language!

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insight, mentioned above, that writing changes how we think and, therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels. Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media – run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the Arctic ice sheet – remember that? – to a putatively civilian political structure Eisenhower rightly warned us about: the military-industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. I call these “rhetorical softwares” – in fact, in my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity, I had already called them “rhetorical softwares.”

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” – subjective experience – is all we ever experience, so if we change it, obviously we will see a change in what we label “external” reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A Möbius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a Möbius strip, the “inside” becomes “outside” and the “outside” becomes “inside.”
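For readers who want the geometry behind the metaphor, here is one standard parametrization of the Möbius strip (my addition, not Doyle’s):

$$\mathbf{r}(u,v) = \left(\Big(1 + \tfrac{v}{2}\cos\tfrac{u}{2}\Big)\cos u,\ \Big(1 + \tfrac{v}{2}\cos\tfrac{u}{2}\Big)\sin u,\ \tfrac{v}{2}\sin\tfrac{u}{2}\right), \qquad u \in [0, 2\pi),\ v \in [-1, 1].$$

Because $\mathbf{r}(u + 2\pi, v) = \mathbf{r}(u, -v)$, going once around the strip in $u$ brings you back to the same points with the transverse coordinate flipped: the surface has only one side, which is exactly why “inside” and “outside” function here as labels rather than facts.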

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps; root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” – our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are ipods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” – the mindset and context of their use – I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry?

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and that of molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who shared the Nobel Prize with James Watson and Maurice Wilkins for determining the structure of DNA, published in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel Prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in the “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype – suggest that, used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the linguistic feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” – recipes of code – while the mainstream view treated it implicitly as semantic, as “words with meaning.”
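The “recipe” character Doyle attributes to Mullis’s view is easy to make concrete. The sketch below is my own illustration (the function name and figures are hypothetical, not from Doyle or Mullis): each PCR cycle of denaturing, annealing and extension roughly doubles the copies of the target sequence, so amplification is exponential in the number of cycles.

```python
# Minimal sketch of PCR amplification as a "recipe of code" (illustrative only).
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Expected copy number after `cycles` rounds of denature/anneal/extend.

    `efficiency` is the fraction of templates successfully copied per cycle;
    1.0 means perfect doubling on every cycle.
    """
    return initial_copies * (1.0 + efficiency) ** cycles

print(pcr_copies(1, 30))        # one template, 30 ideal cycles -> ~1.07e9 copies
print(pcr_copies(10, 30, 0.9))  # ten templates, 90% per-cycle efficiency
```

The pragmatic point survives the simplification: what matters about the sequence is what the procedure lets you do with it, not what it “means.”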

iPods, Internet, etc.: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis and while I was working on the book, and it was really confirmation of a lot of what I was seeing indicated by my conceptual model of what is going on, which is as follows: sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” – tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware) and Mark Pesce were or are all psychonauts.

Q: Cyborg anthropologist Amber Case has written about techno-social wormholes… the instant compression of time and space created every time we make a telephone call… What do you make of this compression of time and space made possible by the engineering “magic” of technology?

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception”.) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn it off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall – and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that enable such a collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically induced lucid dream where we can live forever and our world is built out of our memories and “sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgen more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting that “Tat Tvam Asi” – there is an aspect of you that is connected to Everything – and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that “to understand is to perceive patterns”… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns… They help us “see more.” What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in Silicon Valley says that users…

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke's description of ego death. This is by definition an experience in which our maps of the world are humbled. In the breakdown of our ordinary worldview – such as when a (now formerly) secular being such as myself finds himself feeling unmistakably sacred – we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get a glimpse of reality, even if an involuntary one. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or koans – suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality – reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!” Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the Big Bang – he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of the livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it. "..

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.”…  

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and the most obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stop and start. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence would. Living systems, according to an emerging paradigm of Stanley Salthe, Rod Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy production, and the universe is seeking to dissipate ever greater amounts of entropy.

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever-increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “we are a way for the cosmos to know itself.” That is a pretty great map of continuity.
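Doyle says he would want to see Kelly’s sources, and the per-gram claim is easy to sanity-check. The figures below are my own rough, order-of-magnitude assumptions (not data from Kelly or Doyle), included only to show why the comparison is plausible: the Sun is enormous but sluggish per gram, while a computer chip pushes a large power through a tiny mass.

```python
# Back-of-envelope energy flow per unit mass (W/kg); all figures are rough,
# order-of-magnitude assumptions for illustration, not data from the interview.
systems_watts_per_kg = {
    "Sun (luminosity / mass)":         3.8e26 / 2.0e30,  # ~2e-4 W/kg
    "Green plant (photosynthesis)":    0.1,              # order of 0.1 W/kg
    "Human body (resting metabolism)": 100.0 / 70.0,     # ~1.4 W/kg
    "CPU chip (package power / mass)": 100.0 / 0.01,     # ~1e4 W/kg
}

for name, density in systems_watts_per_kg.items():
    print(f"{name:33s} ~ {density:9.2e} W/kg")
```

On this crude accounting the ordering Kelly describes – star, then plant, then animal, then chip – does hold, even though the absolute numbers would shift with better data.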

What I don’t understand in Kelly’s work, and need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names for suggesting otherwise – animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate along with the dependence of many of us – we who are more bacterial than human by dry weight - upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English and Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: "P2P doesn't mean Pirate to Pirate but Platonist to Platonist"
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Mar
18th
Sun
permalink

Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man


"We’re fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list. (…)

At the top of the list of things we do that we’re supposed to be doing, and that are at the core of what it is to be human rather than some other sort of animal, are language and music. Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we’re brilliantly capable at absorbing them, and from a young age. That’s how we know we’re meant to be doing them, i.e., how we know we evolved brains for engaging in language and music.

But what if this gets language and music all wrong? What if we’re not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species? (…)

I believe that language and music are, indeed, not part of our core—that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved—culturally evolved over millennia—for us. Our brains aren’t shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains.

But how on Earth can one argue for such a view? If language and music have shaped themselves to be good for non-linguistic and amusical brains, then what would their shapes have to be?

They’d have to possess the auditory structure of…nature. That is, we have auditory systems which have evolved to be brilliantly capable at processing the sounds from nature, and language and music would need to mimic those sorts of sounds in order to harness—to “nature-harness,” as I call it—our brain.

And language and music do nature-harness. (…) The two most important classes of auditory stimuli for humans are (i) events among objects (most commonly solid objects), and (ii) events among humans (i.e., human behavior). And, in my research I have shown that the signature sounds in these two auditory domains drive the sounds we humans use in (i) speech and (ii) music, respectively.

For example, the principal source of modulation of pitch in the natural world comes from the Doppler shift, where objects moving toward you have a high pitch and objects moving away have a low pitch; from these pitch modulations a listener can hear an object’s direction of movement relative to his or her position. In the book I provide a battery of converging evidence that melody in music has culturally evolved to sound like the (often exaggerations of) Doppler shifts of a person moving in one’s midst. Consider first that a mover’s pitch will modulate within a fixed range, the top and bottom pitches occurring when the mover is headed, respectively, toward and away from you. Do melodies confine themselves to fixed ranges? They tend to, and tessitura is the musical term to refer to this range. In the book I run through a variety of specific predictions.

Here’s one. If melody is “trying” to sound like the Doppler shifts of a mover—and thereby convey to the auditory system the trajectory of a fictional mover—then a faster mover will have a greater difference between its top and bottom pitch. Does faster music tend to have a wider tessitura? That is, does music with a faster tempo—more beats, or footsteps, per second—tend to have a wider tessitura? Notice that the performer of faster tempo music would ideally like the tessitura to narrow, not widen! But what we found is that, indeed, music having a greater tempo tends to have a wider tessitura, just what one would expect if the meaning of melody is the direction of a mover in your midst.
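Changizi’s prediction can be illustrated with the textbook Doppler formula. The sketch below is my own (the base pitch and speeds are arbitrary assumptions, not figures from his book), and real human movers produce far smaller shifts than melodies exaggerate; it only shows the qualitative point that a faster mover sweeps out a wider pitch range.

```python
# Doppler pitch range swept out by a sound source that approaches and then
# recedes from a stationary listener (illustrative numbers only).
V_SOUND = 343.0  # speed of sound in air, m/s

def doppler_pitch_range(f_source_hz: float, mover_speed_ms: float):
    """Return (lowest, highest) observed frequency for a mover passing by."""
    f_high = f_source_hz * V_SOUND / (V_SOUND - mover_speed_ms)  # approaching
    f_low = f_source_hz * V_SOUND / (V_SOUND + mover_speed_ms)   # receding
    return f_low, f_high

for speed in (1.5, 3.0, 6.0):  # roughly walking, jogging, sprinting, in m/s
    low, high = doppler_pitch_range(440.0, speed)
    print(f"{speed:3.1f} m/s: {low:6.1f} Hz to {high:6.1f} Hz (range {high - low:4.1f} Hz)")
```

The faster the mover, the wider the swept range – the analogue of faster tempo going with wider tessitura.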

The preliminary conclusion of the research is that human speech sounds like solid-object events, and music sounds like human behavior!

That’s just what we expect if we were never meant to do language and music. Language and music have the fingerprints of being unnatural (i.e., of not having their origins via natural selection)…and the giveaway is, ironically, that their shapes are natural (i.e., have the structure of natural auditory events).

We also find this for another core capability that we know we’re not “meant” to do: reading. Writing was invented much too recently for us to have specialized reading mechanisms in the brain (although there are new hints of early writing as old as 30,000 years), and yet reading has the hallmarks of instinct. As I have argued in my research and in my second book, The Vision Revolution, writing slides so well into our brain because it got shaped by cultural evolution to look “like nature,” and, specifically, to have the signature contour-combinations found in natural scenes (which consist mostly of opaque objects strewn about).

My research suggests that language and music aren’t any more part of our biological identity than reading is. Counterintuitively, then, we aren’t “supposed” to be speaking and listening to music. They aren’t part of our “core” after all.

Or, at least, they aren’t part of the core of Homo sapiens as the species originally appeared. But, it seems reasonable to insist that, whether or not language and music are part of our natural biological history, they are indeed at the core of what we take to be centrally human now. Being human today is quite a different thing than being the original Homo sapiens.

So, what is it to be human? Unlike Homo sapiens, we’re grown in a radically different petri dish. Our habitat is filled with cultural artifacts—the two heavyweights being language and music—designed to harness our brains’ ancient capabilities and transform them into new ones.

Humans are more than Homo sapiens. Humans are Homo sapiens who have been nature-harnessed into an altogether novel creature, one designed in part via natural selection, but also in part via cultural evolution.

Mark Changizi, an evolutionary neurobiologist, Are We “Meant” to Have Language and Music?, Discover Magazine, March 15th, 2012. (Illustration: Harnessed)

See also:

Mark Changizi, Music Sounds Like Moving People, Science 2.0, Jan 10, 2010.
☞ Mark Changizi, How To Put Art And Brain Together
☞ Mark Changizi, How we read
Mark Changizi on brain’s perception of the world
A brief history of writing, Lapidarium notes
Mark Changizi on Humans, Version 3.0.

Jan
21st
Sat
permalink

'Human beings are learning machines,' says philosopher (nature vs. nurture)


"The point is that in scientific writing (…) suggest a very inflexible view of human nature, that we are determined by our biology. From my perspective the most interesting thing about the human species is our plasticity, our flexibility. (…)

It is striking in general that human beings mistake the cultural for the natural; you see it in many domains. Take moral values. We assume we have moral instincts: we just know that certain things are right and certain things are wrong. When we encounter people whose values differ from ours we think they must be corrupted or in some sense morally deformed. But this is clearly an instance where we mistake our deeply inculcated preferences for natural law. (…)

Q: At what point with morality does biology stop and culture begin?

One important innate contribution to morality is emotions. An aggressive response to an attack is not learned, it is biological. The question is how emotions that are designed to protect each of us as individuals get extended into generalised rules that spread within a group. One factor may be imitation. Human beings are great imitative learners. Rules that spread in a family can be calibrated across a whole village, leading to conformity in the group and a genuine system of morality.

Nativists will say that morality can emerge without instruction. But with innate domains, there isn’t much need for instruction, whereas in the moral domain, instruction is extensive. Kids learn through incessant correction. Between the ages of 2 and 10, parents correct their children’s behaviour every 8 minutes or so of waking life. In due course, our little monsters become little angels, more or less. This gives us reason to think morality is learned.

Q: One of the strongest arguments for innateness comes from linguists such as Noam Chomsky, who argue that humans are born with the basic rules of grammar already in place. But you disagree with them?

Chomsky singularly deserves credit for giving rise to the new cognitive sciences of the mind. He was instrumental in helping us think about the mind as a kind of machine. He has made some very compelling arguments to explain why everybody with an intact brain speaks grammatically even though children are not explicitly taught the rules of grammar.

But over the past 10 years we have started to see powerful evidence that children might learn language statistically, by unconsciously tabulating patterns in the sentences they hear and using these to generalise to new cases. Children might learn language effortlessly not because they possess innate grammatical rules, but because statistical learning is something we all do incessantly and automatically. The brain is designed to pick up on patterns of all kinds.
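A toy illustration of the kind of statistical learning Prinz describes (my own sketch, not code from Prinz; the syllable stream is invented, and the transitional-probability mechanism is the one studied by Saffran and colleagues): the learner simply counts which syllable follows which, and word boundaries show up as the places where the transitional probability drops.

```python
# Toy transitional-probability learner over an unsegmented syllable stream.
from collections import Counter

# Syllable stream heard by the learner; word boundaries are not marked.
stream = ("pre tty ba by dog gie pre tty dog gie ba by "
          "pre tty ba by dog gie ba by").split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

# P(next | current) is 1.0 inside a "word" (pre->tty, ba->by, dog->gie) and
# noticeably lower across word boundaries -- a purely statistical cue to
# segmentation, with no innate grammar involved.
for (a, b), n in sorted(pair_counts.items()):
    print(f"P({b!r} | {a!r}) = {n / first_counts[a]:.2f}")
```

Run on this tiny stream, the within-word transitions all come out at 1.0 while the cross-boundary transitions fall to 0.67 or below, which is the pattern an infant tabulating statistics could exploit.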

Q: How hard has it been to put this alternative view on the table, given how Chomskyan thought has dominated the debate in recent years?

Chomsky’s views about language are so deeply ingrained among academics that those who take statistical learning seriously are subject to a kind of ridicule. There is very little tolerance for dissent. This has been somewhat limiting, but there is a new generation of linguists who are taking the alternative very seriously, and it will probably become a very dominant position in the next generation.

Q: You describe yourself as an “unabashed empiricist” who favours nurture over nature. How did you come to this position, given that on many issues the evidence is still not definitive either way?

Actually I think the debate has been settled. You only have to stroll down the street to see that human beings are learning machines. Sure, for any given capacity the debate over biology versus culture will take time to resolve. But if you compare us with other species, our degree of variation is just so extraordinary and so obvious that we know prior to doing any science that human beings are special in this regard, and that a tremendous amount of what we do is as a result of learning. So empiricism should be the default position. The rest is just working out the details of how all this learning takes place.

Q: What are the implications of an empirical understanding of human nature for the way we go about our lives. How should it affect the way we behave?

In general, we need to cultivate a respect for difference. We need to appreciate that people with different values to us are not simply evil or ignorant, and that just like us they are products of socialisation. This should lead to an increase in international understanding and respect. We also need to understand that group differences in performance are not necessarily biologically fixed. For example, when we see women performing less well than men in mathematics, we should not assume that this is because of a difference in biology.

Q: How much has cognitive science contributed to our understanding of what it is to be human, traditionally a philosophical question?

Cognitive science is in the business of settling long-running philosophical debates on human nature, innate knowledge and other issues. The fact that these theories have been churning about for a couple of millennia without any consensus is evidence that philosophical methods are better at posing questions than answering them. Philosophy tells us what is possible, and science tells us what is true.

Cognitive science has transformed philosophy. At the beginning of the 20th century, philosophers changed their methodology quite dramatically by adopting logic. There has been an equally important revolution in 21st-century philosophy in that philosophers are turning to the empirical sciences and to some extent conducting experimental work themselves to settle old questions. As a philosopher, I hardly go a week without conducting an experiment.

My whole working day has changed because of the infusion of science.”

Jesse Prinz is a distinguished professor of philosophy at the City University of New York, specialising in the philosophy of psychology. He is a pioneer in experimental philosophy, using findings from the cognitive sciences, anthropology and other fields to develop empiricist theories of how the mind works. He is the author of The Emotional Construction of Morals (Oxford University Press, 2007), Gut Reactions (OUP, 2004), Furnishing the Mind (MIT Press, 2002) and Beyond Human Nature: How culture and experience make us who we are. 'Human beings are learning machines,' says philosopher, NewScientist, Jan 20, 2012. (Illustration: Fritz Kahn, British Library)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate