Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Archive

Oct 28th, Mon

Douglas Hofstadter: The Man Who Would Teach Machines to Think

"All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.”

Douglas Hofstadter, 1979, cited in Vinod K. Wadhawan, Complexity Explained: 17. Epilogue, Nirmukta, 04 April 2010.

Image: M. C. Escher, Print Gallery. Hofstadter calls this Escher work a “pictorial parable for Gödel’s Incompleteness Theorem.” Why? Look to the center of the painting: is there any logical way to complete it?

"On [Douglas] Hofstadter's office wall is a somewhat tattered reproduction of one of his favorite mind-twisting M. C. Escher prints, The Print Gallery.” In it, a boy stands looking at a print depicting a town where a woman looks down from her window at the roof of a gallery in which - yes - the boy stands looking at the print. We appreciate the paradox without being thrown by it, because we are outside looking in. Something like that creates our own unfathomable feelings of self. The self, subject and object, perceiver and perceived, is everywhere in the paradox.

It is a “circling back,” the Tortoise tells Achilles, “of a complex representation of the system together with its representations of all the rest of the world.”

“It is just so hard, emotionally,” Achilles tells the Tortoise, “to acknowledge that a ‘soul’ emerges from so physical a system.” (…)

But philosophers like [Daniel] Dennett believe, with Hofstadter, that scientific answers can be found without cheating by reducing the question to a manageable scale. (…) [T]he danger of looking only at the lowest biological level is in losing sight of the essential humanity that, in Hofstadter’s view, exists in the pattern and in the paradox. “There seems to be no alternative to accepting some sort of incomprehensible quality to existence,” as Hofstadter puts it. “Take your pick.”

James Gleick, on Douglas R. Hofstadter in Exploring the Labyrinth of the Mind, The New York Times, August 21, 1983.

"In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” (…)

“Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes” and a young man’s style as “hipsterish” and on and on ceaselessly throughout your day. That’s what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build “computer models of the fundamental mechanisms of thought.”

“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.

“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.” (…)

[Hofstadter] spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e‑mails there. (Hofstadter spends four hours a day writing e‑mail. “To me,” he has said, “an e‑mail is identical to a letter, every bit as formal, as refined, as carefully written … I rewrite, rewrite, rewrite, rewrite all of my e‑mails, always.”) He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It’s like a museum for his binges, a scene out of a brainy episode of Hoarders.

“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.” (…)

He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study.

For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.” Correct speech isn’t very interesting; it’s like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is “a tip of the rabbit’s ear … a hint of a trap door.”

As the wind tunnel was to the Wright brothers, so the computer is to FARG. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter’s view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. “I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds,” Hofstadter has written, “is by modeling mental processes on computers and learning from the models’ inevitable failures.” (…)

But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.

The modern era of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.

It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”

Machine learning

The “expert systems” that had once been the field’s meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there’s a rule broken.

If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.

The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain’s complexity. So what you do instead is start with a machine so simple, it almost doesn’t work: a machine, say, that randomly spits out French words for the English words it’s given.

Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that shot comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French?

It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you’ve entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn’t quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you’ll get a feel for the knobs, and you’ll be able to produce the correct French equivalent of your English sentence.

By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that. (…)
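
A minimal sketch of that knob-tuning loop in Python, assuming nothing about Candide's internals: it runs the expectation-maximization training of IBM's Model 1, the family of statistical models Candide belonged to, on three invented sentence pairs. The vocabulary and pair counts are toy-sized stand-ins for the millions of real pairs.

```python
# Toy "box of knobs": t[f][e] is how strongly English word e prefers
# French word f. Training is the EM update of IBM Model 1; the corpus
# is invented for illustration.
from collections import defaultdict

pairs = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the car", "la voiture"),
]
pairs = [(e.split(), f.split()) for e, f in pairs]
e_vocab = {w for e, _ in pairs for w in e}
f_vocab = {w for _, f in pairs for w in f}

# A machine so simple it almost doesn't work: every knob starts uniform.
t = {f: {e: 1.0 / len(f_vocab) for e in e_vocab} for f in f_vocab}

for _ in range(20):                          # feed, try, jiggle; repeat
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for e_sent, f_sent in pairs:
        for f in f_sent:
            norm = sum(t[f][e] for e in e_sent)
            for e in e_sent:
                c = t[f][e] / norm           # expected alignment count
                count[f][e] += c
                total[e] += c
    for f in f_vocab:                        # re-tune every knob
        for e in e_vocab:
            t[f][e] = count[f][e] / total[e]

def translate_word(e: str) -> str:
    """Crudest possible decoder: pick the French word with the biggest knob."""
    return max(f_vocab, key=lambda f: t[f][e])

print([translate_word(w) for w in ["the", "blue", "car"]])
# -> ['la', 'bleue', 'voiture'] (word order and agreement are not modeled)
```

Even this toy converges so that blue prefers bleue, and it never needs to be told why; exactly as the passage says, the knobs get tuned without anyone knowing what each one means.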

Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a “knowledge graph” that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work. (…)

Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called “Why Is J. D. Salinger’s The Catcher in the Rye a Great Novel?” He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn’t like Holden—they thought he was a whiner—Hofstadter explained that “they may not recognize his vulnerability.” You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. “I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.” (…)

“Ars longa, vita brevis,” Hofstadter likes to say. “I just figure that life is short. I work, I don’t try to publicize. I don’t try to fight.”

There’s an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. “Not a soul,” Hofstadter says. “Einstein was completely alone in his belief in the existence of light as particles—for 18 years.

“That must have been very lonely.”

— James Somers, The Man Who Would Teach Machines to Think, The Atlantic, Oct 23, 2013

Douglas Hofstadter is an American professor of cognitive science whose research focuses on the sense of “I”, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979, which won both the Pulitzer Prize for general non-fiction and the National Book Award for Science.

See also:

The Mathematical Art of M.C. Escher, Lapidarium notes

Aug 22nd, Wed

The Nature of Consciousness: How the Internet Could Learn to Feel


“The average human brain has a hundred billion neurons and synapses on the order of a hundred trillion or so. But it’s not just sheer numbers. It’s the incredibly complex and specific ways in which these things are wired up. That’s what makes it different from a gigantic sand dune, which might have a billion particles of sand, or from a galaxy. Our Milky Way, for example, contains a hundred billion suns, but the way these suns interact is very simple compared to the way neurons interact with each other. (…)

It doesn’t matter so much that you’re made out of neurons and bones and muscles. Obviously, if we lose neurons in a stroke or in a degenerative disease like Alzheimer’s, we lose consciousness. But in principle, what matters for consciousness is the fact that you have these incredibly complicated little machines, these little switching devices called nerve cells and synapses, and they’re wired together in amazingly complicated ways.

The Internet now already has a couple of billion nodes. Each node is a computer. Each one of these computers contains a couple of billion transistors, so it is in principle possible that the complexity of the Internet is such that it feels like something to be conscious. I mean, that’s what it would be if the Internet as a whole has consciousness. Depending on the exact state of the transistors in the Internet, it might feel sad one day and happy another day, or whatever the equivalent is in Internet space. (…)
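
Taking the speaker's round numbers at face value, the raw counts multiply out as below (a back-of-envelope sketch only; the passage's own point is that wiring complexity, not component count, is what separates a brain from a sand dune).

```python
# Koch's round numbers, multiplied out. Raw counts only.
brain_neurons  = 1e11   # "a hundred billion neurons"
brain_synapses = 1e14   # "on the order of a hundred trillion"
net_nodes      = 2e9    # "a couple of billion nodes"
per_node       = 2e9    # "a couple of billion transistors" per computer

net_transistors = net_nodes * per_node
print(f"Internet transistors: {net_transistors:.0e}")                     # 4e+18
print(f"vs. brain synapses:   {net_transistors / brain_synapses:,.0f}x")  # 40,000x
print(f"nodes vs. neurons:    {net_nodes / brain_neurons:.2f}x")          # 0.02x
```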

What I’m serious about is that the Internet, in principle, could have conscious states. Now, do these conscious states express happiness? Do they express pain? Pleasure? Anger? Red? Blue? That really depends on the exact kind of relationship between the transistors, the nodes, the computers. It’s more difficult to ascertain what exactly it feels. But there’s no question that in principle it could feel something. (…)

Q: Would humans recognize that certain parts of the Internet are conscious? Or is that beyond our understanding?

That’s an excellent question. If we had a theory of consciousness, we could analyze it and say yes, this entity, this simulacrum, is conscious. Or we might recognize it because it displays independent behavior: at some point, suddenly it develops some autonomous behavior that nobody programmed into it, right? Then, people would go, “Whoa! What just happened here?” It just sort of self-organized in some really weird way. It wasn’t a bug. It wasn’t a virus. It wasn’t a botnet that was paid for by some nefarious organization. It did it by itself. If this autonomous behavior happens on a regular basis, then I think many people would say, yeah, I guess it’s alive in some sense, and it may have conscious sensation. (…)

Q: How do you define consciousness?

Typically, it means having subjective states. You see something. You hear something. You’re aware of yourself. You’re angry. You’re sad. Those are all different conscious states. Now, that’s not a very precise definition. But if you think historically, almost every scientific field has a working definition and the definitions are subject to change. For example, my Caltech colleague Michael Brown has redefined planets. So Pluto is not a planet anymore, right? Because astronomers got together and decided that. And what’s a gene? A gene is very tricky to define. Over the last 50 years, people have had all sorts of changing definitions. Consciousness is not easy to define, but don’t worry too much about the definition. Otherwise, you get trapped in endless discussions about what exactly you mean. It’s much more important to have a working definition, run with it, do experiments, and then modify it as necessary. (…)

I see a universe that’s conducive to the formation of stable molecules and to life. And I do believe complexity is associated with consciousness. Therefore, we seem to live in a universe that’s particularly conducive to the emergence of consciousness. That’s why I call myself a “romantic reductionist.”

Christof Koch, American neuroscientist working on the neural basis of consciousness, Professor of Cognitive and Behavioral Biology at California Institute of Technology, The Nature of Consciousness: How the Internet Could Learn to Feel, The Atlantic, Aug 22, 2012. (Illustration: folkert: Noosphere)

See also:

Google and the Myceliation of Consciousness
Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe
Consciousness tag on Lapidarium

Jun 3rd, Sun

Self as Symbol. The loopy nature of consciousness trips up scientists studying themselves

Image: M. C. Escher’s “Drawing Hands”

"The consciousness problem remains popular on lists of problems that might never be solved.

Perhaps that’s because the consciousness problem is inherently similar to another famous problem that actually has been proved unsolvable: finding a self-consistent set of axioms for deducing all of mathematics. As the Austrian logician Kurt Gödel proved eight decades ago, no such axiomatic system is possible; any system as complicated as arithmetic contains true statements that cannot be proved within the system.

Gödel’s proof emerged from deep insights into the self-referential nature of mathematical statements. He showed how a system referring to itself creates paradoxes that cannot be logically resolved — and so certain questions cannot in principle be answered. Consciousness, in a way, is in the same logical boat. At its core, consciousness is self-referential awareness, the self’s sense of its own existence. It is consciousness itself that is trying to explain consciousness.

Self-reference, feedback loops, paradoxes and Gödel’s proof all play central roles in the view of consciousness articulated by Douglas Hofstadter in his 2007 book I Am a Strange Loop. Hofstadter is (among other things) a computer scientist, and he views consciousness through lenses unfamiliar to most neuroscientists. In his eyes, it’s not so bizarre to compare math and numbers to the mind and consciousness. Math is, after all, deeply concerned with logic and reason — the stuff of thought. Mathematical paradoxes, Hofstadter points out, open up “profound questions concerning the nature of reasoning — and thus concerning the elusive nature of thinking — and thus concerning the mysterious nature of the human mind itself.”

Enter the loop

In particular, Hofstadter seizes on Gödel’s insight that a mathematical formula — a statement about a number — can itself be represented by a number. So you can take the number describing a formula and insert that number into the formula, which then becomes a statement about itself. Such a self-referential capability introduces a certain “loopiness” into mathematics, Hofstadter notes, something like the famous Escher print of a right hand drawing a left hand, which in turn is drawing the right hand. This “strange loopiness” in math suggested to Hofstadter that something similar is going on in human thought.
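
A loose sketch of that move, for flavor only: a formula is text, text encodes as a single number, and that number can be inserted back into the formula. Gödel's actual construction uses an arithmetic encoding plus the diagonal lemma so that the resulting formula refers to itself exactly, which the naive substitution below does not achieve.

```python
# Text is data, data is a number, and the number can be fed back into
# the text. (Flavor of Godel numbering only; see the caveat above.)
def godel_number(formula: str) -> int:
    """Encode a formula as one integer: its UTF-8 bytes read as base 256."""
    return int.from_bytes(formula.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Recover the formula from its number."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

template = "the formula numbered {N} is not provable"
n = godel_number(template)              # the formula, as a number
statement = template.format(N=n)        # the number, inserted into a formula

assert decode(n) == template            # round trip: number <-> formula
print(statement[:50] + "...")
```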

So when he titled his book “I Am a Strange Loop,” Hofstadter didn’t mean that he was personally loopy, but that the concept of an individual — a persistent identity, an “I,” that accompanies what people refer to as consciousness — is a loop of a certain sort. It’s a feedback loop, like the circuit that turns a whisper into an ear-piercing screech when the microphone whispered into is too close to the loudspeaker emitting the sound.
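
The screech is just gain applied around a loop. A few made-up numbers make the analogy concrete: anything above a gain of 1 per pass explodes toward the amplifier's ceiling, anything below it dies away.

```python
# Whisper-to-screech: each trip from loudspeaker back into microphone
# multiplies the signal level by a gain. (Numbers invented.)
def feedback(level: float, gain: float, passes: int, ceiling: float = 1.0):
    history = []
    for _ in range(passes):
        level = min(level * gain, ceiling)   # the amplifier saturates
        history.append(round(level, 4))
    return history

print(feedback(0.001, 2.0, 12))  # mic too close: whisper -> screech
print(feedback(0.001, 0.5, 6))   # mic far away:  whisper -> silence
```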

But consciousness is more than just an ordinary feedback loop. It’s a strange loop, which Hofstadter describes as a loop capable of perceiving patterns in its environment and assigning common symbolic meanings to sufficiently similar patterns. An acoustic feedback loop generates no symbols, just noise. A human brain, though, can assign symbols to patterns. While patterns of dots on a TV screen are just dots to a mosquito, to a person, the same dots evoke symbols, such as football players, talk show hosts or NCIS agents. Floods of raw sensory data trigger perceptions that fall into categories designated by “symbols that stand for abstract regularities in the world,” Hofstadter asserts. Human brains create vast repertoires of these symbols, conferring the “power to represent phenomena of unlimited complexity and thus to twist back and to engulf themselves via a strange loop.”

Consciousness itself occurs when a system with such ability creates a higher-level symbol, a symbol for the ability to create symbols. That symbol is the self. The I. Consciousness. “You and I are mirages that perceive themselves,” Hofstadter writes.

This self-generated symbol of the self operates only on the level of symbols. It has no access to the workings of nerve cells and neurotransmitters, the microscopic electrochemical machinery of neurobiological life. The symbols that consciousness contemplates don’t look much like the real thing, the way a map of Texas conveys nothing of the grass and dirt and asphalt and bricks that cover the physical territory.

And just like a map of Texas remains remarkably stable over many decades — it doesn’t change with each new pothole in a Dallas street — human self-identity remains stable over a lifetime, despite constant changes on the micro level of proteins and cells. As an individual grows, matures, changes in many minute ways, the conscious self’s identity remains intact, just as Texas remains Texas even as new skyscrapers rise in the cities, farms grow different crops and the Red River sometimes shifts the boundary with Oklahoma a bit.

If consciousness were merely a map, a convenient shortcut symbol for a complex mess of neurobiological signaling, perhaps it wouldn’t be so hard to figure out. But its mysteries multiply because the symbol is generated by the thing doing the symbolizing. It’s like Gödel’s numbers that refer to formulas that represent truths about numbers; this self-referentialism creates unanswerable questions, unsolvable problems.

A typical example of such a Gödelian paradox is the following sentence: This sentence cannot be true.

Is that sentence true? Obviously not, because it says it isn’t true. But wait — then it is true. Except that it can’t be. Self-referential sentences seem to have it both ways — or neither way.
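
The "neither way" can be checked mechanically, in the same Python used in the sketches above: the sentence asserts its own untruth, so its truth value v would have to satisfy v == (not v), and no assignment does.

```python
# "This sentence cannot be true": the value v must equal what the
# sentence asserts, namely (not v). Neither assignment is consistent.
for v in (True, False):
    print(f"assume the sentence is {v}: consistent? {v == (not v)}")
```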

And so perceptual systems able to symbolize themselves — self-referential minds — can’t be explained just by understanding the parts that compose them. Simply describing how electric charges travel along nerve cells, how small molecules jump from one cell to another, how such signaling sends messages from one part of the brain to another — none of that explains consciousness any more than knowing the English alphabet letter by letter (and even the rules of grammar) will tell you the meaning of Shakespeare’s poetry.

Hofstadter does not contend, of course, that all the biochemistry and cellular communication is irrelevant. It provides the machinery for perceiving and symbolizing that makes the strange loop of consciousness possible. It’s just that consciousness does not itself deal with molecules and cells; it copes with thoughts and emotions, hopes and fears, ideas and desires. Just as numbers can represent the complexities of all of mathematics (including numbers), a brain can represent the complexities of experience (including the brain itself). Gödel’s proof showed that math is “incomplete”; it contains truths that can’t be proven. And consciousness is a truth of a sort that can’t be comprehended within a system of molecules and cells alone.

That doesn’t mean that consciousness can never be understood — Gödel’s work did not undermine human understanding of mathematics, it enriched it. And so the realization that consciousness is self-referential could also usher in a deeper understanding of what the word means — what it symbolizes.

Information handler

Viewed as a symbol, consciousness is very much like many of the other grand ideas of science. An atom is not so much a thing as an idea, a symbol for matter’s ultimate constituents, and the modern physical understanding of atoms bears virtually no resemblance to the original conception in the minds of the ancient Greeks who named them. Even Francis Crick’s gene made from DNA turned out to be much more elusive than the “unit of heredity” imagined by Gregor Mendel in the 19th century. The later coinage of the word gene to describe such units long remained a symbol; early 20th century experiments allowed geneticists to deduce a lot about genes, but nobody really had a clue what a gene was.

“In a sense people were just as vague about what genes were in the 1920s as they are now about consciousness,” Crick said in 1998. “It was exactly the same. The more professional people in the field, which was biochemistry at that time, thought that it was a problem that was too early to tackle.”

It turned out that with genes, their physical implementation didn’t really matter as much as the information storage and processing that genes engaged in. DNA is in essence a map, containing codes allowing one set of molecules to be transcribed into others necessary for life. It’s a lot easier to make a million copies of a map of Texas than to make a million Texases; DNA’s genetic mapping power is the secret that made the proliferation of life on Earth possible. Similarly, consciousness is deeply involved in representing information (with symbols) and putting that information together to make sense of the world. It’s the brain’s information processing powers that allow the mind to symbolize itself.

Koch believes that focusing on information could sharpen science’s understanding of consciousness. A brain’s ability to find patterns in influxes of sensory data, to send signals back and forth to integrate all that data into a coherent picture of reality and to trigger appropriate responses all seem to be processes that could be quantified and perhaps even explained with the math that describes how information works.

“Ultimately I think the key thing that matters is information,” Koch says. “You have these causal interactions and they can be quantified using information theory. Somehow out of that consciousness has to arrive.” An inevitable consequence of this point of view is that consciousness doesn’t care what kind of information processors are doing all its jobs — whether nerve cells or transistors.

“It’s not the stuff out of which your brain is made,” Koch says. “It’s what that stuff represents that’s conscious, and that tells us that lots of other systems could be conscious too.”

Perhaps, in the end, it will be the ability to create unmistakable features of consciousness in some stuff other than a biological brain that will signal success in the quest for an explanation. But it’s doubtful that experimentally exposing consciousness as not exclusively human will displace humankind’s belief in its own primacy. People will probably always believe that it can only be the strange loop of human consciousness that makes the world go ’round.

“We … draw conceptual boundaries around entities that we easily perceive, and in so doing we carve out what seems to us to be reality,” Hofstadter wrote. “The ‘I’ we create for each of us is a quintessential example of such a perceived or invented reality, and it does such a good job of explaining our behavior that it becomes the hub around which the rest of the world seems to rotate.”

Tom Siegfried, American journalist, author, Self as Symbol, Science News, Feb 11, 2012.

See also:

☞ Laura Sanders, Ph.D. in Molecular Biology from the University of Southern California in Los Angeles, Emblems of Awareness, Science News, Feb 11, 2012.

Degrees of thought (Credit: Stanford University)

"Awareness typically tracks with wakefulness — especially in normal states of consciousness (bold). People in coma or under general anesthesia score low on both measures, appearing asleep with no signs of awareness. Sometimes, wakefulness and awareness become uncoupled, such as among people in a persistent vegetative state. In this case, a person seems awake and is sometimes able to move but is unaware of the surroundings."  (…)

“Messages constantly zing around the brain in complex patterns, as if trillions of tiny balls were simultaneously dropped into a pinball machine, each with a prescribed, mission-critical path. This constant flow of information might be what creates consciousness — and interruptions might destroy it. (…)

“If you knock on a wooden table or a bucket full of nothing, you get different noises,” Massimini says. “If you knock on the brain that is healthy and conscious, you get a very complex noise.” (…)

In the same way that “life” evades a single, clear definition (growth, reproduction or a healthy metabolism could all apply), consciousness might turn out to be a collection of remarkable phenomena, Seth says. “If we can explain different aspects of consciousness, then my hope is that it will start to seem slightly less mysterious that there is consciousness at all in the universe.” (…)

Recipe for consciousness

Somehow a sense of self emerges from the many interactions of nerve cells and neurotransmitters in the brain — but a single source behind the phenomenon remains elusive.

Illustration: Nicolle Rager Fuller

1. Parietal cortex: Brain activity in the parietal cortex is diminished by anesthetics, when people fall into a deep sleep, and in people in a vegetative state or coma. There is some evidence suggesting that the parietal cortex is where first-person perspective is generated.

2. Frontal cortex: Some researchers argue that parts of the frontal cortex (along with connections to the parietal cortex) are required for consciousness. But other scientists point to a few studies in which people with damaged frontal areas retain consciousness.

3. Claustrum: An enigmatic, thin sheet of neural tissue called the claustrum has connections with many other regions. Though the structure has been largely ignored by modern scientists, Francis Crick became keenly interested in the claustrum’s role in consciousness just before his death in 2004.

4. Thalamus: As one of the brain’s busiest hubs of activity, the thalamus is believed by many to have an important role in consciousness. Damage to even a small spot in the thalamus can lead to consciousness disorders.

5. Reticular activating system: Damage to a particular group of nerve cell clusters, called the reticular activating system and found in the brain stem, can render a person comatose.”

☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity
Theories of consciousness. Make Up Your Own Mind (visualization)
Malcolm MacIver on why did consciousness evolve, and how can we modify it?
Consciousness tag on Lapidarium

May 17th, Thu

The Self Illusion: How the Brain Creates Identity


'The Self'

"For the majority of us the self is a very compulsive experience. I happen to think it’s an illusion and certainly the neuroscience seems to support that contention. Simply from the logical positions that it’s very difficult to, without avoiding some degree of infinite regress, to say a starting point, the trail of thought, just the fractionation of the mind, when we see this happening in neurological conditions. The famous split-brain studies showing that actually we’re not integrated entities inside our head, rather we’re the output of a multitude of unconscious processes.

I happen to think the self is a narrative, and I use the self and the division that was drawn by William James, which is the “I” (the experience of the conscious self) and the “me” (which is personal identity: how you would describe yourself in terms of where you are from and everything that makes you up, your predilections and your wishes for the future). Both the “I”, who is sentient of the “me”, and the “me”, which is a story of who you are, I think are stories. They’re constructs and narratives. I mean that in the sense that a story is a reduction, or at least a coherent framework that has some kind of causal coherence.

When I go out and give public lectures I like to illustrate the weaknesses of the “I” by using visual illusions as the most common examples. But there are other kinds of illusions that you can introduce which reveal to people how their conscious experience is really just a fraction of what’s actually going on; it certainly is not a true reflection of all the mechanisms that are generating it. Visual illusions make this very obvious. The thing about visual illusion effects is that even when they’re explained to you, you can’t help but see them, so that’s interesting. You can’t divorce yourself from the mechanisms that are creating the illusion and the mind that’s experiencing the illusion.

The sense of personal identity, this is where we’ve been doing experimental work showing the importance that we place upon episodic memories, autobiographical memories. In our duplication studies for example, children are quite willing to accept that you could copy a hamster with all its physical properties that you can’t necessarily see, but what you can’t copy very easily are the episodic memories that one hamster has had.

This actually resonates with the ideas of John Locke, the philosopher, who also argued that personal identity was really dependent on autobiographical or episodic memories: you are the sum of your memories, which, of course, is something that fractionates and fragments in various forms of dementia. As the person loses the capacity to retrieve memories, or these memories become distorted, then the identity of the person, the personality, can be changed, amongst other things. But certainly the memories are very important.

As we all know, memory is notoriously fallible. It’s not cast in stone. It’s not something that is stable. It’s constantly reshaping itself. So the fact is that we have a multitude of unconscious processes which are generating this coherence of consciousness, which is the “I” experience, and our memories are very selective and ultimately corruptible: we tend to remember things which fit with our general characterization of what our self is, and we tend to ignore all the information that is inconsistent. We have all these attribution biases. We have cognitive dissonance. Psychology keeps telling us that we have all these unconscious mechanisms that reframe information to fit with a coherent story, so both the “I” and the “me”, to all intents and purposes, are generated narratives.

The illusion I talk about most often is this sense that there is an integrated individual, with a veridical notion of the past. And there’s nothing at the center. We’re the product, I would argue, of the emergent property of the multitude of these processes that generate us.

I use the word illusion as opposed to delusion. Delusion implies mental illness, to some extent, whereas with illusion we’re quite happy to accept that we’re experiencing illusions, and for me the word illusion really does mean an experience that is not what it seems. I’m not denying that there is an experience. We all have this experience, and what’s more, you can’t escape it easily. I think it’s more acceptable to call it an illusion, whereas there’s a derogatory nature to calling something a delusion. I suspect there’s probably a technical difference which has to do with mental illness, but no, I think we all, perfectly normally, experience this illusion.

Oliver Sacks has famously written about various case studies of patients which seem so bizarre: people who have various forms of perceptual anomalies, who mistake their wife for a hat, or patients who can’t help but copy everything they see. Because the self is so core to our normal behavior, I think that in many instances, if clinicians were familiar with the understanding that the self is this constructive process, that would make a lot of sense.

Neuroethics

In fact, it’s not only in clinical practice, I think, but in a lot of things. I think neuroethics is a very interesting field. I’ve got another colleague, David Eagleman, who’s very interested in these ideas: culpability, responsibility. We premise our legal systems on the notion that there is an individual who is to be held accountable. Now, I’m not suggesting that we abandon that, and I’m not sure what you would put in its place, but I think we can all recognize that there are certain situations where we find it very difficult to attribute blame to someone. For example, famously, Charles Whitman, the Texan sniper: when they did the autopsy, they discovered a very sizeable tumor in a region of the brain which could very much have influenced his ability to control his rage. I’m not suggesting every mass murderer has an inoperable tumor in their brain, but with our increasing knowledge of how the brain operates, and our ability to understand it, it’s conceivable there will be more situations where lawyers will be looking to put the blame on some biological abnormality.

Where is the line to be drawn? I think that’s a very tough one to deal with. It’s a problem that’s not going to go away. It’s something that we’re going to continually face as we start to learn more about the genetics of aggression.

There’s a lot of interest in this thing called the warrior gene. To what extent is this a gene which predisposes you to violence? Or do you need the interaction between the gene and the abusive childhood in order to get this kind of profile? So it’s not just clinicians, it’s actually just about every realm of human activity where you posit the existence of a self and individuals, and responsibility. Then it will reframe the way you think about things. Just the way that we heap blame and praise, the flip side of blaming people is that we praise individuals. But it could be, in a sense, a multitude of factors that have led them to be successful. I think that it’s a pervasive notion. Whether or not we actually change the way we do anything, I’m not so sure, because I think it would be really hard to live our lives dealing with non-individuals, trying to deal with multitude and the history that everyone brings to the table. There’s a good reason why we have this experience of the self. It’s a very sort of succinct and economical way of interacting with each other. We deal with individuals. We fall in love with individuals, not multitudes of past experiences and aspects of hidden agendas, we just pick them out. (…)

The objects are part of the extended sense of self

I keep tying this back to my issues about why certain objects are overvalued, and I happen to believe, like James again, that objects are part of the extended sense of self. We surround ourselves with objects. We place a lot of value on objects that we think are representative of our self.  (…)

We’re the only species on this planet that invests a lot of time and value in our objects, and this has been something that has been with us for a very, very long time.

Think of some of the early artifacts. The difficulty of making these artifacts, the time invested in these things, means that this goes back to a very early point in our civilization, or before civilization; I think the earliest pieces are probably about 90,000 years old. There are certainly older things that are tools, but pieces of artwork are about 90,000 years old. So it’s been with us a long time. And yes, some of them are obviously sacred objects, made for religious purposes and so forth. But outside of that, there’s still this sense of having materials or things that we value, and that intrigues me in so many ways. And I don’t think it’s necessarily universal. It’s been around a lot, but the endowment effect, for example, is not found everywhere. There’s some intriguing work coming out of Africa.

The endowment effect is this rather intriguing idea that we will spontaneously overvalue an object as soon as we believe it’s in our possession, we don’t actually have to have it physically, just bidding on something, as soon as you make your connection to an object, then you value it more, you’ll actually remember more about it, you’ll remember objects which you think are in your possession in comparison to someone else. It gets a whole sense of attribution and value associated with it, which is one of the reasons why people never get the asking price for the things that they’re trying to sell, they always think their objects are worth more than other people are willing to pay for them.

The first experimental demonstration, by Richard Thaler and Danny Kahneman in the early days of behavioral economics, was that if you just give people, students, coffee cups, and then you ask them to sell them, they always ask more than what someone’s willing to pay. It turns out it’s not just coffee cups; it’s wine, it’s chocolate, it’s anything, basically. There’s been quite a bit of work done on the endowment effect now. As I say, it’s been looked at in different species, and as for the brain mechanisms, having to sell something at a lower price is seen as quite painful, like loss aversion: it triggers the same pain centers if you think you’re going to lose out on a deal.

What is it about the objects that give us this self-evaluated sense? Well, I think James spoke of this, again, William James commented on the way that we use objects to extend our self. Russell Belk is a marketing psychologist. He has also talked about the extended self in terms of objects. As I say, this is something that I think marketers know in that they create certain quality brands that are perceived to signal to others how good your social status is.

It’s something in us, but it may not be universal because there are tribes, there are some recent reports from nomadic tribes in central Africa, who don’t seem to have this sense of ownership. It might be a reflection more of the fact that a lot of this work has been done in the West where we’re very individualistic, and of course individualism almost creates a lot of endowment ideas and certainly supports the endowment, materialism that we see. But this is an area I’d like to do more work with because we have not found any evidence of the endowment effect in children below five, six years of age. I’m interested: is this something that just emerges spontaneously? I suspect not. I suspect this is something that culture is definitely shaping. That’s my hunch, so that’s an empirical question I need to pick apart.

The irrational superstitious behaviors

Another line of research I’ve been working on in the past five years … this was a little bit like putting the cart before the horse, so I put forward an idea, it wasn’t entirely original. It was a combination of ideas of others, most notably Pascal Boyer. Paul Bloom, to some extent, had been thinking something similar. A bunch of us were interested in why religion was around. I didn’t want to specifically focus on religion. I wanted to get to the more general point about belief because it was my hunch that even a lot of atheists or self-stated atheists or agnostics, still nevertheless entertained beliefs which were pretty irrational. I wasn’t meaning irrational in a kind of behavioral economics type of way. I meant irrational in that there were these implicit views that would violate the natural laws as we thought about them. Violations of the natural laws I see as being supernatural. That’s what makes them supernatural. I felt that this was an area worth looking at. They’d been looked at 50, 60 years ago very much in the behaviorist association tradition.

B. F. Skinner famously wrote a paper on the superstitious behavior of pigeons, and he argued that if you simply set up a reinforcement schedule at random intervals, pigeons will adopt typical patterns of behavior that they think are somehow related to the reward, and then you could shape irrational superstitious behaviors. Now, that work has turned out to be a bit dubious, and I’m not sure it has stood the test of time. But in terms of people’s rituals and routines, it’s quite clear, and I know them in myself. There are these things that we do which are familiar, and we get a little bit irritated if we don’t get to do them; so most of us do entertain some degree of superstitious behavior.

At the time there was a lot of interest in religion and a lot of the hoo-ha about The God Delusion, and I felt that maybe we just need to redress this idea that it’s all to do with indoctrination, because I couldn’t believe the whole edifice of this kind of belief system was purely indoctrination. I’m not saying there’s not indoctrination, and clearly, religions are culturally transmitted. You’re not born to be Jewish or born to be Christian. But what I think religions do is they capitalize on a lot of inclinations that children have. Then I entered into a series of work, and my particular interest was this idea of essentialism and sacred objects and moral contamination.

We took a lot of the work that Paul Rozin had done, talking about things like killers’ cardigans, and we started to see if there were any empirical measures of transfer. For example, would you find yourself wanting to wash your hands more? Would you find priming effects for words related to good and evil, based on whether you had touched the object or not? For me there had to be this issue of physical contact. It struck me that this was why it wasn’t a pure association mechanism; it was actually something to do with a naïve belief that there was some biological entity through which moral contamination can somehow transfer.

We started to look, actually, not at children now, but at adults, because doing this sort of work with children is very difficult and probably somewhat controversial. But the whole area of research is premised on this idea that there are intuitive ways of seeing the world. Sometimes this is referred to as System One and System Two, or automatic and controlled. It reappears in a variety of psychological contexts. I just think about it as these unconscious, rapid systems which are triggered automatically. I think their origins are in children. Whilst you can educate people with a kind of slower System Two, if you like, you never eradicate the intuitive ways of seeing the world, because they were never taught in the first place. They’re always there. I suppose if you were to ask me what theory I hold that I haven’t yet proven, it’s this: I don’t think you ever throw away any belief system or any ideas that have been derived through these unconscious intuitive processes. You can supersede them, you can overwrite them, but they never go away, and they will reemerge under the right contexts. If you put people through stressful situations or overload them, you can see the reemergence of these kinds of ways of thinking. The empirical evidence seems to support that. They’ve got wrinkles in their brains. They’re never going to go away. You can try to override them, but they’re always there, and they will reappear under the right circumstances, which is why you see the reemergence under stress of a lot of irrational thinking.

For example, teleological explanations, the idea that everything is made for a purpose or a function, is a natural way to see the world. This is Deb Kelemen's work. You will find that people who considered themselves fairly rational and well educated will, nevertheless, default back to teleological explanations if you put them under a stressful timed kind of situation. So it’s a way of seeing the world that is never eradicated. I think that’s going to be a general principle, in the same way that a reflex, if you think about reflexes, that’s an unlearned behavioral response. You’re born with a whole set of reflexes. Many of them disappear, but they never entirely go away. They become typically reintegrated into more complex behaviors, but if someone goes into a coma, you can see the reflexes reemerging.

What we think is going on is that in the course of development, these very automatic behaviors become controlled by top-down processes from the cortex, all these higher order systems which are regulating and controlling and suppressing, trying to keep these things under wraps. But when the cortex is put out of action through a coma or head injury, then you can see many of these things reemerging again. I don’t see why there should be any point of departure from a motor system to a perceptual system, to a cognitive system, because they’re all basically patterns of neural firing in the brain, and so I don’t see why it can’t be the case that if concepts are derived through these processes, they could remain dormant and latent as well.

The hierarchy of representations in the brain

One of the things that has been fascinating me is the extent to which we can talk about the hierarchy of representations in the brain. Representations are literally re-presentations. That’s the language of the brain, that’s the mode of thinking in the brain: representation. It’s most likely that there is already representation wired into the brain. If you think about the sensory systems, the array of the eye, for example, is already laid out in a topographical representation of the external world, to which it has not yet been exposed. What happens is that this general layout becomes fine-tuned. We know of a lot of work showing that the arrangements of the sensory mechanisms do have a spatial arrangement, so that’s not learned in any sense. But these can become changed through experiences, which is why the early work of Hubel and Wiesel on the effects of abnormal environments showed that the general pattern could be distorted, but the pattern was already in place in the first place.

When you start to move beyond sensory into perceptual systems and then into cognitive systems, that’s when you get into theoretical arguments and the gloves come off. There are some people who argue that it has to be the case that there are certain primitives built into the conceptual systems. I’m talking about the work of, most notably, Elizabeth Spelke.  

There certainly seems to be a lot of perceptual ability in newborns in terms of constancies, noticing invariant aspects of the physical world. I don’t think I have a problem with any of that, but I suppose this is where the debates go. (…)

Shame in the East is something that is at least recognized as a major factor of identity

I’ve been to Japan a couple of times. I’m not an expert in the cultural variation of cognition, but clearly shame, or the avoidance of shame, is a major factor in motivation in eastern cultures. I think it reflects the sense of self-worth and value in eastern culture. It is very much a collective notion: they place a lot of emphasis on not letting the team down. I believe they even have a special word for that aspect or experience of shame that we don’t have. That doesn’t mean that it’s a concept that we can never entertain, but it does suggest that in the East this is something that is at least recognized as a major factor of identity.

Children don’t necessarily feel shame. I don’t think they’ve got a sense of self until well into their second year. They have the “I”, they have the notion of being, of having control. They will experience the willingness to move their arms, and I’m sure they make that connection very quickly, so they have this sense of self, in that “I” notion, but I don’t think they’ve got personal identity, and that’s one of the reasons that they don’t have much memory of that period; very few of us have much memory of our earliest times. Our episodic memories are very fragmented, sensory events. But from about two to three years on they start to get a sense of who they are. Knowing who you are means becoming integrated into your social environment, and part of becoming integrated into your social environment means acquiring a sense of shame. Below two or three years of age, I don’t think many children have a notion of shame. But from then on, as they have to become members of the social tribe, they have to be made aware of the consequences of being antisocial or of doing things that are not expected of them. I think that’s probably late in the acquisition.”

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, Essentialism, Edge, May 17, 2012.

The Illusion of the Self

"For me, an illusion is a subjective experience that is not what it seems. Illusions are experiences in the mind, but they are not out there in nature. Rather, they are events generated by the brain. Most of us have an experience of a self. I certainly have one, and I do not doubt that others do as well – an autonomous individual with a coherent identity and sense of free will. But that experience is an illusion – it does not exist independently of the person having the experience, and it is certainly not what it seems. That’s not to say that the illusion is pointless. Experiencing a self illusion may have tangible functional benefits in the way we think and act, but that does not mean that it exists as an entity. (…)

For most of us, the sense of our self is as an integrated individual inhabiting a body. I think it is helpful to distinguish between the two ways of thinking about the self that William James talked about. There is conscious awareness of the present moment that he called the “I,” but there is also a self that reflects upon who we are in terms of our history, our current activities and our future plans. James called this aspect of the self “me,” which most of us would recognize as our personal identity—who we think we are. However, I think that both the “I” and the “me” are actually ever-changing narratives generated by our brain to provide a coherent framework to organize the output of all the factors that contribute to our thoughts and behaviors.

I think it helps to compare the experience of self to subjective contours – illusions such as the Kanizsa pattern where you see an invisible shape that is defined entirely by the surrounding context. People understand that it is a trick of the mind, but what they may not appreciate is that the brain is actually generating the neural activation as if the illusory shape were really there. In other words, the brain is hallucinating the experience. There are now many studies revealing that illusions generate brain activity as if they existed. They are not real, but the brain treats them as if they were.

Now that line of reasoning could be applied to all perception, except that not all perception is an illusion. There are real shapes out there in the world and other physical regularities that generate reliable states in the minds of others. The reason the status of reality cannot be applied to the self is that it does not exist independently of the brain that is having the experience. It may appear to have a consistency and stability that make it seem real, but those properties alone do not make it so.

Similar ideas about the self can be found in Buddhism and the writings of Hume and Spinoza. The difference is that there is now good psychological and physiological evidence to support these ideas that I cover in the book. (…)

There are not many cognitive scientists who would doubt that the experience of I is constructed from a multitude of unconscious mechanisms and processes. Me is similarly constructed, though we may be more aware of the events that have shaped it over our lifetime. But neither is cast in stone and both are open to all manner of reinterpretation. As artists, illusionists, moviemakers, and more recently experimental psychologists have repeatedly shown, conscious experience is highly manipulable and context dependent. Our memories are also largely abstracted reinterpretations of events – we all hold distorted memories of past experiences. (…)

The book also covers the developmental processes that shape our brains from infancy onwards to create our identities, as well as the systematic biases that distort the content of our identity to form a consistent narrative. I believe much of that distortion and bias is socially relevant in terms of how we would like to be seen by others. We all think we would act and behave in a certain way, but the reality is that we are often mistaken. (…)

Q: What role do you think childhood plays in shaping the self?

Just about everything we value in life has something to do with other people. Much of that influence occurs early in our development, which is one reason why human childhoods are so prolonged in comparison to other species. We invest so much effort and time into our children to pass on as much knowledge and experience as possible. It is worth noting that other species that have long periods of rearing also tend to be more social and intelligent in terms of flexible, adaptive behaviors. Babies are born social from the start but they develop their sense of self throughout childhood as they move to become independent adults that eventually reproduce. I would contend that the self continues to develop throughout a lifetime, especially as our roles change to accommodate others. (…)

The role of social networking in the way we portray our self

There are some interesting phenomena emerging. There is evidence of homophily – the grouping together of individuals who share a common perspective, which is not too surprising. More interesting is evidence of polarization. Rather than opening up and exposing us to different perspectives, social networking on the Internet can foster more radicalization as we seek out others who share our positions. The more others validate our opinions, the more extreme we become. I don’t think we need to be fearful, and I am less concerned than the prophets of doom who predict the downfall of human civilization, but I believe it is true that the way we create the narrative of the self is changing.
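
A toy simulation makes the homophily-and-polarization dynamic concrete. The sketch below is an illustration in the spirit of the bounded-confidence opinion models of Deffuant and colleagues (my choice, not a model Hood proposes): agents only listen to opinions close to their own, each interaction pulls the pair together, and the population drifts into separate, self-validating clusters rather than consensus.

    # A minimal sketch, not Hood's own model: bounded-confidence opinion
    # dynamics in the spirit of Deffuant et al. Agents interact only with
    # opinions near their own (homophily), and each interaction pulls the
    # pair together (mutual validation).
    import random

    random.seed(1)
    opinions = [random.uniform(-1.0, 1.0) for _ in range(200)]
    CONFIDENCE = 0.3  # listen only to opinions within this distance
    PULL = 0.25       # how far each partner moves toward the other

    for _ in range(20000):
        a, b = random.sample(range(len(opinions)), 2)
        if abs(opinions[a] - opinions[b]) < CONFIDENCE:
            shift = PULL * (opinions[b] - opinions[a])
            opinions[a] += shift
            opinions[b] -= shift

    # With a narrow confidence bound, the population ends in a few tight,
    # mutually unreachable clusters: polarization, not consensus.
    print(sorted(round(x, 2) for x in opinions)[::20])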

Q: If the self is an illusion, what is your position on free will?

Free will is certainly a major component of the self illusion, but the two are not synonymous. Both are illusions, but the self illusion extends beyond the issues of choice and culpability to other realms of human experience. From what I understand, I think you and I share the same basic position about the logical impossibility of free will. I also think that compatibilism (the view that determinism and free will can co-exist) is incoherent. We certainly have more choices today to do things that are not in accord with our biology, and it may be true that we should talk about free will in a meaningful way, as Dennett has argued, but that seems irrelevant to the central problem of positing an entity that can make choices independently of the multitude of factors that control a decision. To me, the problem of free will is a logical impasse – we cannot choose the factors that ultimately influence what we do and think. That does not mean that we throw away the social, moral, and legal rulebooks, but when it comes to attributing praise and blame we need to be vigilant about the way our attitudes toward individuals will be challenged as we come to understand the factors (both material and psychological) that control our behaviors. (…)

The self illusion explains so many aspects of human behavior as well as our attitudes toward others. When we judge others, we consider them responsible for their actions. But was Mary Bale, the bank worker from Coventry who was caught on video dropping a cat into a garbage can, being true to her self? Was Mel Gibson being himself during his drunken anti-Semitic rant, or was he under the influence of something else? What motivated Congressman Weiner to text naked pictures of himself to women he did not know? In the book, I consider some of the extremes of human behavior, from mass murderers with brain tumors that may have made them kill to rising politicians who self-destruct. By rejecting the notion of a core self and considering how we are a multitude of competing urges and impulses, I think it is easier to understand why we suddenly go off the rails. It explains why we act, often unconsciously, in a way that is inconsistent with our self image – or the image of our self as we believe others see us.

That said, the self illusion is probably an inescapable experience we need for interacting with others and the world, and indeed we cannot readily abandon or ignore its influence, but we should be skeptical that each of us is the coherent, integrated entity we assume we are.

Bruce Hood, Canadian-born experimental psychologist who specialises in developmental cognitive neuroscience, Director of the Bristol Cognitive Development Centre, based at the University of Bristol, interviewed by Sam Harris, The Illusion of the Self, Sam Harris blog, May 22, 2012.

See also:

Existence: What is the self?, Lapidarium notes
Paul King on what is the best explanation for identity
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking

Apr
26th
Thu
permalink

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

"That’s what we do with all of our art. A beautiful cathedral, a beautiful painting, a beautiful song—all of those are ecstatic visions held in stasis; in some sense the artist is saying “here is a glimpse I had of something ephemeral and fleeting and magical, and I’m doing my best to instantiate that into stone, into paint, into stasis.” And that’s what human beings have always done, we try to capture these experiences before they go dim, we try to make sure that what we glimpse doesn’t fade away before we get hungry or sleepy later. (…)

We want to transcend our biological limitations. We don’t want biology or entropy to interrupt the ecstasy of consciousness. Consciousness, when it’s unburdened by the body, is something that’s ecstatic; we use the mind to watch the mind, and that’s the meta-nature of our consciousness, we know that we know that we know, and that’s such a delicious feeling, but when it’s unburdened by biology and entropy it becomes more than delicious; it becomes magical. I mean, think of the unburdening of the ego that takes place when we watch a film; we sit in a dark room, it’s sort of a modern church, we turn out the lights and an illumination beams out from behind us creating these ecstatic visions. We lose ourselves in the story, we experience a genuine catharsis, the virtual becomes real—it’s total transcendence, right? (…)

This haunting idea of the passing of time, of the slipping away of the treasured moments of our lives, became a catalyst for my thinking a lot about mortality. This sense that the moment is going to end, the night will be over, and that we’re all on this moving walkway headed towards death; I wanted a diversion from that reality. In Ernest Becker's book The Denial of Death, he talks about how the neurotic human condition is not a product of our sexual repression, but rather our repression in the face of death anxiety. We have this urgent knot in our stomach because we’re keenly aware that we’re mortal, and so we try to find these diversions so that we don’t think about it—and these have manifested into the religious impulse, the romantic impulse, and the creative impulse.

As we become increasingly sophisticated, cosmopolitan people, the religious impulse is less relevant. The romantic impulse has served us well, particularly in popular culture, because that’s the impulse that allows us to turn our lovers into deities; we say things like “she’s like salvation, she’s like the wind,” and we end up worshipping our lovers. We invest in this notion that to be loved by someone is to be saved by someone. But ultimately no relationship can bear the burden of godhood; our lovers reveal their clay feet and their frailties and they come back down to the world of biology and entropy.

So then we look for salvation in the creative impulse, this drive to create transcendent art, or to participate in aesthetic arrest. We make beautiful architecture, or beautiful films that transport us to this lair where we’re like gods outside of time. But it’s still temporal. The arts do achieve that effect, I think, and so do technologies to the extent that they’re extensions of the human mind, extensions of our human longing. In a way, that is the first pathway to being immortal gods. Particularly with technologies like the space shuttle, which make us into gods in the sense that they let us hover over the earth looking down on it. But then we’re not gods, because we still age and we die.

But even if you see the singularity only as a metaphor, you have to admit it’s a pretty wonderful metaphor, because human nature, if nothing else, consists of this desire to transcend our boundaries—the entire history of man from hunter-gatherer to technologist to astronaut is this story of expanding and transcending our boundaries using our tools. And so whether the metaphor works for you or not, that’s a wonderful way to live your life, to wake up every day and say, “even if I am going to die I am going to transcend my human limitations.” And then if you make it literal, if you drop this pretense that it’s a metaphor, you notice that we actually have doubled our lifespan, we really have improved the quality of life across the world, we really have created magical devices that allow us to send our thoughts across space at nearly the speed of light. We really are on the cusp of reprogramming our biology like we program computers.

All of a sudden this metaphor of the singularity spills over into the realm of the possible, and it makes it that much more intoxicating; it’s like going from two dimensions to three dimensions, or black and white to color. It just keeps going and going, and it never seems to hit the wall that other ideas hit, where you have to stop and say to yourself “stop dreaming.” Here you can just kind of keep dreaming, you can keep making these extrapolations of Moore’s Law, and say “yeah, we went from building-sized supercomputers to the iPhone, and in forty-five years it will be the size of a blood cell.” That’s happening, and there’s no reason to think it’s going to stop.
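
The blood-cell line is checkable exponential arithmetic. A rough sketch with assumed numbers (none of them Silva's: a smartphone-scale volume of ~80 cm³, a red blood cell of ~90 femtoliters, and device volume halving with each Moore's-law doubling) shows what cadence the forty-five-year figure would imply:

    # Back-of-envelope for the supercomputer-to-blood-cell extrapolation.
    # All quantities below are illustrative assumptions.
    import math

    phone_cm3 = 80.0
    cell_cm3 = 90e-15 * 1000.0  # 90 fL; 1 L = 1000 cm^3, so 1 fL = 1e-12 cm^3
    halvings = math.log2(phone_cm3 / cell_cm3)

    print(round(halvings))            # ~40 volume halvings needed
    print(round(halvings * 1.5))      # ~60 years at the classic 18-month cadence
    print(round(45 * 12 / halvings))  # ~14-month cadence implied by 45 years

So the forty-five-year figure assumes a somewhat faster doubling than the classic 18-month Moore cadence, which is consistent with the accelerating-returns reading Silva is invoking.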

Q: Going through your videos, I noticed that one vision of the singularity that you keep returning to is this idea of “substrate-independent minds.” Can you explain what a substrate independent mind is, and why it makes for such a compelling vision of the future?

Jason Silva: That has to do with what’s called STEM compression, which is this notion that all technologies become compressed in terms of space, time, energy and matter (STEM) as they evolve. Our brain is a great example of this; it’s got this dizzying level of complexity for such a small space, but the brain isn’t optimal. The optimal scenario would be to have brain-level complexity, or even higher-level complexity, in something that’s the size of a cell. If we radically upgrade our bodies with biotech, we might find that in addition to augmenting our biological capabilities, we’re also going to be replacing more of our biology with non-biological components, so that things are backed up and decentralized and not subject to entropy. More and more of the data processing that makes up our consciousness is going to be non-biological, and eventually we might be able to discard biology altogether, because we’ll have finally invented a computational substrate that supports the human mind.

At that point, if we’re doing computing at the nano scale, or the femto scale, which is even smaller, you could see extraordinary things. What if we could store all of the computing capacity of the world’s computer networks in something that operates at the femto scale? What if we could have thinking, dreaming, conscious minds operating at the femto scale? That would be a substrate independent mind.

You can even go beyond that. John Smart has this really interesting idea he calls the Transcension Hypothesis. It’s this idea that all civilizations hit a technological singularity, after which they stop expanding outwards, and instead become subject to STEM compression that pushes them inward into denser and denser computational states until eventually we disappear out of the visible universe, and we enter into a black-hole-like condition. So you’ve got digital minds exponentially more powerful than the ones we use today, operating in the computational substrate, at the femto scale, and they’re compressing further and further into a black hole state, because a black hole is the most efficient computational substrate that physics has ever described. I’m not a physicist, but I have read physicists who say that black holes are the ultimate computers, and that’s why the whole STEM compression idea is so interesting, especially with substrate independent minds; minds that can hop back and forth between different organizational structures of matter.  (…)

With technology, we’ve been doing the same thing we used to with religion, which is to dream of a better way to exist, but technology actually gives you real ways to extend your thoughts and your vision. (…)

The mind is always participating in these feedback loops with the spaces it resides in; whatever is around us is a mirror that we’re holding up to ourselves, because everything we’re thinking about we’re creating a model of in our heads. So when you’re in constrained spaces you’re having constrained thoughts, and when you’re in vast spaces you have vast thoughts. So when you get to sit and contemplate actual outer space, solar systems, and galaxies, and super clusters—think about how much that expands your inner world. That’s why we get off on space.

I also get off on synthetic biology, because I love the metaphors that exist between technology and biology: the idea that we may be able to reprogram the operating system, or upgrade the software of our biology. It’s a great way to help people understand what’s possible with biology, because people already understand the power we have over the digital world—we’re like gods in cyberspace, we can make anything come into being. When the software of biology is subject to that very same power, we’re going to be able to do those same things in the realm of living things. There’s this Freeman Dyson line that I have quoted a million times in my videos, to the point where people are actually calling me out about it, but the reason I keep coming back to it is that it’s so emblematic of my awe in thinking about this stuff—he says that "in the future, a new generation of artists will be writing genomes as fluently as Blake and Byron wrote verses." It’s a really well-placed analogy, because the alphabet is a technology; you can use it to engender alphabetic rapture with literature and poetry. Guys like Shakespeare and Blake and Byron were technologists who used the alphabet to engineer wonderful things in the world. With biology, new generations of artists will be able to perform the same miracles that Shakespeare and those guys did with words, only they’ll be doing it with genes.

Q: You romanticize technology in some really interesting ways; in one of your videos you say that if you could watch the last century in time lapse you would see ideas spilling out of the human mind and into the physical universe. Do you expect that interface between the mind and the physical to become even more lubricated as time passes? Or are there limits, physical or otherwise, that we’re eventually going to run up against?

Jason Silva: It’s hard to say, because as our tools become more powerful they shrink the buffer time between our dreams and our creations. Today we still have this huge lag time between thinking and creation. We think of something, and then we have to go get the stuff for it, and then we have to build it—it’s not like we can render it at the speed of thought. But eventually it will get to the point where it will be like that scene in Inception where he says that we can create and perceive our world at the same time. Because, again, if you look at human progress in time lapse, it is like that scene in Inception. People thought “airplane, aviation, jet engine” and then those things were in the world. If you look at the assembly line of an airplane in time lapse it actually looks self-organizing; you don’t see all of these agencies building it, instead it’s just being formed. And when you see the earth as the biosphere, as this huge integrated system, then you see this stuff just forming over time, just popping into existence. There’s this process of intention, imagination and instantiation, and the buffer time between each of those steps is getting smaller and smaller. (…)”

Jason Silva, Venezuelan-American television personality, filmmaker, gonzo journalist and founding producer/host for Current TV, A Timothy Leary for the Viral Video Age, The Atlantic, Apr 12, 2012.

Turning Into Gods - ‘Concept Teaser’ by Jason Silva

"Turning Into Gods is a new feature length documentary exploring mankind’s journey to ‘play jazz with the universe’… it is a story of our ultimate potential, the reach of our intelligence, the scope of our scientific and engineering abilities and the transcendent quality of our heroic and noble calling.

Thinking, feeling, striving, man is what Pierre Teilhard de Chardin called “the ascending arrow of the great biological synthesis.”… today we walk a tightrope between ape and Nietzsche’s Overman… how will we make it through, and what is the texture and color of our next refined and designed evolutionary leap? (…)

"We’re on the cusp of a bio-tech/nanotech/artificial-intelligence revolution that will open up new worlds of exploration. And we should open our minds to the limitless, mind-boggling possibilities.”

Why We Could All Use a Heavy Dose of Techno-optimism, Vanity Fair, May 7, 2010.

See also:

‘To understand is to perceive patterns’, Lapidarium notes
Wildcat and Jason Silva on immortality
☞ Jason Silva, The beginning of infinity (video)
Kevin Kelly on information, evolution and technology: ‘The essence of life is not energy but ideas’, Lapidarium notes
Kevin Kelly on Why the Impossible Happens More Often
Waking Life ☞ animated film focuses on the nature of dreams, consciousness, and existentialism. Eamonn Healy speaks about telescopic evolution and the future of humanity
Mark Changizi on Humans, Version 3.0.
Science historian George Dyson: Unravelling the digital code
Technology tag on Lapidarium notes

Apr
25th
Wed
permalink

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism

Waking Life is an American animated film (rotoscoped from live action), directed by Richard Linklater and released in 2001. The entire film was shot on digital video, and then a team of artists drew stylized lines and colors over each frame using computers.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
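
The quote's arithmetic is easy to verify. Here is a minimal sketch, assuming only what the quote states: that the rate of progress doubles every decade, with progress counted in "years at today's rate."

    # A minimal sketch of the quote's arithmetic: the rate of progress
    # doubles each decade, and progress is measured in "years at the
    # year-2000 rate."
    def progress_years(decades: int, rate: float = 1.0) -> float:
        total = 0.0
        for _ in range(decades):
            total += 10 * rate  # each decade contributes 10 calendar years * current rate
            rate *= 2           # the paradigm-shift rate doubles each decade
        return total

    print(progress_years(10))  # 10230.0 -- already ~10,000 "years" for one century

Kurzweil's own figure of roughly 20,000 years (200 centuries) comes out larger because, as the parenthetical in the quote says, in his model the rate of acceleration is itself growing, so the doubling time keeps shrinking rather than staying fixed at a decade.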

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to the agricultural revolution, the scientific revolution and the industrial revolution, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo in a way that could be imagined as an enormous instantaneous fulfillment of human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution.” Eamonn Healy speaks about telescopic evolution and the future of humanity, transcript by Brandon Sergent.

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Mar
21st
Wed
permalink

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe

Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED magazine said,

“Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness.” Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences. Teilhard wrote, “The living world is constituted by consciousness clothed in flesh and bone.”

He argued that the primary vehicle for increasing complexity-consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change, and was part of evolution’s epic quest for self-awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.

I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of the evolution of information technology, the very character of our minds shifts. McLuhan's Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on "Turn on, tune in, drop out" and was very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so… Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language, music, etc. on our consciousness, and thereby offer an awareness of our ability to affect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes" is that it is often highly reductionistic - it often forgets that ideas have an ecology too, they must be "cultured." Here I would argue, drawing on Lawrence Lessig's work on the commons, that the “brain” is a necessary but insufficient “spawning ground” for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann's 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins's definition of a meme is quite powerful: 

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind.” [The replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.

This notion that “the vector of transmission is language" is very compelling.. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change.. Kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy that a mind feels upon having a transcendent realization that feels as if it advances evolution: 

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as "pattern integrities", Ray Kurzweil says we are "patterns of information". James Gleick's new book, The Information, says that “information may be more primary than matter”..  what do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely? 

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” - you can’t predict the outcome of even many trivial programs without running the program. So to say that “information may be more primary than matter” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self-transforming information” is just as accurate a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that ”What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions, […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as abstraction and take it out of its context – like a quotation or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, dwindling the resources it takes to store or process it. But “Information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I actually think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore the beginning of the “informatic” tradition of biology that Dawkins speaks in here, knew this very well and insisted on the importance of first person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together: again, John 1:1 is closer to Dawkins's position here than he may be comfortable with: “In the Beginning was the word, and the word was god, and the word was with god” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that they are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… what is the connection? What is the pattern that connects here? 

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it - can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are. 

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we don’t know therefore that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2,400-year-old organism that is mostly invisible to a very short-lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of the plant planet as well as the “mycelium” archetype. One powerful aspect of treating the mycelium as our archetype as humanity is that it is “distributed” - it does not operate via a center of control but through cross connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger scale reality of our dense ecological interconnection. In the book I point to the Upanishads' “Tat Tvam Asi” as a way of comprehending how we can both be totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality.. These notions echo some of Terence McKenna's ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality? 

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological storyteller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language! 

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insights I mentioned above: that writing changes how we think, and therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels! Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media - run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the Arctic ice sheet – remember that? - to a putatively civilian political structure Eisenhower rightly warned us about: the military-industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. In my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity, I called these “rhetorical softwares.”

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” - subjective experience – is all we ever experience, so if we change it, obviously we will see a change in what we label “external” reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A Möbius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a Möbius strip, the “inside” becomes “outside” and the “outside” becomes “inside.”

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps; root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” - our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are iPods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” - the mindset and context of their use - I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry? 

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who won the Nobel prize with James Watson and Maurice Wilkins for determining the structure of DNA in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks to the use of LSD in “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype - suggest that, used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the linguistic feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” - recipes of code - while the mainstream view was thinking of it implicitly as semantics, as “words with meaning.”
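
Mullis's algorithmic view of information is easy to make concrete: PCR is a short recipe whose output grows exponentially, with each thermal cycle ideally doubling the copies of the target sequence. A minimal sketch (the efficiency parameter is an illustrative assumption of mine, not something from the interview):

    # PCR as a "recipe of code": each thermal cycle ideally doubles the
    # number of copies of the target DNA sequence, so n cycles yield 2**n.
    # The efficiency parameter is illustrative; real cycles double imperfectly.
    def pcr_copies(cycles: int, start: int = 1, efficiency: float = 1.0) -> float:
        return start * (1 + efficiency) ** cycles

    print(pcr_copies(30))                  # ~1.07e9 copies from one template
    print(pcr_copies(30, efficiency=0.9))  # ~2.3e8 at 90% per-cycle efficiency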

iPods, Internet, etc.: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis, while I was working on the book, and it was really confirmation of a lot of what I was seeing indicated by my conceptual model of what is going on, which is as follows: sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” - tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware) and Mark Pesce were or are all psychonauts.

Q: Cyborg Anthropologist Amber Case has written about Techno-social wormholes.. the instant compression of time and space created every time we make a telephone call…  What do you make of this compression of time and space made possible by the engineering “magic” of technology? 

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception”.) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn it off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall - and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that enable such a collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically-induced lucid dream where we can live forever and our world is built out of our memories and “sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved.” Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgeon more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities,” imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on,” the former by suggesting that “Tat Tvam Asi” – there is an aspect of you that is connected to Everything – and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike,” it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that "to understand is to perceive patterns"… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns… They help us “see more.” What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in silicon valley says that users …

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke's description of ego death. This is by definition an experience in which our maps of the world are humbled. In the breakdown of our ordinary worldview - such as when a (now formerly) secular being such as myself finds himself feeling unmistakably sacred - we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get a glimpse of reality, even an involuntary one. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or koans - suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality - reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!” Huxley also said that psychedelics offered a respite from “the throttling embrace of the self,” suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the Big Bang - he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of the livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it. "..

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.”…  

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stops and starts. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence. Living systems, according to an emerging paradigm of Stanley Salthe, Rob Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy, and the universe is seeking to dissipate ever greater amounts of entropy.
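Kelly’s per-gram comparison is, in fact, easy to sanity-check. Below is a rough back-of-envelope sketch; the wattage and mass figures are order-of-magnitude assumptions of mine, not data from the interview:

```python
# Back-of-envelope check of Kelly's energy-density comparison.
# All figures are order-of-magnitude assumptions, for illustration only.

SUN_LUMINOSITY_W = 3.8e26   # total solar output in watts
SUN_MASS_G = 2.0e33         # solar mass in grams

HUMAN_METABOLISM_W = 100.0  # resting human, roughly
HUMAN_MASS_G = 7.0e4        # about 70 kg

CHIP_POWER_W = 50.0         # a busy CPU die, roughly
CHIP_MASS_G = 10.0          # a few grams of silicon and packaging

for name, watts, grams in [
    ("sun", SUN_LUMINOSITY_W, SUN_MASS_G),
    ("human", HUMAN_METABOLISM_W, HUMAN_MASS_G),
    ("chip", CHIP_POWER_W, CHIP_MASS_G),
]:
    print(f"{name:>5}: {watts / grams:.1e} W/g")

# sun:   ~1.9e-07 W/g  (averaged over its whole mass, the sun is dilute)
# human: ~1.4e-03 W/g  (four orders of magnitude denser)
# chip:  ~5.0e+00 W/g  (denser still, as Kelly claims)
```

Averaged over its entire mass, the sun turns out to be remarkably dilute, which is why a living gram – or a gram of working silicon – can pass more energy per second than a stellar one.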

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “We are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and I need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names if one suggests otherwise – animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate, along with the dependence of many of us – we who are more bacterial than human by dry weight – upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity,” Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English and Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Nov
25th
Fri
permalink

Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language


                                        Jamie Marie Waelchli, Thought Map No. 8

Human language, coupled with human maternal care, enables the consciousness to bifurcate very early and extensively. Without the self-reflective properties inherent in a reflexive agent-recipient language, and without the objectification of the human infant — a very different kind of humanity would arise.

Human consciousness, as constructed by human language, becomes the vehicle through which the self-reflective human mind envisions time. Language enables the viewer to reflect upon the actions of the doer (and the actions of one’s internal body), while projecting forward and backward — other possible bodily actions — into imagined space/time. Thus the projected and imagined space/time increasingly becomes the conscious world and reality of the viewer who imagines or remembers actions mapped onto that projected plan. The body thus becomes a physical entity progressing through the imaged world of the viewer. As the body progresses through this imaged world, the viewer also constructs a way to mark progress from one imagined event to another. Having once marked this imagined time into units, the conscious viewer begins to order the anticipated actions of the body into a linear progression of events.

A personal narrative then arises through the vehicle of language. Indeed a personal narrative is required, expected and placed upon every human being, by the very nature of human language. This personal narrative becomes organized around the anticipated bodily changes that it is imagined will take place from birth to old age. The power of the bifurcated mind, through linguistically encoded expectancies, shapes and molds all of human behavior. When these capacities are jointly executed by other similar minds — the substrate of human culture is manufactured.

Human culture, because it rides upon a manufactured space/time self-reflective substrate, is unique. Though it shares some properties with animal culture, it is not merely a natural Darwinian extension of animal culture. It is based on constructed time/space, constructed mental relationships, constructed moral responsibilities, and constructed personal narratives — and individuals, must, at all times, justify their actions toward another on the basis of their co-constructed expectancies.

Human Consciousness seems to burst upon the evolutionary scene in something of an explosion between 40,000 and 90,000 years ago. Trading emerges, art emerges, and symboling ability emerges with a kind of intensity not noted for any previous time in the archeological record. (…)

Humans came with a propensity to alter the world around them wherever they went. We were into object manipulation in all aspects of our existence, and wherever we went we altered the landscape. We did not accept the natural world as we found it — we set about refashioning our worlds according to our own needs and desires. From the simple act of intentionally setting fires to eliminate underbrush, to the exploration of outer space, humanity manifested the view that it was here to control its own destiny, by changing the world around it, as well as by individuals’ changing their own appearances.

We put on masks and masqueraded about the world, seeking to make the world conform to our own desires, in a way no other species emulated. In brief, the kind of language that emerged between 40,000 and 90,000 years ago, riding upon the human anatomical form, changed us forever, and we began to pass that change along to future generations.

While Kanzi and family are bonobos, the kind of language they have acquired — even if they have not manifested all major components yet — is human language as you and I speak it and know it. Therefore, although their biology remains that of apes, their consciousness has begun to change as a function of the language, the marks it leaves on their minds and the epigenetic marks it leaves on the next generation. (Epigenetic: chemical markers which become attached to segments of genes during the lifetime of an individual are passed along to future generations, affecting which genes will be expressed in succeeding generations.) They explore art, they explore music, they explore creative linguistic negotiation, they have an autobiographical past and they think about the future. They don’t do all these things with human-like proficiency at this point, but they attempt them if given opportunity. Apes not so reared do not attempt to do these things.

What kind of power exists within the kind of language we humans have perfected? Does it have the power to change biology across time, if it impacts the biological form upon conception? Science has now become aware of the power of initial conditions, through chaos theory, the work of Mandelbrot with fractal geometric forms, and the work of Wolfram and the patterns that can be produced by digital reiterations of simple and only slightly different starting conditions. Within the fertilized egg lie the initial starting conditions of every human.
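The Wolfram point lends itself to a small demonstration. Here is a minimal sketch (toy code of my own, not from the essay) of an elementary cellular automaton, Rule 30, in which two starting tapes differing by a single cell rapidly diverge:

```python
# Minimal sketch: Wolfram's elementary cellular automaton, Rule 30.
# Two initial tapes that differ in one cell quickly end up far apart --
# the "power of initial conditions" in purely digital form.

RULE = 30  # the rule number encodes the update table in its bits

def step(cells, rule=RULE):
    """One synchronous update of a circular tape of 0/1 cells."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(cells, steps):
    for _ in range(steps):
        cells = step(cells)
    return cells

width, steps = 64, 32
a = [0] * width
a[width // 2] = 1       # one live cell in the middle
b = list(a)
b[width // 2 + 1] = 1   # the same tape with one extra live cell

diff = sum(x != y for x, y in zip(run(a, steps), run(b, steps)))
print(f"cells differing after {steps} steps: {diff}")
```

The single-cell difference in the starting condition spreads across a large fraction of the tape within a few dozen steps, which is the essay’s point about initial conditions in miniature.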

We also now realize that epigenetic markers from parental experience can set these initial starting conditions, determining such things as the order, timing, and patterning of gene expression profiles in the developing organism. Thus while the precise experience and learning of the parents is not passed along, the effects of those experiences, in the form of genetic markers that have the power to affect the developmental plan of the next generation during the extraordinarily sensitive conditions of embryonic development, are transmitted. Since language is the most powerful experience encountered by the human being and since those individuals who fail to acquire human language are inevitably excluded from (or somehow set apart in) the human community, it is reasonable to surmise that language will, in some form, transmit itself through epigenetic mechanisms.

When a human being enters into a group of apes and begins to participate in the rearing of offspring, different epigenetic markers have the potential to become activated. We already know, for example, that in human beings, expectancies or beliefs can affect gene activity. The most potent of the epigenetic markers would most probably arise from the major difference between human and ape infants. Human infants do not cling, ape infants do. When ape infants are carried like human infants, they begin to develop eye/hand coordination from birth. This sets the developmental trajectory of the ape infant in a decidedly human direction — that of manipulating the world around it. Human mothers, unlike ape mothers, also communicate their intentions linguistically to the infant. Once an intention is communicated linguistically, it can be negotiated, so there arises an intrinsic motivation to tune into and understand such communications on the part of the ape infant. The ‘debate’ in ape language, which has centered on whether they have it or they don’t, has missed the point. This debate has ignored the key rearing variables that differ dramatically across the studies. Apart from Kanzi and family, all other apes in these studies are left alone at night and drilled on associative pairings during the day.”

Sue Savage-Rumbaugh is a primatologist best known for her work with two bonobos, Kanzi and Panbanisha, investigating their use of “Great Ape language” using lexigrams and computer-based keyboards. Until recently she was based at Georgia State University’s Language Research Center in Atlanta.

To read full essay click Human Language—Human Consciousness, National Humanities Center, Jan 2nd, 2011

See also:

John Shotter on encounters with ‘Other’ - from inner mental representation to dialogical social practices
Do thoughts have a language of their own? The language of thought hypothesis, Lapidarium notes

Sep
12th
Mon
permalink

Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking

    

"The power of settings, the power of priming, and the power of unconscious thinking, all of those are a major change in psychology. I can’t think of a bigger change in my lifetime. You were asking what’s exciting? That’s exciting, to me."

If you want to characterize how something is done, then one of the most powerful ways of characterizing the way the mind does anything is by looking at the errors that the mind produces while it’s doing it because the errors tell you what it is doing. Correct performance tells you much less about the procedure than the errors do.

We focused on errors. We became completely identified with the idea that people are generally wrong. We became like prophets of irrationality. We demonstrated that people are not rational. (…)

That was 40 years ago, and a fair amount has happened in the last 40 years. Not so much, I would say, about the work that we did in terms of the findings that we had. Those pretty much have stood, but it’s the interpretation of them that has changed quite a bit. It is now easier than it was to speak about the mechanisms that we had very little idea about, and to speak about, to put in balance the flaws that we were talking about with the marvels of intuition. (…)

Flaws

This is something that happens quite a lot, at least in psychology, and I suppose it may happen in other sciences as well. You get an impression of the relative importance of two topics by how much time is spent on them when you’re teaching them. But you’re teaching what’s happening now, you’re teaching what’s recent, what’s current, what’s considered interesting, and so there is a lot more to say about flaws than about marvels. (…)

We understand the flaws and the marvels a little better than we did. (…)

One way a thought can come to mind involves orderly computation, and doing things in stages, and remembering rules, and applying rules. Then there is another way that thoughts come to mind. You see this lady, and she’s angry, and you know that she’s angry as quickly as you know that her hair is dark. There is no sharp line between intuition and perception. You perceive her as angry. Perception is predictive. You know what she’s going to say, or at least you know something about what it’s going to sound like, and so perception and intuition are very closely linked. In my mind, there never was a very clean separation between perception and intuition. Because of the social context we’re in here, you can’t ignore evolution in anything that you do or say. But for us, certainly for me, the main thing in the evolutionary story about intuition, is whether intuition grew out of perception, whether it grew out of the predictive aspects of perception.

If you want to understand intuition, it is very useful to understand perception, because so many of the rules that apply to perception apply as well to intuitive thinking. Intuitive thinking is quite different from perception. Intuitive thinking has language. Intuitive thinking has a lot of world knowledge organized in different ways than mere perception. But some very basic characteristics that we’ll talk about of perception are extended almost directly into intuitive thinking.

What we understand today much better than what we did then is that there are, crudely speaking, two families of mental operations, and I’ll call them “Type 1” and “Type 2” for the time being because this is the cleaner language. Then I’ll adopt a language that is less clean, and much more useful.

Type 1 is automatic, effortless, often unconscious, and associatively coherent, and I’ll talk about that. And Type 2 is controlled, effortful, usually conscious, tends to be logically coherent, rule-governed. Perception and intuition are Type 1— it’s a rough and crude characterization. Practiced skill is Type 1, that’s essential, the thing that we know how to do like driving, or speaking, or understanding language and so on, they’re Type 1. That makes them automatic and largely effortless, and essentially impossible to control.

Type 2 is more controlled, slower, is more deliberate. (…) Type 2 is who we think we are. I would say that, if one made a film on this, Type 2 would be a secondary character who thinks that he is the hero because that’s who we think we are, but in fact, it’s Type 1 that does most of the work, and it’s most of the work that is completely hidden from us. (…)

'Associative coherence'

Everything reinforces everything else, and that is something that we know. You make people recoil; they turn negative. You make people shake their heads (you put earphones on people’s heads, and you tell them we’re testing those earphones for integrity, so we would like you to move your head while listening to a message, and you have them move their head this way, or move their head that way, and you give them a political message) they believe it if they’re doing “this”, and they don’t believe it if they’re doing “that”. Those are not huge effects, but they are effects. They are easily shown with a conventional number of subjects. It’s highly reliable.

The thing about the system is that it settles into a stable representation of reality, and that is just a marvelous accomplishment. That’s a marvel. That’s not a flaw, that’s a marvel. Now, coherence has its cost. Coherence means that you’re going to adopt one interpretation in general. Ambiguity tends to be suppressed. This is part of the mechanism that you have here: ideas activate other ideas, and the more coherent they are, the more likely they are to activate each other. Other things that don’t fit fall by the wayside. We’re enforcing coherent interpretation. We see the world as much more coherent than it is.

That is something that we see in perception, as well. You show people ambiguous stimuli. They’re not aware of the ambiguity. I’ll give you an example. You hear the word “bank”, and most people interpret “bank”, as a place with vaults, and money, and so on. But in the context, if you’re reading about streams and fishing, “bank” means something else. You’re not conscious when you’re getting one that you are not getting the other. If you are, you’re not conscious ever, but it’s possible that both meanings are activated, but that one gets quickly suppressed. That mechanism of creating coherent interpretations is something that happens, and subjectively what happens (I keep using the word “happens” - this is not something we do, this is something that happens to us). The same is true for perception. For Plato it was ideas sort of thrusting themselves into our eyes, and that’s the way we feel. We are passive when it comes to System 1. When it comes to System 2, and to deliberate thoughts, we are the authors of our own actions, and so the phenomenology of it is radically different. (…)

'What you see is all there is'

It is a mechanism that tends not to be sensitive to information it does not have. It’s very important to have a mechanism like that. (…) This is a mechanism that takes whatever information is available, and makes the best possible story out of the information currently available, and tells you very little about information it doesn’t have. So what you can get are people jumping to conclusions.

'Machine for jumping to conclusions'

The jumping to conclusions is immediate, from very small samples, and furthermore from unreliable information. You can give details and say this information is probably not reliable, and unless it is rejected as a lie, people will draw full inferences from it. What you see is all there is. (…)

Overconfidence

The confidence that people have in their beliefs is not a measure of the quality of evidence, it is not a judgment of the quality of the evidence but it is a judgment of the coherence of the story that the mind has managed to construct. Quite often you can construct very good stories out of very little evidence, when there is little evidence, no conflict, and the story is going to end up good. People tend to have great belief, great faith in stories that are based on very little evidence. It generates what Amos [Tversky] and I call “natural assessments,” that is, computations that get performed automatically. For example, we get computations of the distance between us and other objects, whether or not that’s something that we intend to do – this is something that happens to us in the normal run of perception.

But we don’t compute everything. There is a subset of computations that we perform, and other computations we don’t.

You see this array of lines.

There is evidence – some of it collected by my wife – that people register the average length of these lines effortlessly, in one glance, while doing something else. The extraction of information about a prototype is immediate. But if you were asked, what is the sum, what is the total length of these lines? You can’t do this. You got the average for free; you didn’t get the sum for free. In order to get the sum, you’d have to get an estimate of the number, and an estimate of the average, and multiply the average by the number, and then you’ll get something. But you did not get that as a natural assessment. So there is a really important distinction between natural assessment and things that are not naturally assessed. There are questions that are easy for the organism to answer, and other questions that are difficult for the organism to answer, and that makes a lot of difference.
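Kahneman’s two-step estimate is easy to make concrete. A toy sketch, with invented glance-level numbers, of the deliberate sum-by-multiplication he describes:

```python
# Toy version of the sum estimate (invented numbers, for illustration):
# the average comes "for free" as a glance-level prototype, but the sum
# must be assembled deliberately from two separate estimates.

glance_average_cm = 3.0  # prototype length, extracted effortlessly
glance_count = 8         # rough estimate of how many lines there are

estimated_total_cm = glance_average_cm * glance_count  # the deliberate step
print(f"estimated total length: {estimated_total_cm:.0f} cm")
```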

While I’m at it, the difference between average and sums is an important difference because there are variables that have the characteristic that I will call “sum-like.” They’re extensional. They’re sum-like variables. Economic value is a sum-like variable. (…)”

Daniel Kahneman, Israeli-American psychologist and Nobel laureate. He is notable for his work on the psychology of judgment and decision-making, behavioral economics and hedonic psychology. To see and read the full lecture click The Marvels and the Flaws of Intuitive Thinking, Edge Master Class 2011, Edge, Jul 17, 2011 (Illustration: Maija Hurme)

See also:

Explorations of the Mind: Intuition - The Marvels and the Flaws



Daniel Kahneman, UC Berkeley Graduate Council Lectures, Apr 2007
Daniel Kahneman on the riddle of experience vs. memory
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
A risk-perception: What You Don’t Know Can Kill You
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Map–territory relation - a brief résumé, Lapidarium
The Relativity of Truth - a brief résumé, Lapidarium
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
☞ Michael Lewis, The King of Human Error, Vanity Fair, Dec 2011.
☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity, May, 2012

Aug
6th
Sat
permalink

Existence: What is the self?

"Our sense of self is not an entity in its own right, but emerges from general purpose processes in the brain.

Seth Gillihan and Martha Farah of the University of Pennsylvania in Philadelphia have proposed a view of the self that has three strands: the physical self (which arises from our sense of embodiment); the psychological self (which comprises our subjective point-of-view, our autobiographical memories and the ability to differentiate between self and others); and a higher level sense of agency, which attributes the actions of the physical self to the psychological self (Psychological Bulletin, vol 131, p 76)

We are now uncovering some of the brain processes underlying these strands. For instance, Olaf Blanke of the Swiss Federal Institute of Technology in Lausanne and colleagues have shown that the physical sense of self is centred on the temporo-parietal cortex. It integrates information from your senses to create a sense of embodiment, a feeling of being located in a particular body in a particular place. That feeling can be spectacularly disrupted if the temporo-parietal cortex receives contradictory inputs, causing it to generate out-of-body experiences (New Scientist, 10 October 2009, p 34). (…)

Within the brain, it seems, the self is both everywhere and nowhere. “If you make a list [for what’s needed for a sense of self], there is hardly a brain region untouched,” says cognitive philosopher Thomas Metzinger of Johannes Gutenberg University in Mainz, Germany. Metzinger interprets this as meaning the self is an illusion. We are, he says, fooled by our brains into believing that we are substantial and unchanging. (…)

Studies have shown that each time we recall an episode from our past, we remember the details differently, thus altering ourselves (Physics of Life Reviews, vol 7, p 88).

So the self, despite its seeming constancy and solidity, is constantly changing. We are not the same person we were a year ago and we will be different tomorrow or a year from now. And the only reason we believe otherwise is because the brain does such a stellar job of pulling the wool over our eyes.”

Anil Ananthaswamy, a consultant editor of New Scientist in London, Existence: What is the self?, New Scientist, 04 August 2011

Prof. Dr. Thomas Metzinger: Brain, bodily awareness, and the emergence of a conscious self



“Brain, bodily awareness, and the emergence of a conscious self: these entities and their relations are explored by German philosopher and cognitive scientist Thomas Metzinger. Extensively working with neuroscientists he has come to the conclusion that, in fact, there is no such thing as a “self” — that a “self” is simply the content of a model created by our brain - part of a virtual reality we create for ourselves. But if the self is not “real,” he asks, why and how did it evolve?

How does the brain construct the self? In a series of fascinating virtual reality experiments, Metzinger and his colleagues have attempted to create so-called “out-of-body experiences” in the lab, in order to explore these questions. As a philosopher, he offers a discussion of many of the latest results in robotics, neuroscience, dream and meditation research, and argues that the brain is much more powerful than we have ever imagined. He shows us, for example, that we now have the first machines that have developed an inner image of their own body — and actually use this model to create intelligent behavior.

In addition, studies exploring the connections between phantom limbs and the brain have shown us that even people born without arms or legs sometimes experience a sensation that they do in fact have limbs that are not there. Experiments like the “rubber-hand illusion”, which demonstrate how we can experience a fake hand as part of our self and even feel a sensation of touch on the phantom hand, form the basis and testing ground for the idea that what we have called the “self” in the past is just the content of a transparent self-model in our brains.

Now, as new ways of manipulating the conscious mind-brain appear on the scene, it will soon become possible to alter our subjective reality in an unprecedented manner. The cultural consequences of this, Metzinger claims, may be immense: we will need a new approach to ethics, and we will be forced to think about ourselves in a fundamentally new way.”

Thomas Metzinger, German philosopher, Department of Philosophy at the Johannes Gutenberg University of Mainz, known for promoting consciousness studies as an academic endeavour; talk at TEDxRheinMain, 2011

See also:

"I" as linguistic construct Anattā (Pāli) or anātman (Sanskrit: अनात्मन्), Wiki
☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity, May, 2012

Jul
17th
Sun
permalink

David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain

   

How our brain constructs reality

The conscious mind—which is the part of you that flickers to life when you wake up in the morning: that bit of you—it’s like a stowaway on a transatlantic steamship, that’s taking credit for the journey, without acknowledging all the engineering underfoot.

I think what this means when we’re talking about knowing ourselves is exactly what it meant when people were trying to understand our place in the cosmos, 400 years ago, when Galileo discovered the moons of Jupiter and realized that, in fact, we’re not at the center of things, but instead we’re way out on a distant edge. That’s essentially the same situation we’re in, where we’ve fallen from the center of ourselves.

But in Galileo’s case, what that caused is we now have a much more nuanced view of the cosmos. As Carl Sagan was fond of saying, it’s more wondrous and subtle than we could have ever imagined. And I think it’s exactly the same thing going on with the brain: we’re falling from the center of the brain, but what we’re discovering is that it’s much more amazing than we could have ever thought when we imagined that we were the ones sort of at the center of everything and driving the boat. (…)

As we want to go on this journey of exploring what the heck we’re made out of, the first thing to do is to recognize that what you’re seeing out there is not actually reality. You’re not sort of opening your eyes, and voila, there’s the world. Instead, your brain constructs the world. Your brain is trapped in darkness inside of your skull, and all it ever sees are electrical and chemical signals. So all the colors you see, and so on, that doesn’t really exist; that’s an interpretation by your brain. (…)

All we’re actually doing is seeing an internal model of the world; we’re not seeing what’s out there, we’re seeing just our internal model of it. And that’s why, when you move your eyes around, all you’re doing is updating that model.

And for that matter, when you blink your eyes and there are 80 milliseconds of blackness there, you don’t notice that, either. Because it’s not actually about what’s coming in the eyes; it’s about your internal construction. And, in fact, as I mention in the book, we don’t even need our eyes to see. When you are asleep and dreaming, your eyes are closed, but you’re having full, rich visual experience —because it’s the same process of running your visual cortex, and then you believe that you are seeing. (…)

Because all the brain ever sees are these electrical and chemical signals, and it doesn’t necessarily know or care which ones are coming in through the eyes, or the ears, or the fingertips, or smell, or taste. All these things get converted just to electrical signals.

And so, it turns out what the brain is really good at—and the cortex in particular —is in extracting information that has some sort of useful correlation with things in the outside world. And so, if you feed, let’s say, visual input into your ears, you will figure out how to see through your ears. Because the brain doesn’t care how it gets there; all it cares about is, Oh, there’s structure to this data that I can extract. (…)

I think it’s sort of the most amazing thing about the way brains are built, is they’re constantly reconfiguring their own circuitry. (…)

It turns out that one of the main jobs of the brain is to save energy; and the way that it does this is by predicting what is going to come next. And if it sort of has a pretty good prediction of what’s happening next, then it doesn’t need to burn a lot of energy when that thing happens, because it’s already foreseen it. (…)

So, the job of the brain is to figure out what’s coming next; and if you have successfully done it, then there’s no point in consciousness being a part of what’s going on. (…)
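The prediction-saves-energy idea can be caricatured in a few lines. A minimal sketch – my own toy formulation, not Eagleman’s model – in which a system only “pays” for prediction errors:

```python
# Toy sketch of prediction-as-energy-saving (illustrative only):
# a running predictor accrues cost only when its forecast misses.

def surprise_cost(signal, lr=0.5):
    """Track a signal with a running prediction; cost accrues on errors."""
    prediction, total_cost = 0.0, 0.0
    for x in signal:
        error = x - prediction    # only the unpredicted part is "expensive"
        total_cost += abs(error)
        prediction += lr * error  # update the internal model
    return total_cost

familiar = [5.0] * 20                  # the well-practiced commute
novel = [5.0, 1.0, 9.0, 2.0, 8.0] * 4  # the first time you drive the route

print(f"familiar route cost: {surprise_cost(familiar):.1f}")
print(f"novel route cost:    {surprise_cost(novel):.1f}")
```

The predictable signal costs almost nothing after the first few samples; the novel one stays expensive, which is the shape of Eagleman’s driving-to-work example.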

Time perception

You’re not passively just watching the river of time flow by. Instead, just like with visual illusions, your brain is actively constructing time. (…)

When you can predict something, not only does your consciousness not come online, but it feels like it goes very fast. So, driving to work is very fast; but the very first time you did it, it seemed to take a very long time. And it’s because of the novelty and the amount of energy you had to burn the first time you did it—before you were able to predict it.

Essentially what prediction means, if it’s something you’re doing a lot, is that you’re actually reconfiguring the circuitry of the brain. You’re actually getting stuff down into the circuitry, which gives you speed and efficiency, but at the cost of conscious access. (…)

It’s not only the way we see vision and time, but it’s all of our cognition: it’s our morals, it’s what we’re attracted to, it’s what we believe in. All of these things are served up from these subterranean caverns of the mind. We often don’t have any access to what’s going on down there, and why we believe the things we do, why we act the way we do. (…)

The “illusion of truth”

You give people statements to rate the truth value of, and then you bring them back a while later and you give them more statements to say whether they’re true or false, and so on. But it turns out that if you repeat some of the statements from the first time to the second time, just because the people have heard them before, whether or not it’s true and whether or not they even marked it as false last time, because they’re hearing it again— unconsciously they know they’ve heard it before—they’re more likely to rate it as true now. (…)

I think this is part of the brain toolbox that children need: to really practice and learn skepticism and critical thinking skills. (…)

Some thoughts aren’t thinkable, because of the way that thoughts are constrained by our biology

Yes. As far as thoughts that we’re not able to think, that’s an idea that I just love to explore, because there’s all kinds of stuff we can’t see. Just as an example, if you take the electromagnetic radiation spectrum, what we call visible light is just one ten-billionth of that spectrum. So, we’re only seeing a very tiny sliver of that, because we have biological receptors that are tuned to that little part of the spectrum. But radio signals, and cell phone signals, and television signals, all that stuff is going right through your body, because you happen not to have biological receptors for that part of the spectrum.
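That “one ten-billionth” figure can be roughly sanity-checked. A sketch under stated assumptions – a linear measure over a band running from about 1e-12 m (gamma rays) to 1e3 m (long radio waves); the spectrum has no true edges, so the fraction depends entirely on the bounds you pick:

```python
# Rough check of the "one ten-billionth" claim (assumed bounds, not
# Eagleman's): compare the width of the visible band to a linear span
# of the electromagnetic spectrum from gamma rays to long radio waves.

visible_m = 700e-9 - 400e-9  # width of the visible band in metres
full_m = 1e3 - 1e-12         # assumed span of the whole spectrum

print(f"visible fraction: {visible_m / full_m:.1e}")  # ~3.0e-10
```

Under those assumptions the visible band is a few parts in ten billion of the span, the same order of magnitude as the figure quoted.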

So, what that means is that there’s a particular slice of the world that you can see. And what I wanted to explore in the book is that there’s also a slice of the world that you can think. In other words, because of evolutionary pressures, our psychology has been carved to think certain thoughts—this is the field known as evolutionary psychology—and that means there are other thoughts that are just like the cell phone signals, and radio signals, and so on, that we can’t even access.

Just as an example, try being sexually attracted to something that you’re not—like a chicken or a frog. But chickens and frogs find that to be the greatest thing in the world, to look at another chicken or frog. We only find that with humans. So, different species, which have otherwise pretty similar brains, have these very specific differences about the kinds of thoughts they can think. (…)

As far as nature vs. nurture goes, the answer nowadays is always both. It’s sort of a dead question to ask—nature vs. nurture—because it is absolutely true that we do not come to the table as a blank slate; we have a lot of stuff that we come to the table with predisposed. But the whole rest of the process is an unpacking of the brain by world experience. (…)

The brain as the team of rivals. Rational vs. emotional

So, the way your brain ends up in the end is a very complicated tangle of genetics and environment. And environment includes, not only all of your childhood experiences and so on, but your in utero environment, toxins in the air, the things that you eat, experiences of abuse, and all of that stuff—and your culture; your culture has a lot to do with the way your brain gets wired up. (…)

One of the culminating issues in the book is that your brain is really like a team of rivals, where you have these different neural subpopulations that are always battling it out to control the one-output channel of your behavior; and you’ve got all these different networks that are fighting it out. And so, there are parts of your brain that can be xenophobic, and other parts of your brain that maybe decide to overwrite that, and they’re not xenophobic. And I think this gives us a much more nuanced view, in the end, of who we are, and also who other people are. (…)

When people do neuroimaging studies, you can actually find situations where it looks like you have some parts that are doing essentially a math problem in the brain, and other parts that really care about how things feel, and how they’ll make the body feel. And you can image these different networks, and you can also see when they’re fighting one another when trying to do some sort of moral decision-making.

So, probably the best way for us to look at it is that when we talk about reason vs. emotion, we’re talking about sort of a summary—sort of a shorthand way of talking about these different neural networks. And, of course, decisions can be much more complicated than that, often. But sometimes they can be essentially boiled down to that.

It’s funny; the ancient Greeks also felt that this was the right way to divide it. Again, it’s an oversimplification, but the Greeks had this idea that life is like you’re a charioteer, and you’re holding the white horse of reason and the black horse of passion, and they’re both always trying to pull you off the road in different directions, and your job is to keep down the middle. And that’s about right. They had some insight there into that you do have these competing networks. (…)

The field of artificial intelligence

The field of artificial intelligence has become stuck, and I’m trying to figure out why. I think it’s because when programmers are trying to make a robot do something, they come up with solutions: like here’s how you find the block of wood, here’s how you grip the block of wood, here’s how you stack the block of wood, and so on. And each time they make a little subroutine to take care of a little piece of the problem; then they say, OK, good; that part’s done.

But Mother Nature never does that. Mother Nature chronically reinvents things all the time—accidentally. Just by mutation, there are always new ways to do things, like detect motion, or control muscles, or whatever it is that it’s trying to do—pick up on new energy sources, and so on. And as a result, what you have are multiple ways of solving problems in real biological creatures.

They don’t divide up neatly into little modules, the same way that a computer program does, but instead, for example, in the mammalian cortex it appears that Mother Nature probably came up with about three or four different ways to detect motion. And all of these act like parties in the neural parliament. They all sort of think that they know how to detect motion best, and they battle it out with the other parties.

And so, I think this is one of the main lessons that we get, when we look for it, in what happens when we see brain damage in people. You can lose aspects of your vision and not lose other aspects; or, often, you can get brain damage and you don’t see a deficit at all, even though you’ve just sort of bombed out part of what you would expect to give a deficit.

In other words, you have this very complicated interaction of these different parties that are battling it out. And I think they, in general, don’t divide neatly along the cortical and subcortical division, but instead, whether in lizard brains or in our brains, these networks can be made up of subcortical and cortical parts together. (…)
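That picture of redundant, differently built detectors battling it out is easy to sketch. A toy “neural parliament” – hypothetical detectors of my own devising, purely illustrative:

```python
# Toy "neural parliament" (illustrative only): redundant, differently
# built motion detectors each cast a vote; the decision is the majority,
# so knocking out any one detector degrades rather than destroys the skill.

frames = [0.0, 0.1, 0.9, 0.8]  # brightness of one pixel over time

def frame_difference(fs):  # party 1: compare successive frames
    return max(abs(b - a) for a, b in zip(fs, fs[1:])) > 0.3

def range_detector(fs):    # party 2: look at the overall spread
    return (max(fs) - min(fs)) > 0.5

def trend_detector(fs):    # party 3: sustained drift up or down
    return abs(fs[-1] - fs[0]) > 0.4

parties = [frame_difference, range_detector, trend_detector]
votes = [party(frames) for party in parties]
print("motion detected:", sum(votes) >= 2)  # majority rules
```

Remove any one party and the other two still carry the vote, which mirrors the lesion results Eagleman describes: damage often produces a partial deficit, or none at all.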

The illusion we have that we have control

There is the analogy of a young monarch who takes the throne of his country and takes credit for the glory of the country without thinking about the thousands of workers who are making it all work. And that’s essentially the situation we’re in.

Take, just as an example, when you have an idea, you say, ‘Oh, I just thought of something.’ But it wasn’t actually you that thought of it. Your brain has been working on that behind the scenes for hours or days, consolidating information, putting things together, and finally it serves up something to you. It serves up an idea; and then you take credit for it. But this whole thing leads to this very interesting question about the illusion we have that we have control. (…)

What does this mean for responsibility?

I think what it means is that when we look at something like the legal system, something like blameworthiness is actually the wrong question for us to ask. I mentioned before that brains end up being an end result of a very complicated process of genes intertwining with environment. So, in the end, when there’s a brain standing in front of the judge’s bench, it doesn’t matter for us to say, OK, well, are you blameworthy; to what extent are you blameworthy; to what extent was it your biology vs. you; because it’s not clear that there’s any meaningful difference between those two things, anyway.

I’m not saying this forgives anybody. We still have to take people off the street if they’re breaking the law. But what it means is that asking the question of blameworthiness isn’t where we should be putting our time. Instead, all we need to be doing is having a forward-looking legal system, where we say what do we do with you from here?

We don’t care how you got here, because we can’t ever know. It might have been in utero cocaine poisoning, childhood abuse, lead paint on the walls, and all of these other things that influenced your brain development, but we can’t untangle that. And it’s not anybody’s fault. It’s not your fault or anybody else’s. But we can’t do anything about it.

So, all we need to do is say: given the kind of person you are now, what is the probability of recidivism? In other words, how likely are you to carry this behavior into a future situation and re-offend? We can then predicate sentence length on that probability of re-offense. And, equally importantly, along with customized sentencing we can have customized rehabilitation.
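
A deliberately crude sketch of the forward-looking logic Eagleman describes: the sentencing plan depends only on estimated re-offense probability, not on backward-looking blame. Every threshold, duration, and program name below is invented for illustration; real risk-assessment instruments are statistical models, not three-line lookup tables.

    # Crude sketch of "forward-looking" sentencing: the plan is a function of
    # estimated re-offense probability alone, not of blameworthiness.
    # All thresholds, durations, and program names are invented for illustration.
    def sentence_plan(p_reoffend: float) -> dict:
        if not 0.0 <= p_reoffend <= 1.0:
            raise ValueError("p_reoffend must be a probability")
        if p_reoffend < 0.2:
            return {"custody_months": 0, "rehab": "community supervision"}
        if p_reoffend < 0.6:
            return {"custody_months": 12, "rehab": "customized therapy program"}
        return {"custody_months": 60, "rehab": "intensive treatment with periodic re-assessment"}

    print(sentence_plan(0.35))  # -> {'custody_months': 12, 'rehab': 'customized therapy program'}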

So, there are lots of things that go wrong with people’s brains that we can usefully address, trying to help people instead of throwing everybody in jail. As it stands now, 30% of the prison population has a mental illness. Using prisons as a de facto healthcare system is neither a humane way to treat the mentally ill nor cost-effective.

And it’s also criminogenic—meaning it causes more crime. Everybody knows that putting people in jail limits their employment opportunities and breaks up their social circles, and, more often than not, they end up coming back. So it’s very clear how the legal system should straighten itself out: make itself forward-looking, and get good at assessing risk into the future. (…)

A neural parliament

One of the really amazing lessons is this bit about being a neural parliament, and not being made up of just one thing. I think this gives us a much better view of why we can argue with ourselves, and curse at ourselves, and contract with ourselves, and why we can do things where we look back and we think, Wow, how did I do that? I’m not the kind of person who would do that.

But, in fact, you are many people. As Walt Whitman said, “I am large, I contain multitudes.” So, I think this gives us a better view of ourselves, and it also tells us ways to set up our own behavior to become the kind of people we want to be—by structuring things in our lives so that the short-term parties interested in instant gratification don’t always win the battle.”

David Eagleman, neuroscientist at Baylor College of Medicine, where he directs the Laboratory for Perception and Action and the Initiative on Neuroscience and Law, Interview with Dr. David Eagleman, Author of Incognito: The Secret Lives of the Brain, Brain Science Podcast, Episode #75, Originally Aired 7/8/2011 (transcript in pdf) (Illustration source: David Plunkert for TIME)

The brain… it makes you think. Doesn’t it?

David Eagleman: “A person is not a single entity of a single mind: a human is built of several parts, all of which compete to steer the ship of state. As a consequence, people are nuanced, complicated, contradictory. We act in ways that are sometimes difficult to detect by simple introspection. To know ourselves increasingly requires careful studies of the neural substrate of which we are composed. (…)

Raymond Tallis: Of course, everything about us, from the simplest sensation to the most elaborately constructed sense of self, requires a brain in some kind of working order. (…)

[But] we are not stand-alone brains. We are part of a community of minds, a human world, that is remote in many respects from what can be observed in brains. Even if that community ultimately originated from brains, this was the work of trillions of brains over hundreds of thousands of years: individual, present-day brains are merely the entrance ticket to the drama of social life, not the drama itself. Trying to understand the community of minds in which we participate by imaging neural tissue is like trying to hear the whispering of woods by applying a stethoscope to an acorn. (…)

David Eagleman: The uses of neuroscience depend on the question being asked. Inquiries about economies, customs, or religious wars require an examination of what transpires between minds, not just within them. Indeed, brains and culture operate in a feedback loop, each influencing the other.

Nonetheless, culture does leave its signature in the circuitry of the individual brain. If you were to examine an acorn by itself, it could tell you a great deal about its surroundings – from moisture to microbes to the sunlight conditions of the larger forest. By analogy, an individual brain reflects its culture. Our opinions on normality, custom, dress codes and local superstitions are absorbed into our neural circuitry from the social forest around us. To a surprising extent, one can glimpse a culture by studying a brain. Moral attitudes toward cows, pigs, crosses and burkas can be read from the physiological responses of brains in different cultures.

Beyond culture, there are fruitful questions to be asked about individual experience. Your experience of being human – from thoughts to actions to pathologies to sensations – can be studied in your individual brain with some benefit. With such study, we can come to understand how we see the world, why we argue with ourselves, how we fall prey to cognitive illusions, and the unconscious data-streams of information that influence our opinions.

How did I become aware enough about unawareness to write about it in Incognito? It was an unlikely feat that required millennia of scientific observation by my predecessors. An understanding of the limitations of consciousness is difficult to achieve simply by consulting our intuition. It is revealed only by study.

To be clear, this limitation does not make us equivalent to automatons. But it does give a richer understanding of the wellspring of our ideas, moral intuitions, biases and beliefs. Sometimes these internal drives are genetically embedded, other times they are culturally instructed – but in all cases their mark ends up written into the fabric of the brain. (…)

Neuroscience is uncovering a bracing view of what’s happening below the radar of our conscious awareness, but that makes your life no more “helpless, ignorant, and zombie-like” than whatever your life is now. If you were to read a cardiology book to learn how your heart pumps, would you feel less alive and more despondently mechanical? I wouldn’t. Understanding the details of our own biological processes does not diminish the awe, it enhances it. Like flowers, brains are more beautiful when you can glimpse the vast, intricate, exotic mechanisms behind them.”

David Eagleman, neuroscientist at Baylor College of Medicine, where he directs the Laboratory for Perception and Action, bestselling author

Raymond Tallis, British philosopher, secular humanist, poet, novelist, cultural critic, former professor of geriatric medicine at Manchester University

The brain… it makes you think. Doesn’t it?, The Guardian, The Observer, 29 April 2012.

See also:

Time and the Brain. Eagleman: ‘Time is not just a neuronal computation—a matter for biological clocks—but a window on the movements of the mind’
David Eagleman on the conscious mind
David Eagleman on Being Yourselves, lecture at Conway Hall, London, 10 April 2011.
The Experience and Perception of Time, Stanford Encyclopedia of Philosophy
Your brain creates your sense of self, incognito, CultureLab, Apr 19, 2011.
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Iain McGilchrist on The Divided Brain and the Making of the Western World
Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
The Relativity of Truth - a brief résumé, Lapidarium
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
☞ David Eagleman, Your Brain Knows a Lot More Than You Realize, DISCOVER Magazine, Oct 27, 2011
☞ David Eagleman, Henry Markram, Will We Ever Understand the Brain?, California Academy of Sciences San Francisco, CA, Fora.tv video, 11.02.2011
☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity, May, 2012
Mind & Brain tag on Lapidarium

Jul
15th
Fri
permalink

Thomas Metzinger on How Has The Internet Changed The Way You Think

Attention management. The ability to attend to our environment, to our own feelings, and to those of others is a naturally evolved feature of the human brain. Attention is a finite commodity, and it is absolutely essential to living a good life. We need attention in order to truly listen to others — and even to ourselves. We need attention to truly enjoy sensory pleasures, as well as for efficient learning. We need it in order to be truly present during sex, or to be in love, or when we are just contemplating nature. Our brains can generate only a limited amount of this precious resource every day. Today, the advertisement and entertainment industries are attacking the very foundations of our capacity for experience, drawing us into the vast and confusing media jungle. They are trying to rob us of as much of our scarce resource as possible, and they are doing so in ever more persistent and intelligent ways. We know all that. But here is something we are just beginning to understand — that the Internet affects our sense of selfhood, and on a deep functional level.

Consciousness is the space of attentional agency: Conscious information is exactly that information in your brain to which you can deliberately direct your attention. As an attentional agent, you can initiate a shift in attention and, as it were, direct your inner flashlight at certain targets: a perceptual object, say, or a specific feeling. In many situations, people lose the property of attentional agency, and consequently their sense of self is weakened. Infants cannot control their visual attention; their gaze seems to wander aimlessly from one object to another, because this part of their Ego is not yet consolidated. Another example of consciousness without attentional control is the non-lucid dream. In other cases, too, such as severe drunkenness or senile dementia, you may lose the ability to direct your attention — and, correspondingly, feel that your “self” is falling apart.

If it is true that the experience of controlling and sustaining your focus of attention is one of the deeper layers of phenomenal selfhood, then what we are currently witnessing is not only an organized attack on the space of consciousness per se but a mild form of depersonalization. New media environments may therefore create a new form of waking consciousness that resembles weakly subjective states — a mixture of dreaming, dementia, intoxication, and infantilization. Now we all do this together, every day. I call it Public Dreaming.”

Thomas Metzinger, German philosopher, department of philosophy at the Johannes Gutenberg University of Mainz, answering the question "How Has The Internet Changed The Way You Think?", Edge, 2010

See also:

Nicholas Carr on what the internet is doing to our brains
Does Google Make Us Stupid?
The extended mind – how the Internet affects our memories
☞ Neuroscientist Gary Small, Is Google Making Us Smarter?, (video) World Science Festival, June 2010
William Deresiewicz on multitasking and the value of solitude

Jun
24th
Fri
permalink

The Neurobiology of “We”. Relationship is the flow of energy and information between people, essential in our development


"The study of neuroplasticity is changing the way scientists think about the mind/brain connection. While they’ve known for years that the brain is the physical substrate for the mind, the central mystery of neuroscience is how the mind influences the physical structure of the brain. In the last few decades, thanks to PET and MRI imaging techniques, scientists can observe what’s actually going on in the brain while people sleep, work, make decisions, or attempt to function under limitations caused by illness, accident, or war. (…)

Dr. Daniel Siegel, Clinical Professor of Psychiatry at the UCLA School of Medicine, co-director of the Mindful Awareness Research Center, and director of the Mindsight Institute (…) [is] convinced that the “we” connection is a little-understood but powerful means for individual and societal transformation that should be taught in schools and churches, and even enter into politics.

Interpersonal neurobiology isn’t a form of therapy,” he told me, “but a form of integrating a range of scientific research into a picture of the nature of human reality. It’s a phrase I invented to account for the human effort to understand truth. We can define the mind. We can define mental health. We can base everything on science, but I want to base it on all the sciences. We’re looking for what we call ‘consilience.’ If you think of the neuroscientist as a blind man looking at only one part of an elephant, we are trying to discover the ‘whole-elephant’ view of reality.” (…)

“We is what me is!”

Our nervous system has two basic modes: it fires up or quiets down. When we’re in a reactive state, our brainstem signals the need for fight or flight. This means we’re unable to open ourselves to another person, and even neutral comments may be taken as fighting words. On the other hand, an attitude of receptivity activates a different branch of the brainstem as it sends messages to relax the muscles of the face and vocal cords, and normalizes blood pressure and heart rate. “A receptive state turns on the social engagement system that connects us to others,” Siegel explains in his recent book, Mindsight. “Receptivity is our experience of being safe and seen; reactivity is our fight-flight-freeze survival reflex.” (…)

He describes the brain as part of “an embodied nervous system, a physical mechanism through which both energy and information flow to influence relationship and the mind.” He defines relationship as “the flow of energy and information between people.” Mind is “an embodied and relational process that regulates the flow of energy and information, consciousness included. Mind is shared between people. It isn’t something you own; we are profoundly interconnected. We need to make maps of we because we is what me is!” (…)

[Siegel]: “We now know that integration leads to health and harmony. We can re-envision the DSM symptoms as examples of syndromes filled with chaos and rigidity, conditions created when integration is impaired. So we can define mental health as the ability to monitor ourselves and modify our states so that we integrate our lives. Then things that appeared unchangeable can actually be changed.” (…)

Relationships, mind and brain aren’t different domains of reality—they are each about energy and information flow. The mechanism is the brain; subjective impressions and consciousness are mind. The regulation of energy and information flow is a function of mind as an emergent process emanating from both relationships and brain. Relationships are the way we share this flow. In this view, the emergent process we are calling “mind” is located in the body (nervous system) and in our relationships. Interpersonal relationships that are attuned promote the growth of integrative fibers in the brain. It is these regulatory fibers that enable the embodied brain to function well and for the mind to have a deep sense of coherence and well-being. Such a state also creates the possibility of a sense of being connected to a larger world. The natural outcome of integration is compassion, kindness, and resilience.” (…)

“Everything we experience, memory or emotion or thought, is part of a process, not a place in the brain! Energy is the capacity to do stuff. There’s nothing that’s not energy, even ‘mass.’ Remember E = mc²? Information is literally a swirl of energy in a certain pattern that has a symbolic meaning; it stands for something other than itself. Information should be a verb; mind, too—as in minding or informationing. And the mind is an embodied and relational emergent process that regulates the flow of energy and information.”

“We can be both an ‘I’ and part of an ‘us’”

[Siegel]: “Certain neurons can fire when someone communicates with you. They dissolve the border between you and others. These mirror neurons are a hardwired system designed for us to see the mind-state of another person. That means we can learn easily to dance, but also to empathize with another. They automatically and spontaneously pick up information about the intentions and feelings of those around us, creating emotional resonance and behavioral imitation as they connect our internal state with those around us, even without the participation of our conscious mind.” And in Mindsight: “Mirror neurons are the antennae that pick up information about the intentions and feelings of others.… Right hemisphere signals [are those] the mirror neuron system uses to simulate the other within ourselves and to construct a neural map of our interdependent sense of a ‘self.’ It’s how we can be both an ‘I’ and part of an ‘us.’” (…)

So how can we re-shape our brain to become more open and receptive to others? We already know the brain receives input from the senses and gives it meaning, he points out. That’s how blind people find ways to take in information and map out their world. According to Siegel, they do this on secondary pathways rather than the main highways of the brain. That’s a major key to how we can bring about change: “You can take an adult brain in whatever state it’s in and change a person’s life by creating new pathways,” he affirms. “Since the cortex is extremely adaptable and many parts of the brain are plastic, we can unmask dormant pathways we don’t much use and develop them. A neural stem cell is a blob, an undifferentiated cell in the brain that divides into two every twenty-four hours. In eight–ten weeks, it will become established as a specialized neural cell and exist as a part of an interconnected network. How we learn has everything to do with linking wide areas of the brain with each other.”

He calls the prefrontal cortex “the portal through which interpersonal relations are established.” He demonstrates, by closing his hand over his thumb, how this little tiny piece of us (the last joint of the two middle fingers) is especially important because it touches all three major parts of our brain: the cortex, limbic area, and brainstem as well as the body-proper. “It’s the middle prefrontal fibers which map out the internal states of others,” he adds. “And they do this not only within one brain, mine, but also between two brains, mine and yours, and even among many brains. The brain is exquisitely social, and emotions are its fundamental language. Through them we become integrated and develop an emergent resonance with the internal state of the other.” (…)

“Relationship is key,” he emphasizes. “When we work with relationship, we work with brain structure. Relationship stimulates us and is essential in our development. People rarely mention relationship in brain studies, but it provides vital input to the brain. Every form of psychotherapy that works, works because it creates healthier brain function and structure.… In approaching our lives, we can ask where do we experience the chaos or rigidity that reveal where integration is impaired. We can then use the focus of our attention to integrate both our brain and our relationships. Ultimately we can learn to be open in an authentic way to others, and to ourselves. The outcome of such an integrative presence is not only a sense of deep well-being and compassion for ourselves and others, but also an opening of the doors of awareness to a sense of the interdependence of everything. ‘We’ are indeed a part of an interconnected whole.””

— Patty de Llosa, author, The Neurobiology of “We”, Parabola Magazine, 2011, Daniel Siegel, Clinical Professor of Psychiatry at the UCLA School of Medicine, co-director of the Mindful Awareness Research Center. (Illustration source)

Jun
16th
Thu
permalink

The Future of Science…Is Art?

Leonardo Da Vinci, Study for an angel’s face from The Virgin of the Rocks, ca. 1483

"This pencil study stunningly illustrates for me a key parallel between science and the arts: They strive for representation and expression, to capture some essential truth about a chosen subject with simplicity and economy. My equations and diagrams are no more the world I’m trying to describe than the artist’s pencil strokes are the woman he drew. However, it shows what’s possible, despite that limitation. The woman that emerges from the simple pencil strokes is so alive that she stares into your soul. In attempting to capture the universe, I mustn’t confuse my equations with the real thing, but from them some essential truths about nature will spring forth, transcending the mathematics and coming to life."

— Clifford Johnson, Physicist, University of Southern California © Alinari Archives/Corbis

"Physics is a form of insight, and as such, it’s a form of art."

David Bohm, American-born British quantum physicist who made contributions in the fields of theoretical physics, philosophy and neuropsychology, and to the Manhattan Project (1917-1992)

"Science needs the arts. We need to find a place for the artist within the experimental process, to rediscover what Bohr observed when he looked at those cubist paintings. The current constraints of science make it clear that the breach between our two cultures is not merely an academic problem that stifles conversation at cocktail parties. Rather, it is a practical problem, and it holds back science’s theories. If we want answers to our most essential questions, then we will need to bridge our cultural divide. By heeding the wisdom of the arts, science can gain the kinds of new insights and perspectives that are the seeds of scientific progress. (…)

Since its inception in the early 20th century, neuroscience has succeeded in becoming intimate with the brain. Scientists have reduced our sensations to a set of discrete circuits. They have imaged our cortex as it thinks about itself, and calculated the shape of ion channels, which are machined to subatomic specifications.

And yet, despite this vast material knowledge, we remain strangely ignorant of what our matter creates. We know the synapse, but don’t know ourselves. In fact, the logic of reductionism implies that our self-consciousness is really an elaborate illusion, an epiphenomenon generated by some electrical shudder in the frontal cortex. There is no ghost in the machine; there is only the vibration of the machinery. Your head contains 100 billion electrical cells, but not one of them is you, or knows or cares about you. In fact, you don’t even exist. The brain is nothing but an infinite regress of matter, reducible to the callous laws of physics. (…)

Neuroscience excels at unraveling the mind from the bottom up. But our self-consciousness seems to require a top-down approach. As the novelist Richard Powers wrote, “If we knew the world only through synapses, how could we know the synapse?” The paradox of neuroscience is that its astonishing progress has exposed the limitations of its paradigm, as reductionism has failed to solve our emergent mind. Much of our experience remains outside its range.

This world of human experience is the world of the arts. The novelist and the painter and the poet embrace those ephemeral aspects of the mind that cannot be reduced, or dissected, or translated into the activity of an acronym. They strive to capture life as it’s lived. As Virginia Woolf put it, the task of the novelist is to “examine for a moment an ordinary mind on an ordinary day…[tracing] the pattern, however disconnected and incoherent in appearance, which each sight or incident scores upon the consciousness.” She tried to describe the mind from the inside.

Neuroscience has yet to capture this first-person perspective. Its reductionist approach has no place for the “I” at the center of everything. It struggles with the question of qualia. Artists like Woolf, however, have been studying such emergent phenomena for centuries, and have amassed a large body of knowledge about such mysterious aspects of the mind. They have constructed elegant models of human consciousness that manage to express the texture of our experience, distilling the details of real life into prose and plot. That’s why their novels have endured: because they feel true. And they feel true because they capture a layer of reality that reductionism cannot.”

Jonah Lehrer, American science journalist, The Future of Science…Is Art?, SEED, Jan 16, 2008

See also:

Werner Herzog and Lawrence Krauss on Connecting Science and Art
Richard Feynman and Jirayr Zorthian on science, art and beauty
Piet Hein on Art and Science
Art and Science tag on Lapidarium

May
27th
Fri
permalink

Nicholas Humphrey on the mystery of private conscious qualia

Nicholas Humphrey, School Professor, London School of Economics; Professor of Psychology, New School for Social Research; Author, Seeing Red, answering the question ‘What is your formula? Your equation, algorithm?’ in Formulae for the 21st century, Edge, Oct 13, 2007