Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Oct 28th, Mon

Douglas Hofstadter: The Man Who Would Teach Machines to Think

"All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.”

Douglas Hofstadter, 1979, cited in Vinod K. Wadhawan, Complexity Explained: 17. Epilogue, Nirmukta, 04 April 2010.

Image: M. C. Escher, Print Gallery. Hofstadter calls this Escher work a “pictorial parable for Gödel’s Incompleteness Theorem.” Why? Look to the center of the painting: is there any logical way to complete it?

"On [Douglas] Hofstadter's office wall is a somewhat tattered reproduction of one of his favorite mind-twisting M. C. Escher prints, The Print Gallery.” In it, a boy stands looking at a print depicting a town where a woman looks down from her window at the roof of a gallery in which - yes - the boy stands looking at the print. We appreciate the paradox without being thrown by it, because we are outside looking in. Something like that creates our own unfathomable feelings of self. The self, subject and object, perceiver and perceived, is everywhere in the paradox.

It is a “circling back,” the Tortoise tells Achilles, “of a complex representation of the system together with its representations of all the rest of the world.”

“It is just so hard, emotionally,” Achilles tells the Tortoise, “to acknowledge that a ‘soul’ emerges from so physical a system.” (…)

But philosophers like [Daniel] Dennett believe, with Hofstadter, that scientific answers can be found without cheating by reducing the question to a manageable scale. (…) [T]he danger of looking only at the lowest biological level is in losing sight of the essential humanity that, in Hofstadter’s view, exists in the pattern and in the paradox. “There seems to be no alternative to accepting some sort of incomprehensible quality to existence,” as Hofstadter puts it. “Take your pick.”

James Gleick, on Douglas R. Hofstadter in Exploring the Labyrinth of the Mind, The New York Times, August 21, 1983.

"In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” (…)

“Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes” and a young man’s style as “hipsterish” and on and on ceaselessly throughout your day. That’s what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build “computer models of the fundamental mechanisms of thought.”

“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.

“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.” (…)

[Hofstadter] spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e‑mails there. (Hofstadter spends four hours a day writing e‑mail. “To me,” he has said, “an e‑mail is identical to a letter, every bit as formal, as refined, as carefully written … I rewrite, rewrite, rewrite, rewrite all of my e‑mails, always.”) He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It’s like a museum for his binges, a scene out of a brainy episode of Hoarders.

“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.” (…)

He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study.

For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.” Correct speech isn’t very interesting; it’s like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is “a tip of the rabbit’s ear … a hint of a trap door.”

As the wind tunnel was to the Wright brothers, so the computer is to FARG. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter’s view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. “I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds,” Hofstadter has written, “is by modeling mental processes on computers and learning from the models’ inevitable failures.” (…)

But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.

The modern era of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.

It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”

Machine learning

The “expert systems” that had once been the field’s meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there’s a rule broken.

If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.

The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain’s complexity. So what you do instead is start with a machine so simple, it almost doesn’t work: a machine, say, that randomly spits out French words for the English words it’s given.

Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that saut comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French?

It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you’ve entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn’t quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you’ll get a feel for the knobs, and you’ll be able to produce the correct French equivalent of your English sentence.

By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that. (…)
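
The “knobs” picture above can be made concrete with a toy model. The sketch below is not Candide or any real system; it is a minimal word-translation model in Python, in the style of IBM Model 1, trained by an expectation-maximization loop on three invented sentence pairs, just to show how repeated feeding and jiggling turns co-occurrence into translation probabilities.

# Toy statistical word-translation model in the spirit of the "knobs" described above.
# The sentence pairs are made up; a real system would use millions of them.
from collections import defaultdict

pairs = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a house", "une maison"),
]

english_vocab = {e for en, _ in pairs for e in en.split()}
# The "knobs": t[f][e], the probability that English word e produces French word f.
t = defaultdict(lambda: defaultdict(lambda: 1.0 / len(english_vocab)))

for _ in range(10):                        # feed, try, jiggle -- repeatedly
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for en, fr in pairs:
        en_words, fr_words = en.split(), fr.split()
        for f in fr_words:
            norm = sum(t[f][e] for e in en_words)
            for e in en_words:
                c = t[f][e] / norm         # how much credit e gets for producing f
                count[f][e] += c
                total[e] += c
    for f in count:                        # re-normalize: the actual knob adjustment
        for e in count[f]:
            t[f][e] = count[f][e] / total[e]

print(max(t["maison"], key=t["maison"].get))   # -> house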

Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a “knowledge graph” that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work. (…)

Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called “Why Is J. D. Salinger’s The Catcher in the Rye a Great Novel?” He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn’t like Holden—they thought he was a whiner—Hofstadter explained that “they may not recognize his vulnerability.” You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. “I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.” (…)

“Ars longa, vita brevis,” Hofstadter likes to say. “I just figure that life is short. I work, I don’t try to publicize. I don’t try to fight.”

There’s an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. “Not a soul,” Hofstadter says. “Einstein was completely alone in his belief in the existence of light as particles—for 18 years.

“That must have been very lonely.”

— James Somers, The Man Who Would Teach Machines to Think, The Atlantic, Oct 23, 2013

Douglas Hofstadter is an American professor of cognitive science whose research focuses on the sense of “I”, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979. It won both the Pulitzer Prize for general non-fiction and the National Book Award for Science.

See also:

The Mathematical Art of M.C. Escher, Lapidarium notes

Jul 1st, Mon

Why It’s Good To Be Wrong. David Deutsch on Fallibilism


"That human beings can be mistaken in anything they think or do is a proposition known as fallibilism. (…)

The trouble is that error is a subject where issues such as logical paradox, self-reference, and the inherent limits of reason rear their ugly heads in practical situations, and bite.

Paradoxes seem to appear when one considers the implications of one’s own fallibility: A fallibilist cannot claim to be infallible even about fallibilism itself. And so, one is forced to doubt that fallibilism is universally true. Which is the same as wondering whether one might be somehow infallible—at least about some things. For instance, can it be true that absolutely anything that you think is true, no matter how certain you are, might be false?

What? How might we be mistaken that two plus two is four? Or about other matters of pure logic? That stubbing one’s toe hurts? That there is a force of gravity pulling us to earth? Or that, as the philosopher René Descartes argued, “I think, therefore I am”?

When fallibilism starts to seem paradoxical, the mistakes begin. We are inclined to seek foundations—solid ground in the vast quicksand of human opinion—on which one can try to base everything else. Throughout the ages, the false authority of experience and the false reassurance of probability have been mistaken for such foundations: “No, we’re not always right,” your parents tell you, “just usually.” They have been on earth longer and think they have seen this situation before. But since that is an argument for “therefore you should always do as we say,” it is functionally a claim of infallibility after all. Moreover, look more closely: It claims literal infallibility too. Can anyone be infallibly right about the probability that they are right? (…)

The fact is, there’s nothing infallible about “direct experience” (…). Indeed, experience is never direct. It is a sort of virtual reality, created by our brains using sketchy and flawed sensory clues, given substance only by fallible expectations, explanations, and interpretations. Those can easily be more mistaken than the testimony of the passing hobo. If you doubt this, look at the work of psychologists Christopher Chabris and Daniel Simons, and verify by direct experience the fallibility of your own direct experience. Furthermore, the idea that your reminiscences are infallible is also heresy by the very doctrine that you are faithful to.

I’ll tell you what really happened. You witnessed a dress rehearsal. The real ex cathedra ceremony was on the following day. In order not to make the declaration a day early, they substituted for the real text (which was about some arcane theological issue, not gravity) a lorem-ipsum-type placeholder that they deemed so absurd that any serious listener would immediately realize that that’s what it was. 

And indeed, you did realize this; and as a result, you reinterpreted your “direct experience,” which was identical to that of witnessing an ex cathedra declaration, as not being one. Precisely by reasoning that the content of the declaration was absurd, you concluded that you didn’t have to believe it. Which is also what you would have done if you hadn’t believed the infallibility doctrine.

You remain a believer, serious about giving your faith absolute priority over your own “unaided” reason (as reason is called in these contexts). But that very seriousness has forced you to decide first on the substance of the issue, using reason, and only then whether to defer to the infallible authority. This is neither fluke nor paradox. It is simply that if you take ideas seriously, there is no escape, even in dogma and faith, from the obligation to use reason and to give it priority over dogma, faith, and obedience. (…)

It is hard to contain reason within bounds. If you take your faith sufficiently seriously you may realize that it is not only the printers who are fallible in stating the rules for ex cathedra, but also the committee that wrote down those rules. And then that nothing can infallibly tell you what is infallible, nor what is probable. It is precisely because you are fallible, and have no infallible access to the infallible authority, no infallible way of interpreting what the authority means, and no infallible means of identifying an infallible authority in the first place, that infallibility cannot help you before reason has had its say.

A related useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith, in fact believe falsehoods. Hence, faith is insufficient for true belief. As the Nobel-Prize-winning biologist Peter Medawar said: “the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

You know that Medawar’s advice holds for all ideas, not just scientific ones, and, by the same argument, to all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love. (…)

This logic of fallibility, discovered and rediscovered from time to time, has had profound salutary effects in the history of ideas. Whenever anything demands blind obedience, its ideology contains a claim of infallibility somewhere; but wherever someone believes seriously enough in that infallibility, they rediscover the need for reason to identify and correctly interpret the infallible source. Thus the sages of ancient Judaism were led, by the assumption of the Bible’s infallibility, to develop their tradition of critical discussion. And in an apparently remote application of the same logic, the British constitutional doctrine of “parliamentary sovereignty” was used by 20th-century judges such as Lord Denning to develop an institution of judicial review similar to that which, in the United States, had grown out of the opposite doctrine of “separation of powers.”

Fallibilism has practical consequences for the methodology and administration of science, and in government, law, education, and every aspect of public life. The philosopher Karl Popper elaborated on many of these. He wrote:

The question about the sources of our knowledge … has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place. (…)

Popper’s answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is another apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas, by eliminating errors from existing ones.

“Our whole problem,” said the physicist John Wheeler, “is to make the mistakes as fast as possible.” This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should “let our theories die in our place,” as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike “botched;” courts should not call every medical tragedy malpractice, even if it’s true that they “shouldn’t have happened” in the sense that lessons can be learned to prevent them from happening again. “We are all alike,” as Popper remarked, “in our infinite ignorance.” And this is a good and hopeful thing, for it allows for a future of unbounded improvement.

Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond “oh dear, I’ll just have to die, then.” (…)

The theory of knowledge is a tightrope that is the only path from A to B, with a long, hard drop for anyone who steps off on one side into “knowledge is impossible, progress is an illusion” or on the other side into “I must be right, or at least probably right.” Indeed, infallibilism and nihilism are twins. Both fail to understand that mistakes are not only inevitable, they are correctable (fallibly). Which is why they both abhor institutions of substantive criticism and error correction, and denigrate rational thought as useless or fraudulent. They both justify the same tyrannies. They both justify each other.

I must now apologize for trying to trick you earlier: All the ideas that I suggested we might know infallibly are in fact falsehoods. “Two plus two” of course isn’t “four” as you’d discover if you wrote “2+2” in an arithmetic test when asked to add two and two. If we were infallible about matters of pure logic, no one would ever fail a logic test either. Stubbing your toe does not always hurt if you are focused on some overriding priority like rescuing a comrade in battle. And as for knowing that “I” exist because I think—note that your knowledge that you think is only a memory of what you did think, a second or so ago, and that can easily be a false memory. (For discussions of some fascinating experiments demonstrating this, see Daniel Dennett’s book Brainstorms.) Moreover, if you think you are Napoleon, the person you think must exist because you think, doesn’t exist.

And the general theory of relativity denies that gravity exerts a force on falling objects. The pope would actually be on firm ground if he were to concur with that ex cathedra. Now, are you going to defer to my authority as a physicist about that? Or decide that modern physics is a sham? Or are you going to decide according to whether that claim really has survived all rational attempts to refute it?”

David Deutsch, a British physicist and non-stipendiary Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford, Why It’s Good To Be Wrong, Nautilus, 2013. (Illustration by Gérard DuBois)

See also:

David Deutsch: A new way to explain explanation, TED, 2009
David Deutsch on knowledge as crafted self-similarity
David Deutsch on Artificial Intelligence

Mar 27th, Wed

Hilary Putnam - ‘A philosopher in the age of science’


"Imagine two scientists are proposing competing theories about the motion of the moon. One scientist argues that the moon orbits the earth at such and such a speed due to the effects of gravity and other Newtonian forces. The other, agreeing to the exact same observations, argues that behind Newtonian forces there are actually undetectable space-aliens who are using sophisticated tractor beams to move every object in the universe. No amount of observation will resolve this conflict. They agree on every observation and measurement. One just has a more baroque theory than the other. Reasonably, most of us think the simpler theory is better.

But when we ask why this theory is better, we find ourselves resorting to things that are patently non-factual. We may argue that theories which postulate useless entities are worse than simpler ones—citing the value of simplicity. We may argue that the space-alien theory contradicts too many other judgements—citing the value of coherence. We can give a whole slew of reasons why one theory is better than another, but there is no rulebook out there for scientists to point to which resolves the matter objectively. Even appeals to the great pragmatic value of the first theory or arguments that point out the lack of explanatory and predictive power of the space-alien theory, are still appeals to a value. No amount of observation will tell you why being pragmatic makes one theory better—it is something for which you have to argue. No matter what kind of fact we are trying to establish, it is going to be inextricably tied to the values we hold. (…)

In [Hilary Putnam’s] view, there is no reason to suppose that a complete account of reality can be given using a single set of concepts. That is, it is not possible to reduce all types of explanation to one set of objective concepts. Suppose I say, “Keith drove like a maniac” and you ask me why. We would usually explain the event in terms of value-laden concepts like intention, emotion, and so on—“Keith was really stressed out”—and this seems to work perfectly fine. Now we can also take the exact same event and describe it using an entirely different set of scientific concepts— say “there was a chain of electrochemical reactions from this brain to this foot” or “there was x pressure on the accelerator which caused y torque on the wheels.” These might be true descriptions, but they simply don’t give us the whole or even a marginally complete picture of Keith driving like a maniac. We could describe every single relevant physical detail of that event and still have no explanation. Nor, according to Putnam, should we expect there to be. The full scope of reality is simply too complex to be fully described by one method of explanation.

The problem with all of this, and one that Putnam has struggled with, is what sort of picture of reality we are left with once we accept these three central arguments: the collapse of the fact-value dichotomy, the truth of semantic externalism and conceptual relativity. (…)

We could—like Putnam before the 1970s—become robust realists and simply accept that values and norms are no less a part of the world than elementary particles and mathematical objects. We could—like Putnam until the 1990s—become “internal realists” and, in a vaguely Kantian move, define reality in terms of mind-dependent concepts and idealised rational categories. Or we could adopt Putnam’s current position—a more modest realism which argues that there is a mind-independent world out there and that it is compatible with our ordinary human values. Of course Putnam has his reasons for believing what he does now, and they largely derive from his faith in our ability to represent reality correctly. But the strength of his arguments convincing us to be wary of the scientific stance leaves us with little trust left in it.”

A philosopher in the age of science, Prospect, March 14, 2013. [Hilary Putnam — American philosopher, mathematician and computer scientist who has been a central figure in analytic philosophy since the 1960s, currently Cogan University Professor Emeritus at Harvard University.]

Aug 20th, Mon

The Human Condition by René Magritte (1933)


“If one looks at a thing with the intention of trying to discover what it means, one ends up no longer seeing the thing itself, but thinking of the question that has been raised. The mind sees in two different senses: (1) sees, as with the eyes; and (2) sees a question (no eyes).”

— René Magritte, cited in Humanist, Volume 84, Issues 1-6, Rationalist Press Association Ltd., Jan 1, 1969, p.176.

“I have found a new potential inherent in things — their ability to gradually become something else. This seems to me to be something quite different from a composite object, since there is no break between the two substances.”

— René Magritte, cited in Art History. About.com

“We are surrounded by curtains. We only perceive the world behind a curtain of semblance. At the same time, an object needs to be covered in order to be recognized at all.”

— René Magritte, cited in Art History. About.com

“An object is not so attached to its name that we cannot find another one that would suit it better.”

— René Magritte, cited in La Révolution surréaliste, 1927

René Magritte in his letter to A. Chavee (Sept. 30, 1960) said about the painting:

"In front of a window seen from inside a room, I placed a painting representing exactly that portion of the landscape covered by the painting. Thus, the tree in the picture hid the tree behind it, outside the room. For the spectator, it was both inside the room within the painting and outside in the real landscape.

Which is how we see the world, namely, outside of us; although having only one representation of it within us. Similarly we sometimes remember a past event as being in the present. Time and space lose meaning and our daily experience becomes paramount.

Questions such as ‘What does this picture mean, what does it represent?’ are possible only if one is incapable of seeing a picture in all its truth, only if one automatically understands that a very precise image does not show precisely what it is. It’s like believing that the implied meaning (if there is one?) is worth more than the overt meaning. (…)

How can anyone enjoy interpreting symbols? They are ‘substitutes’ that are only useful to a mind that is incapable of knowing the things themselves. A devotee of interpretation cannot see a bird; he only sees it as a symbol. Although this manner of knowing the ‘world’ may be useful in treating mental illness, it would be silly to confuse it with a mind that can be applied to any kind of thinking at all.”

***

"Magritte was heavily influenced by the writings of Immanuel Kant, who proposed that humans can rationalize situations but can not comprehend the “things-in-themselves.” As it applies to Magritte’s work, he is simply creating a variation upon his over-arching philosophy: A painting of a scene is not the same as a scene. Ceci n’est pas une pipe.

Magritte plays with this philosophy by exploiting the flatness of two-dimensional space in his painting, depicting three-dimensional space outside the window and a two-dimensional painting that share the same imagery. The title refers to the inherent grappling that all humans go through when viewing his mind-bending painting.”

René Magritte, Belgian surrealist artist (1898-1967), One Surrealist a Day. (Illustration: René Magritte, The Human Condition (1933), Oil on canvas, National Gallery of Art, Washington DC)

See also:

Map–territory relation- a brief résumé, Lapidarium notes

Jul 23rd, Mon

S. Hawking, L. Mlodinow on why is there something rather than nothing and why are the fundamental laws as we have described them


"According to the idea of model-dependent realism, our brains interpret the input from our sensory organs by making a model of the outside world. We form mental concepts of our home, trees, other people, the electricity that flows from wall sockets, atoms, molecules, and other universes. These mental concepts are the only reality we can know. There is no model-independent test of reality. It follows that a well-constructed model creates a reality of its own. An example that can help us think about issues of reality and creation is the Game of Life, invented in 1970 by a young mathematician at Cambridge named John Conway.

The word “game” in the Game of Life is a misleading term. There are no winners and losers; in fact, there are no players. The Game of Life is not really a game but a set of laws that govern a two dimensional universe. It is a deterministic universe: Once you set up a starting configuration, or initial condition, the laws determine what happens in the future.

The world Conway envisioned is a square array, like a chessboard, but extending infinitely in all directions. Each square can be in one of two states: alive (shown in green) or dead (shown in black). Each square has eight neighbors: the up, down, left, and right neighbors and four diagonal neighbors. Time in this world is not continuous but moves forward in discrete steps. Given any arrangement of dead and live squares, the number of live neighbors determines what happens next according to the following laws:

1. A live square with two or three live neighbors survives (survival).
2. A dead square with exactly three live neighbors becomes a live cell (birth).
3. In all other cases a cell dies or remains dead. In the case that a live square has zero or one neighbor, it is said to die of loneliness; if it has more than three neighbors, it is said to die of overcrowding.

That’s all there is to it: Given any initial condition, these laws generate generation after generation. An isolated living square or two adjacent live squares die in the next generation because they don’t have enough neighbors. Three live squares along a diagonal live a bit longer. After the first time step the end squares die, leaving just the middle square, which dies in the following generation. Any diagonal line of squares “evaporates” in just this manner. But if three live squares are placed horizontally in a row, again the center has two neighbors and survives while the two end squares die, but in this case the cells just above and below the center cell experience a birth. The row therefore turns into a column. Similarly, in the next generation the column turns back into a row, and so forth. Such oscillating configurations are called blinkers.

If three live squares are placed in the shape of an L, a new behavior occurs. In the next generation the square cradled by the L will give birth, leading to a 2 × 2 block. The block belongs to a pattern type called the still life because it will pass from generation to generation unaltered. Many types of patterns exist that morph in the early generations but soon turn into a still life, or die, or return to their original form and then repeat the process. There are also patterns called gliders, which morph into other shapes and, after a few generations, return to their original form, but in a position one square down along the diagonal. If you watch these develop over time, they appear to crawl along the array. When these gliders collide, curious behaviors can occur, depending on each glider’s shape at the moment of collision.
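
The three laws above are simple enough to run directly. The following is a minimal sketch in Python (my own illustration, not from the book): live cells are kept in a set, so the board is effectively unbounded, and the blinker described above flips from a row to a column and back.

# Conway's Game of Life, using the rules quoted above.
# A set of live (x, y) cells stands in for the infinite board.
from collections import Counter

def step(live):
    """Return the next generation for a set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly 3 live neighbors, survival with 2 or 3; everything else dies.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}                     # three live squares in a row
print(sorted(step(blinker)))                           # [(1, 0), (1, 1), (1, 2)] -- the row becomes a column
print(sorted(step(step(blinker))) == sorted(blinker))  # True -- and back again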

What makes this universe interesting is that although the fundamental “physics” of this universe is simple, the “chemistry” can be complicated. That is, composite objects exist on different scales. At the smallest scale, the fundamental physics tells us that there are just live and dead squares. On a larger scale, there are gliders, blinkers, and still-life blocks. At a still larger scale there are even more complex objects, such as glider guns: stationary patterns that periodically give birth to new gliders that leave the nest and stream down the diagonal. (…)

If you observed the Game of Life universe for a while on any particular scale, you could deduce laws governing the objects on that scale. For example, on the scale of objects just a few squares across you might have laws such as “Blocks never move,” “Gliders move diagonally,” and various laws for what happens when objects collide. You could create an entire physics on any level of composite objects. The laws would entail entities and concepts that have no place among the original laws. For example, there are no concepts such as “collide” or “move” in the original laws. Those describe merely the life and death of individual stationary squares. As in our universe, in the Game of Life your reality depends on the model you employ.

Conway and his students created this world because they wanted to know if a universe with fundamental rules as simple as the ones they defined could contain objects complex enough to replicate. In the Game of Life world, do composite objects exist that, after merely following the laws of that world for some generations, will spawn others of their kind? Not only were Conway and his students able to demonstrate that this is possible, but they even showed that such an object would be, in a sense, intelligent! What do we mean by that? To be precise, they showed that the huge conglomerations of squares that self-replicate are “universal Turing machines.” For our purposes that means that for any calculation a computer in our physical world can in principle carry out, if the machine were fed the appropriate input—that is, supplied the appropriate Game of Life world environment—then some generations later the machine would be in a state from which an output could be read that would correspond to the result of that computer calculation. (…)

In the Game of Life, as in our world, self-reproducing patterns are complex objects. One estimate, based on the earlier work of mathematician John von Neumann, places the minimum size of a self-replicating pattern in the Game of Life at ten trillion squares—roughly the number of molecules in a single human cell. One can define living beings as complex systems of limited size that are stable and that reproduce themselves.

The objects described above satisfy the reproduction condition but are probably not stable: A small disturbance from outside would probably wreck the delicate mechanism. However, it is easy to imagine that slightly more complicated laws would allow complex systems with all the attributes of life. Imagine an entity of that type, an object in a Conway-type world. Such an object would respond to environmental stimuli, and hence appear to make decisions. Would such life be aware of itself? Would it be self-conscious? This is a question on which opinion is sharply divided. Some people claim that self-awareness is something unique to humans. It gives them free will, the ability to choose between different courses of action.

How can one tell if a being has free will?

If one encounters an alien, how can one tell if it is just a robot or it has a mind of its own? The behavior of a robot would be completely determined, unlike that of a being with free will. Thus one could in principle detect a robot as a being whose actions can be predicted. (…) This may be impossibly difficult if the being is large and complex. We cannot even solve exactly the equations for three or more particles interacting with each other. Since an alien the size of a human would contain about a thousand trillion trillion particles, even if the alien were a robot it would be impossible to solve the equations and predict what it would do. We would therefore have to say that any complex being has free will—not as a fundamental feature, but as an effective theory, an admission of our inability to do the calculations that would enable us to predict its actions.

The example of Conway’s Game of Life shows that even a very simple set of laws can produce complex features similar to those of intelligent life. There must be many sets of laws with this property. What picks out the fundamental laws (as opposed to the apparent laws) that govern our universe? As in Conway’s universe, the laws of our universe determine the evolution of the system, given the state at any one time. In Conway’s world we are the creators—we choose the initial state of the universe by specifying objects and their positions at the start of the game. (…)

If the total energy of the universe must always remain zero, and it costs energy to create a body, how can a whole universe be created from nothing? That is why there must be a law like gravity. Because gravity is attractive, gravitational energy is negative: One has to do work to separate a gravitationally bound system, such as the earth and moon. This negative energy can balance the positive energy needed to create matter, but it’s not quite that simple. The negative gravitational energy of the earth, for example, is less than a billionth of the positive energy of the matter particles the earth is made of. A body such as a star will have more negative gravitational energy, and the smaller it is (the closer the different parts of it are to each other), the greater this negative gravitational energy will be. But before it can become greater than the positive energy of the matter, the star will collapse to a black hole, and black holes have positive energy. That’s why empty space is stable. Bodies such as stars or black holes cannot just appear out of nothing. But a whole universe can.
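
The “less than a billionth” figure can be checked to order of magnitude. This is my own back-of-the-envelope sketch, not a calculation from the book; it assumes the textbook uniform-sphere binding-energy formula U = 3GM^2/(5R), and since the real Earth is centrally condensed, the true value is somewhat larger, but the comparison with the rest-mass energy is unaffected.

# Order-of-magnitude check of the "less than a billionth" claim.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
R = 6.371e6      # mean radius of the Earth, m
c = 2.998e8      # speed of light, m/s

U_grav = 3 * G * M**2 / (5 * R)    # magnitude of Earth's gravitational self-energy, ~2e32 J
E_mass = M * c**2                  # positive energy of Earth's matter, ~5e41 J

print(U_grav / E_mass)             # ~4e-10, i.e. less than a billionth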

Because gravity shapes space and time, it allows space-time to be locally stable but globally unstable. On the scale of the entire universe, the positive energy of the matter can be balanced by the negative gravitational energy, and so there is no restriction on the creation of whole universes. Because there is a law like gravity, the universe can and will create itself from nothing. (…) Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.

Why are the fundamental laws as we have described them?

The ultimate theory must be consistent and must predict finite results for quantities that we can measure. We’ve seen that there must be a law like gravity, and we saw in Chapter 5 that for a theory of gravity to predict finite quantities, the theory must have what is called supersymmetry between the forces of nature and the matter on which they act. M-theory is the most general supersymmetric theory of gravity. For these reasons M-theory is the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself. We must be part of this universe, because there is no other consistent model.

M-theory is the unified theory Einstein was hoping to find. The fact that we human beings—who are ourselves mere collections of fundamental particles of nature—have been able to come this close to an understanding of the laws governing us and our universe is a great triumph. But perhaps the true miracle is that abstract considerations of logic lead to a unique theory that predicts and describes a vast universe full of the amazing variety that we see. If the theory is confirmed by observation, it will be the successful conclusion of a search going back more than 3,000 years. We will have found the grand design.”

Stephen Hawking, British theoretical physicist and author, Leonard Mlodinow, The Grand Design, Random House, 2010.

See also:

Stephen Hawking on the universe’s origin
☞ Tim Maudlin, What Happened Before the Big Bang? The New Philosophy of Cosmology
Vlatko Vedral: Decoding Reality: the universe as quantum information
The Concept of Laws. The special status of the laws of mathematics and physics
Raphael Bousso: Thinking About the Universe on the Larger Scales
Lisa Randall on the effective theory

Jun 20th, Wed

The crayola-fication of the world: How we gave colors names, and it messed with our brains


"We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language (…) all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar”

Benjamin Whorf, Science and linguistics, first published in 1940 in MIT Technology Review [See also: linguistic relativity]

"The implication is that language may affect how we see the world. Somehow, the linguistic distinction between blue and green may heighten the perceived difference between them. (…)

If you have a word to distinguish two colors, does that make you any better at telling them apart? More generally, does the linguistic baggage that we carry affect how we perceive the world? This study was designed to address Whorf’s idea head on.

As it happens, Whorf was right. Or rather, he was half right.

The researchers found that there is a real, measurable difference in how we perform on these two tasks. In general, it takes less time to identify that odd blue square compared to the odd green. This makes sense to anyone who’s ever tried looking for a tennis ball in the grass. It’s not that hard, but I’d rather the ball be blue. In one case you are jumping categories (blue versus green), and in the other, staying with a category (green versus green).

However, and this is where things start to get a bit odd, this result only holds if the differently colored square was in the right half of the circle. If it was in the left half (…), then there’s no difference in reaction times – it takes just as long to spot the odd blue as the odd green.  It seems that color categories only matter in the right half of your visual field! (…)

It’s easier to tell apart colors with different names, but only if they are to your right. Keep in mind that this is a very subtle effect, the difference in reaction time is a few hundredths of a second.

So what’s causing this lopsidedness?  Well, if you know something about how the brain works, you might have already guessed. The crucial point is that everything that we see in the right half of our vision is processed in the left hemisphere of our brain, and everything we see in the left half is processed by the right hemisphere. And for most of us, the left brain is stronger at processing language. So perhaps the language savvy half of our brain is helping us out.


It’s not just English speakers that show this asymmetry. Koreans are familiar with the colors yeondu and chorok. An English speaker would call them both green (yeondu perhaps being a more yellowish green). But in Korean it’s not a matter of shade, they are both basic colors. There is no word for green that includes both yeondu and chorok.

To the left of the dotted line is yeondu, and to the right chorok. Is it still as easy to spot the odd square in the circle?

And so imagine taking the same color ID test, but this time with yeondu and chorok instead of blue and green. A group of researchers ran this experiment. They discovered that among those who were the fastest at identifying the odd color, English speakers showed no left brain / right brain distinction, whereas Korean speakers did. It’s plausible that their left brain was attuned to the distinction between yeondu and chorok.

But how do we know that language is the key here? Back to the previous study. The researchers repeated the color circle experiment, but this time threw in a verbal distraction. The subjects were asked to memorize a word before each color test. The idea was to keep their language circuits distracted. And at the same time, other subjects were shown an image to memorize, not a word. In this case, it’s a visual distraction, and the language part of the brain needn’t be disturbed.

They found that when you’re verbally distracted, it suddenly becomes harder to separate blue from green (you’re slower at straddling color categories). In fact the results showed that people found this more difficult than separating two shades of green. However, if the distraction is visual, not verbal, things are different. It’s easy to spot the blue among green, so you’re faster at straddling categories.

All of this is only true for your left brain. Meanwhile, your right brain is rather oblivious to these categories (until, of course, the left brain bothers to inform it). The conclusion is that language is somehow enhancing your left brain’s ability to discern different colors with different names. Cultural forces alter our perception in ever so subtle a way, by gently tugging our visual leanings in different directions. Oddly enough, Whorf was right, but only when it comes to half your brain.
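
To make the design of these experiments concrete, here is a hypothetical sketch of how such reaction times could be tabulated by condition; the trial records and numbers below are invented for illustration and are not data from the studies cited.

# Hypothetical tabulation of reaction times from a color-search task like the one
# described above: each trial records the visual field of the odd square and whether
# it crossed a color-name boundary. All numbers are made up.
trials = [
    {"field": "right", "cross_category": True,  "rt": 0.41},
    {"field": "right", "cross_category": False, "rt": 0.44},
    {"field": "left",  "cross_category": True,  "rt": 0.45},
    {"field": "left",  "cross_category": False, "rt": 0.45},
]

def mean_rt(field, cross):
    rts = [t["rt"] for t in trials
           if t["field"] == field and t["cross_category"] == cross]
    return sum(rts) / len(rts)

for field in ("right", "left"):
    advantage = mean_rt(field, False) - mean_rt(field, True)
    print(f"{field} visual field: cross-category advantage = {advantage * 1000:.0f} ms")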

Imagine a world without color names. You lived in such a world once, when you were an infant. Do you remember what it was like? Anna Franklin is a psychologist who is particularly interested in where color categories come from. She studies color recognition in infants, as a window into how the brain organizes color.

Here she is discussing her work in this incredible clip from a BBC Horizon documentary called ‘Do you see what I see?‘. (…) It starts off with infants, and then cuts to the Himba tribe who have a highly unusual color naming system. You’ll see them taking the color wheel test, with very surprising results.

Surprisingly, many children take a remarkably long time to learn their color names. By the time they can name dozens of objects, they still struggle with basic colors. A two year old may know that a banana is yellow or an apple is red, but if you show them a blue cup, odds are even that they’ll call it red. And this confusion can persist even after encountering hundreds of examples, until as late as the age of four. There have been studies that show that very young sighted children are as likely to identify a color correctly as blind children of the same age. They rely on their experience, rather than recognize the color outright. (…)

The big question is when children learn their color words, does their perception of the world change? Anna Franklin (who we met in the video above) and colleagues took on this question. Working with toddlers aged two to four, they split them into two groups. There were the namers, who could reliably distinguish blue from green, and the politely-named learners, who couldn’t. The researchers repeated the color circle experiment on these children. Rather than have them press a button (probably not a good idea), they tracked the infants’ eyes to see how long it took them to spot the odd square. (…)

As toddlers learn the names of colors, a remarkable transformation is taking place inside their heads. Before they learn their color names, they are better at distinguishing color categories in their right brain (Left Visual Field). In a sense, their right brain understands the difference between blue and green, even before they have the words for it. But once they acquire words for blue and green, this ability jumps over to the left brain (Right Visual Field).

Think about what that means. As infant brains are rewiring themselves to absorb our visual language, the seat of categorical processing jumps hemispheres from the right brain to the left. And it stays here throughout adulthood. Their brains are furiously re-categorizing the world, until mysteriously, something finally clicks into place. So the next time you see a toddler struggling with their colors, don’t be like Darwin, and cut them some slack. They’re going through a lot.”

Aatish Bhatia, Ph.D. at Rutgers University, The crayola-fication of the world: How we gave colors names, and it messed with our brains (part II), Empirical Zeal, June 11, 2012. (Illustration by Scott Campbell).

See also:

☞ Regier, T., & Kay, P. (2009). Language, thought, and color: Whorf was half right. Trends in Cognitive Sciences, 13(10), 439-446.
☞ Gilbert AL, Regier T, Kay P, & Ivry RB (2006), Whorf hypothesis is supported in the right visual field but not the left, Proceedings of the National Academy of Sciences of the United States of America, 103 (2), 489-94
Aatish Bhatia, The crayola-fication of the world: How we gave colors names, and it messed with our brains (part I)

"Why is the color getting lost in translation? This visual conundrum has its roots in the history of language.  (…) What really is a color? Just like the crayons, we’re taking something that has no natural boundaries – the frequencies of visible light – and dividing into convenient packages that we give a name. (…) Languages have differing numbers of color words, ranging from two to about eleven. Yet after looking at 98 different languages, they saw a pattern. It was a pretty radical idea, that there is a certain fixed order in which these color names arise. This was a common path that languages seem to follow, a road towards increasing visual diversity. (…)

Cultures are quite different in how their words paint the world. (…) For the 110 cultures, you can see how many basic words they use for colors. To the Dani people, who live in the highlands of New Guinea, objects come in just two shades. There's mili for the cooler shades, from blues and greens to black, and mola for the lighter shades, like reds, yellows and white. Some languages have just three basic color words; others have four, five, six, and so on. (…)

If you were a mantis shrimp, your rainbow would be unimaginably rich, with thousands, maybe tens of thousands of colors that blend together, stretching from deep reds all the way to the ultraviolet. To a mantis shrimp, our visual world is unbearably dull. (Another Radiolab plug: in their episode on Color, they use a choir to convey this idea through sound. A visual spectrum becomes a musical one. It's one of those little touches that makes this show genius.)”
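A toy way to see the point in the excerpt above: the visible spectrum is a continuum of wavelengths, and a color vocabulary is just a set of bins laid over it. The bin edges below are purely illustrative (real terms like the Dani mili/mola track lightness as much as hue); this is a sketch, not a claim about any actual language.

# Illustrative only: arbitrary bin edges (in nanometers) laid over the visible spectrum.
# A two-term vocabulary and a larger, English-like one carve up the same continuum differently.
VOCABULARIES = {
    "two-term":  [(380, 550, "cool/dark"), (550, 750, "warm/light")],
    "many-term": [(380, 450, "violet"), (450, 495, "blue"), (495, 570, "green"),
                  (570, 590, "yellow"), (590, 620, "orange"), (620, 750, "red")],
}

def name_color(wavelength_nm, vocabulary):
    """Return the name of the bin a wavelength falls into, for a given vocabulary."""
    for lo, hi, name in VOCABULARIES[vocabulary]:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible range"

for wl in (470, 520, 600):
    print(wl, "nm ->", name_color(wl, "two-term"), "|", name_color(wl, "many-term"))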

Color words in different languages, Fathom, Nov 8, 2012.

May
27th
Sun
permalink

Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense

       

“At the core of all well-founded belief lies belief that is unfounded.”

Ludwig Wittgenstein, On Certainty, #253, J. & J. Harper Editions, New York, 1969. 

"The value of philosophy is, in fact, to be sought largely in its very uncertainty. The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find that even the most everyday things lead to problems to which only very incomplete answers can be given.

Philosophy, though unable to tell us with certainty what is the true answer to the doubts it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never traveled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.”

Bertrand Russell, The Problems of Philosophy (1912), Cosimo, Inc., 2010, pp. 113-114.

“We say that we have some theories about science. Science is about hypothetico-deductive methods: we have observations, we have data, and the data need to be organized into theories. So then we have theories. These theories are suggested or produced from the data somehow, then checked against the data. Then time passes, we have more data, theories evolve, we throw away a theory, and we find another theory which is better, a better understanding of the data, and so on and so forth. This is a standard idea of how science works, which implies that science is about its empirical content, that the true, interesting, relevant content of science is its empirical content. Since theories change, the empirical content is the solid part of what science is. Now, there's something disturbing, for me as a theoretical scientist, in all this. I feel that something is missing. Something of the story is missing. I've been asking myself what this missing thing is. (…)

This is particularly relevant today in science, and particularly in physics, because, if I'm allowed to be polemical, in my field, in fundamental theoretical physics, we have been failing for 30 years. There hasn't been a major success in theoretical physics in the last few decades, after the standard model. Of course there are ideas. These ideas might turn out to be right. Loop quantum gravity might turn out to be right, or not. String theory might turn out to be right, or not. But we don't know, and for the moment, nature has not said yes in any sense.

I suspect that this might be in part because of the wrong ideas we have about science, and because methodologically we are doing something wrong, at least in theoretical physics, and perhaps also in other sciences.

Anaximander. Changing something in the conceptual structure that we have in grasping reality

Let me tell you a story to explain what I mean. The story is an old story about my latest, greatest passion outside theoretical physics: an ancient scientist, or so I would say, even if he is often called a philosopher: Anaximander. I am fascinated by this character, Anaximander. I went into understanding what he did, and to me he's a scientist. He did something that is very typical of science, and which shows some aspect of what science is. So what is the story with Anaximander? It's the following, in brief:

Until him, all the civilizations of the planet, everybody around the world, thought that the structure of the world was: the sky over our heads and the earth under our feet. There's an up and a down, heavy things fall from the up to the down, and that's reality. Reality is oriented up and down: heaven is up and earth is down. Then comes Anaximander and says: no, it is something else. 'The earth is a finite body that floats in space, without falling, and the sky is not just over our head; it is all around.'

How did he get there? Well, obviously, he looked at the sky: you see things going around, the stars, the heavens, the moon, the planets; everything moves around and keeps turning around us. It's sort of reasonable to think that below us is nothing, so it seems simple to get to this conclusion. Except that nobody else got to this conclusion. In centuries and centuries of ancient civilizations, nobody got there. The Chinese didn't get there until the 17th century, when Matteo Ricci and the Jesuits went to China and told them, in spite of centuries of an Imperial Astronomical Institute studying the sky. The Indians only learned this when the Greeks arrived to tell them. In Africa, in America, in Australia… nobody else got to this simple realization that the sky is not just over our head, it's also under our feet. Why?

Because obviously it's easy to suggest that the earth sort of floats in nothing, but then you have to answer the question: why doesn't it fall? The genius of Anaximander was to answer this question. We know his answer, from Aristotle and from other people. In fact, he doesn't answer the question; he questions the question. He says: why should it fall? Things fall toward the earth. Why should the earth itself fall? In other words, he realizes that the obvious generalization from every small heavy object falling, to the earth itself falling, might be wrong. He proposes an alternative, which is that objects fall towards the earth, which means that the direction of falling changes around the earth.

This means that up and down become notions relative to the earth, which is rather simple for us to figure out now: we've learned this idea. But think of the difficulty, when we were children, of understanding how people in Sydney could live upside-down; it clearly requires changing something structural in the basic language in terms of which we understand the world. In other words, up and down mean something different before and after Anaximander's revolution.

He understands something about reality, essentially by changing something in the conceptual structure that we use to grasp reality. In doing so, he is not just proposing a theory; he understands something which in some precise sense is forever. It's an uncovered truth, which to a large extent is a negative truth. He frees us from a prejudice, a prejudice that was ingrained in the conceptual structure we had for thinking about space.

Why do I think this is interesting? Because I think that this is what happens at every major step, at least in physics; in fact, I think this is what happens at every step, even the minor ones. When I give a thesis problem to students, most of the time the problem is not solved as posed. It's not solved because the solution, most of the time, does not lie within the question; it comes from questioning the question itself, from realizing that the way the problem was formulated contained some implicit prejudice or assumption, and that this was the thing to be dropped.

If this is so, the idea that we have data and theories, and then we have a rational agent that constructs theories from the data using his rationality, his mind, his intelligence, his conceptual structure, and juggles theories and data, doesn't make any sense, because what is being challenged at every step is not the theory, it's the conceptual structure used in constructing theories and interpreting the data. In other words, it's not by changing theories that we go ahead, but by changing the way we think about the world.

The prototype of this way of thinking, I think the example that makes it more clear, is Einstein's discovery of special relativity. On the one hand there was Newtonian mechanics, which was extremely successful with its empirical content. On the other hand there was Maxwell’s theory, with its empirical content, which was extremely successful, too. But there was a contradiction between the two.

If Einstein had gone to school to learn what science is, if he had read Kuhn and the philosophers explaining what science is, if he were any one of my colleagues today who are looking for a solution to the big problem of physics, what would he do?

He would say, okay, the empirical content is the strong part of the theory. The idea in classical mechanics that velocity is relative: forget about it. The Maxwell equations, forget about them. Because this is a volatile part of our knowledge. The theories themselves have to be changed, okay? What we keep solid is the data, and we modify the theory so that it makes sense coherently, and coherently with the data.

That's not at all what Einstein does. Einstein does the contrary. He takes the theories very seriously. He believes the theories. He says, look, classical mechanics is so successful that when it says that velocity is relative, we should take it seriously, and we should believe it. And the Maxwell equations are so successful that we should believe the Maxwell equations. He has so much trust in the theories themselves, in their qualitative content (that qualitative content that Kuhn says changes all the time, that we have learned not to take too seriously), so much confidence in it, that he's ready to do what? To force coherence between the two theories by challenging something completely different, something that is in our head: how we think about time.

He's changing something in common sense, something about the elementary structure in terms of which we think of the world, on the basis of his trust in the past results of physics. This is exactly the opposite of what is done today in physics. If you read Physical Review today, it's all about theories that completely and deeply challenge the content of previous theories: theories in which there is no Lorentz invariance, which are not relativistic, which are not generally covariant, in which quantum mechanics might be wrong…

Every physicist today is immediately ready to say, okay, all of our past knowledge about the world is wrong. Let’s randomly pick some new idea. I suspect that this is not a small component of the long-term lack of success of theoretical physics. You understand something new about the world, either from new data that arrive, or from thinking deeply on what we have already learned about the world. But thinking means also accepting what we’ve learned, challenging what we think, and knowing that in some of the things that we think, there may be something to modify and to change.

Science is not about the data, but about the tools that we use

What, then, are the aspects of doing science that I think are under-valued and should come up front? First, science is about constructing visions of the world, about rearranging our conceptual structure, about creating new concepts which were not there before, and even more, about changing, challenging the a priori assumptions that we have. So it has nothing to do with the assembly of data and the way of organizing the assembly of data. It has everything to do with the way we think, and with our mental vision of the world. Science is a process in which we keep exploring ways of thinking and changing our image of the world, our vision of the world, to find new ones that work a little bit better.

In doing that, what we have learned in the past is our main ingredient, especially the negative things we have learned. If we have learned that the earth is not flat, there will be no theory in the future in which the earth is 'flat.' If we have learned that the earth is not at the center of the universe, that's forever. We're not going to go back on this. If we have learned, with Einstein, that simultaneity is relative, we're not going back to absolute simultaneity, as many people think. This means that when an experiment measures neutrinos going faster than light, we should be very suspicious, and of course check and see whether there is something very deep happening. But it is absurd that everybody jumps and says okay, Einstein was wrong, just because of a little anomaly that seems to show so. It never works like that in science.

The past knowledge is always with us, and it’s our main ingredient for understanding. The theoretical ideas which are based on ‘let’s imagine that this may happen because why not’ are not taking us anywhere.

I seem to be saying two things that contradict each other. On the one hand, we trust the knowledge; on the other hand, we are always ready to modify, in depth, part of our conceptual structure about the world. There is no contradiction between the two, because the idea of a contradiction comes from what I see as the deepest misunderstanding about science, which is the idea that science is about certainty.

Science is not about certainty. Science is about finding the most reliable way of thinking at the present level of knowledge. Science is extremely reliable; it's not certain. In fact, not only is it not certain, it's the lack of certainty that grounds it. Scientific ideas are credible not because they are sure, but because they are the ones that have survived all the possible past critiques, and they are the most credible because they were put on the table for everybody's criticism.

The very expression 'scientifically proven' is a contradiction in terms. There is nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, we have prejudices. We have ingrained prejudices. In our conceptual structure for grasping reality there might be something not appropriate, something we may have to revise to understand better. So at any moment, we have a vision of reality that is effective, it's good, it's the best we have found so far. It's the most credible we have found so far; it's mostly correct.

But at the same time it's not taken for certain, and any element of it is a priori open for revision. Why do we have this continuous…? On the one hand, we have this brain, and it has evolved for millions of years. It has evolved for us basically to run on the savannah, to chase and eat deer, and to try not to be eaten by lions. We have a brain that is tuned to meters and hours, which is not particularly well tuned to think about atoms and galaxies. So we have to get out of that.

At the same time, I think we have been selected, for going out of the forest, perhaps for going out of Africa, for being as smart as possible, as animals that escape lions. This continuous effort that is part of us, to change our own way of thinking, to readapt, is very much a part of our nature. We are not changing our minds away from nature; it is our natural history that continues to change in this way.

If I can make a final comment about this way of thinking about science, or two final comments: One is that science is not about the data. The empirical content of a scientific theory is not what is relevant. The data serve to suggest the theory, to confirm the theory, to disconfirm the theory, to prove the theory wrong. But these are the tools that we use. What interests us is the content of the theory. What interests us is what the theory says about the world. General relativity says space-time is curved. The data of general relativity are that Mercury's perihelion shifts by an extra 43 arcseconds per century with respect to the value computed with Newtonian mechanics. [A quick numerical check of this figure appears after the excerpt.]

Who cares? Who cares about these details? If that was the content of general relativity, general relativity would be boring. General relativity is interesting not because of its data, but because it tells us that as far as we know today, the best way of conceptualizing space-time is as a curved object. It gives us a better way of grasping reality than Newtonian mechanics, because it tells us that there can be black holes, because it tells us there’s a Big Bang. This is the content of the scientific theory.

All living beings on earth have common ancestors. This is the content of a scientific theory, not the specific data used to check it. So the focus of scientific thinking, I believe, should be on the content of theories, past and present, trying to see what they tell us concretely and what they suggest about changes in our conceptual frame itself.

Scientific thinking vs religious thinking

The final consideration regards just one comment about this understanding of science and this long conflict that has crossed the centuries between scientific thinking and religious thinking. I think often it is misunderstood. The question is, why can’t we live happily together, and why can’t people pray to their gods and study the universe without this continuous clash? I think that this continuous clash is a little bit unavoidable, for the opposite reason from the one often presented. It’s unavoidable not because science pretends to know the answers. But it’s the other way around, because if scientific thinking is this, then it is a constant reminder to ourselves that we don’t know the answers.

In religious thinking, this is often unacceptable. What is unacceptable is not a scientist who says 'I know,' but a scientist who says 'I don't know, and how could you know?' At least in many religions, in some religions, or in some ways of being religious, there is an idea that there are truths one can hold that are not to be questioned. That way of thinking is naturally disturbed by a way of thinking based on continuous revision, not only of theories but even of the core ground of the way in which we think.

The core of science is not certainty, it’s continuous uncertainty

So, summarizing: I think science is not about data, not about the empirical content; it is about our vision of the world. It's about overcoming our own ideas and continuously going beyond common sense. Science is a continuous challenge to common sense, and the core of science is not certainty, it's continuous uncertainty. I would even say it is the joy of taking what we think, being aware that in everything we think there is probably still an enormous amount of prejudice and mistake, and trying to learn to look a little bit wider, knowing that there is always a larger point of view that we can expect in the future.

We are very far from the final theory of the world, in my field, in physics; I think extremely far. Every hope of saying, well, we are almost there, we've solved all the problems, is nonsense. And we are very wrong when we discard the value of theories like quantum mechanics, general relativity or special relativity, for that matter, and throw them away, trying something else randomly. On the basis of what we know, we should learn something more, and at the same time we should take our vision for what it is: the best vision that we have, but one to keep continuously evolving. (…)

String theory's a beautiful theory. It might work, but I suspect it's not going to work. I suspect it's not going to work because it's not sufficiently grounded in everything we know so far about the world, and especially in what I think or perceive as the main physical content of general relativity.  

String theory is a big guess. I think physics has never been guesswork; it has been a way of unlearning how to think about something, and of learning how to think a little bit differently by reading the novelty in the details of what we already know. Copernicus didn't have any new data, any major new idea; he just took Ptolemy, and in the details of Ptolemy (the fact that the equants, the epicycles, the deferents stood in certain proportions to one another) he read the way to look at the same construction from a slightly different perspective and discover that the earth is not the center of the universe.

Einstein, as I said, took Maxwell's theory and classical mechanics seriously to get special relativity. So loop quantum gravity is an attempt to do the same thing: take seriously general relativity, take seriously quantum mechanics, and out of that bring them together, even if this means a theory where there is no time, no fundamental time, so that we have to rethink the world without basic time. The theory, on the one hand, is very conservative, because it's based on what we know. But it's totally radical because it forces us to change something big in our way of thinking.

String theorists think differently. They say well, let’s go out to infinity, where somehow the full covariance of general relativity is not there. There we know what is time, we know what is space, because we’re at asymptotic distances, at large distances. The theory’s wilder, more different, more new, but in my opinion, it’s more based on the old conceptual structure. It’s attached to the old conceptual structure, and not attached to the novel content of the theories that have proven empirically successful. That’s how my way of reading science matches with the specifics of the research work that I do, and specifically of loop quantum gravity.

Of course we don't know. I want to be very clear. I think that string theory is a great attempt to go ahead, done by great people. My only polemical attitude toward string theory comes when I hear (though I hear it less and less now) 'oh, we know the solution already, it's certainly string theory.' That is certainly wrong and false. What is true is that it's a good set of ideas; loop quantum gravity is another good set of ideas. We have to wait and see which of the theories turns out to work, and ultimately to be empirically confirmed.

Should a scientist think about philosophy, or not?

This may take me to another point, which is: should a scientist think about philosophy or not? It's sort of the fashion today to discard philosophy, to say now we have science, we don't need philosophy. I find this attitude very naïve, for two reasons. One is historical. Just look back. Heisenberg would never have done quantum mechanics without being full of philosophy. Einstein would never have done relativity without having read all the philosophers and having a head full of philosophy. Galileo would never have done what he did without having a head full of Plato. Newton thought of himself as a philosopher, started by engaging with Descartes, and had strong philosophical ideas.

But even Maxwell, Boltzmann, I mean, all the major steps of science in the past were made by people who were very aware of the methodological, fundamental, even metaphysical questions being posed. When Heisenberg does quantum mechanics, he is in a completely philosophical frame of mind. He says that in classical mechanics there's something philosophically wrong, there's not enough emphasis on empiricism. It is exactly this philosophical reading that allows him to construct this fantastically new physical theory, this scientific theory, which is quantum mechanics.

             
Paul Dirac and Richard Feynman. From The Strangest Man. Photograph by A. John Coleman, courtesy AIP Emilio Segre Visual Archives, Physics Today collection

The breakdown of this close dialogue between philosophers and scientists is very recent; somehow it came after the war, in the second half of the 20th century. It has worked because in the first half of the 20th century people were so smart. Einstein and Heisenberg and Dirac and company put together relativity and quantum theory and did all the conceptual work. The physics of the second half of the century has been, in a sense, a physics of application of the great ideas of the people of the '30s, of the Einsteins and the Heisenbergs.

When you want to apply these ideas, when you do atomic physics, you need less conceptual thinking. But now we are back to the basics, in a sense. When we do quantum gravity it's not just application. As for the scientists who say they don't care about philosophy: it's not true that they don't care, because they have a philosophy. They are using a philosophy of science. They are applying a methodology. They have a head full of ideas about the philosophy they're using; they're just not aware of them, and they take them for granted, as if this were obvious and clear, when it's far from obvious and clear. They are just taking a position without knowing that there are many other possibilities around that might work much better, and might be more interesting for them.

I think there is narrow-mindedness, if I might say so, in many of my scientist colleagues who don't want to learn what is being said in the philosophy of science. There is also narrow-mindedness in a lot of areas of philosophy and the humanities, where people don't want to learn about science, which is even more narrow-minded. Somehow cultures should reach out and enlarge one another. I'm pushing at an open door if I say it here, but restricting our vision of reality today to just the core content of science, or just the core content of the humanities, is being blind to the complexity of reality, which we can grasp from a number of points of view; these points of view talk to one another enormously, and I believe can teach one another enormously.”

Carlo Rovelli, Italian theoretical physicist, working on quantum gravity and on the foundations of spacetime physics. He is a professor of physics at the University of the Mediterranean in Marseille, France, and a member of the Institut Universitaire de France. To see the whole video and read the transcript, click Science Is Not About Certainty: A Philosophy Of Physics, Edge, May 24, 2012. (Illustration source)
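A quick numerical check of the 43-arcsecond figure cited in the excerpt above, using the standard general-relativistic perihelion-shift formula (6πGM/(c²a(1−e²)) per orbit) and textbook orbital values for Mercury; a sketch for orientation, not part of the transcript:

import math

# Textbook values (SI units); the formula is the standard GR perihelion advance per orbit.
GM_sun = 1.327e20        # gravitational parameter of the Sun, m^3/s^2
c      = 2.998e8         # speed of light, m/s
a      = 5.791e10        # Mercury's semi-major axis, m
e      = 0.2056          # Mercury's orbital eccentricity
period_days = 87.97      # Mercury's orbital period

dphi_per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / period_days
arcseconds = dphi_per_orbit * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcseconds:.1f} arcseconds per century")                 # prints ~43.0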

See also:

Raphael Bousso: Thinking About the Universe on the Larger Scales
David Deutsch: A new way to explain explanation
Galileo and the relationship between the humanities and the sciences
The Relativity of Truth - a brief résumé, Lapidarium notes
Philosophy vs science: which can answer the big questions of life?
☞ ‘Cognition, perception, relativity’ tag on Lapidarium notes

Apr
25th
Wed
permalink

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism



Waking Life is an American animated film (rotoscoped from live-action footage), directed by Richard Linklater and released in 2001. The entire film was shot on digital video, and a team of artists, working on computers, then drew stylized lines and colors over each frame.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
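Kurzweil's own model compounds faster than simple doubling (which is how he arrives at roughly 20,000); a minimal sketch that only assumes the rate of progress doubles once per decade already lands at the same order of magnitude:

# Toy model: the rate of progress doubles each decade, measured in
# "years of progress at today's rate" per calendar year.
total = 0.0
rate = 1.0                   # today's rate, at the start of the century
for decade in range(10):     # ten decades in the 21st century
    total += rate * 10       # progress accumulated during this decade
    rate *= 2                # rate doubles entering the next decade
print(total)                 # 10230.0 -> on the order of 10,000 "years" of progress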

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you've seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you're beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to the agricultural revolution, the scientific revolution and the industrial revolution, you're looking at 10,000 years, 400 years, 150 years. You're seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it's gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that's only the beginning of the evolutionary cycle, because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo in a way that could be imagined as an enormous instantaneous fulfillment of human, human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences, now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution could be dramatically counter-intuitive. That's the interesting part. The old evolution is cold. It's sterile. It's efficient, okay? And its manifestations are those social adaptations. We're talking about parasitism, dominance, morality, okay? Uh, war, predation. These would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward's University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining "telescopic evolution." See Eamonn Healy speaks about telescopic evolution and the future of humanity, from Brandon Sergent; Transcript.

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Mar
21st
Wed
permalink

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe

              

Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED magazine said:

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness”.. Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences."  Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone.

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the "Omega Point". The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change, and was part of evolution's epic quest for self-awareness. Here Teilhard is in accord with Julian Huxley (Aldous' brother, a biologist) and Carl Sagan when they observed that "we are a way for the cosmos to know itself." Sri Aurobindo's The Life Divine traces out this evolution of consciousness as well, through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution's slow and dynamic quest towards understanding itself.

         

I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of evolution of information technology, the very character of our minds shifts. McLuhan's Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on "Tune in, Turn on, Drop Out" and very influential on Terence McKenna.

One difference between now and Plato's time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we "upgrade" every six months or so… Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this "phase transition" of consciousness, which may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin's Pharmacy suggests that ecodelics were precisely such a mutation in information technology, one that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are "extraordinarily sensitive to initial rhetorical traditions." What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language, music, etc., on our consciousness, and thereby offer an awareness of our ability to affect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well-established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of "memes" is that it is often highly reductionistic - it often forgets that ideas have an ecology too; they must be "cultured." Here I would argue, drawing on Lawrence Lessig's work on the commons, that the "brain" is a necessary but insufficient "spawning ground" for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as "self-replicating" molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann's 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins's definition of a meme is quite powerful:

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind. [The replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.”

This notion that "the vector of transmission is language" is very compelling. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change. It kind of affects how you think about the phrase "words have power". This insight reminds me of a quote that describes, in words, the subjective ecstasy a mind feels upon having a transcendent realization that seems to advance evolution:

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as "pattern integrities", Ray Kurzweil says we are "patterns of information". James Gleick's new book, The Information, says that “information may be more primary than matter”..  what do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely? 

Richard Doyle: First: It is important to remember that the history of the concept and tools of "information" is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is "run" - you can't predict the outcome of even many trivial programs without running the program. So to say that "information may be more primary than matter", we have to remember that "information" does not mean "free from constraints." Thermodynamics – including entropy – remains.
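A minimal illustration of that "only meaningful when run" point, in the spirit of Wolfram's computational irreducibility: the update rule of elementary cellular automaton Rule 30 fits in one line, yet there is no known shortcut to its later rows other than computing them. The grid width and step count below are arbitrary choices for the sketch.

# Rule 30: next cell = left XOR (center OR right), on a small wrapped grid.
WIDTH, STEPS = 31, 15
row = [0] * WIDTH
row[WIDTH // 2] = 1                      # start from a single "on" cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [row[(i - 1) % WIDTH] ^ (row[i] | row[(i + 1) % WIDTH])
           for i in range(WIDTH)]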

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as "patterns of information" can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as "dynamic patterns of multiply layered and interconnected self-transforming information" is just as accurate a description of human beings as "meaningless noisy monkeys who think they see god", and is likely to have much better effects. A nice emphasis on this "pattern", rather than the bits that make it up, can be found in Carl Sagan's remark that "the beauty of a living thing is not the atoms that go into it, but the way those atoms are put together."

Q: Richard Dawkins declared in 1986 that ”What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions, […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as an abstraction and take it out of its context – like a quotation, or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, shrinking the resources it takes to store or process it. But "information, words, instructions" all require physical instantiation to even be "information, words, instructions." Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to "understand" life. I actually think Dawkins gets it wrong here – he is talking about "modeling" life, not "understanding" it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore of the beginning of the "informatic" tradition of biology that Dawkins speaks in here, knew this very well and insisted on the importance of first-person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together. Again, John 1:1 is closer to Dawkins's position here than he may be comfortable with: "In the beginning was the word, and the word was god, and the word was with god" is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as "mere words" that we continually forget that words are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words "I love you" can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tunable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… What is the connection? What is the pattern that connects here?

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it - can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word "archetype", you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don't even know it, and we therefore don't know that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2,400-year-old organism that is mostly invisible to a very short-lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin's Pharmacy I am trying to offer a way to experiment with the idea of the plant planet as well as the "mycelium" archetype. One powerful aspect of treating the mycelium as humanity's archetype is that it is "distributed": it does not operate via a center of control but through cross-connection "distributed" over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger-scale reality of our dense ecological interconnection. In the book I point to the Upanishads' "Tat Tvam Asi" as a way of comprehending how we can be both totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality.. These notions echo some of Terence McKenna's ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality?? 

Richard Doyle: It's incredibly fun and insightful to echo Terence McKenna. He's really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He's got a bit of Whitman in him, with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd, remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological storyteller and poet (literally "sound") who so transfixes Hera (Zeus's wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection's role in the evolution of poetic & shamanic language!

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of "external symbolic storage" on human culture – as a onetime student of McLuhan's, Donald was following up on Plato's insight, which I mentioned above, that writing changes how we think and, therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels. Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media - run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these "discourses", but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the Arctic ice sheet – remember that? - to a putatively civilian political structure Eisenhower rightly warned us about: the military-industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as descriptions of reality rather than as part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. I call these "rhetorical softwares" – in fact I called them that in my first two books, before I really knew about Wilson's work or had worked through Korzybski with any intensity.

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. "Inner" and "outer" are models for aspects of reality – words that map the world only imperfectly. Our "inner world" - subjective experience – is all we ever experience, so if we change it, we will obviously see a change in what we label "external" reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this "inner" and "outer" one – this is why one of my aliases is "mobius." A Möbius strip helps make clear that "inside" and "outside" are… labels. As you run your finger along a Möbius strip, the "inside" becomes "outside" and the "outside" becomes "inside."

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps, root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” - our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are ipods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” – the mindset and context of their use – I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry?

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and that of molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who won the Nobel prize with James Watson and Maurice Wilkins for determining the structure of DNA in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I had read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in the “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype – suggest that, used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” – recipes of code – while the mainstream view treated it implicitly as semantic, as “words with meaning.”

iPods, Internet, etc.: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis, while I was working on the book, and it was really confirmation of a lot of what I was seeing indicated by my conceptual model of what is going on, which is as follows: sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” – tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware) and Mark Pesce were or are all psychonauts.

Q: Cyborg Anthropologist Amber Case has written about Techno-social wormholes… the instant compression of time and space created every time we make a telephone call… What do you make of this compression of time and space made possible by the engineering “magic” of technology?

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception.”) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn the phone off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall – and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that enable such a collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically-induced lucid dream where we can live forever and our world is built out of our memories and “sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgeon more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike. Can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting that “Tat Tvam Asi” – there is an aspect of you that is connected to Everything – and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that “to understand is to perceive patterns”… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns. They help us “see more”. What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in Silicon Valley says that users…

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke’s description of ego death. This is by definition an experience in which our maps of the world are humbled. In the breakdown of our ordinary worldview – such as when a (now formerly) secular being such as myself finds himself feeling unmistakably sacred – we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get a glimpse of reality, even an involuntary one. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or koans – suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality – reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!” Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the Big Bang – he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life… We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower, per gram per second of its livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it.

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.”…

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stops and starts. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence. Living systems, according to an emerging paradigm of Stanley Salthe, Rod Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy, and the universe is seeking to dissipate ever greater amounts of entropy.

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “We are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and I need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names for suggesting otherwise – animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate along with the dependence of many of us – we who are more bacterial than human by dry weight - upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English and Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Mar
7th
Wed
permalink

Is The World An Idea?

Plato, Hulton Archive/Getty Images

Plato was one who made the divide between the world of ideas and the world of the senses explicit. In his famous Allegory of the Cave, he imagined a group of prisoners who had been chained inside a cave all their lives; all they could see were shadows projected on a wall, which they conceived as their reality. Unbeknownst to them, a fire behind them illuminated objects and created the shadows they saw, which could be manipulated to deceive them. In contrast, the philosopher could see reality as it truly is, a manifestation of ideas freed from the deception of the senses. In other words, if we want to understand the true nature of reality, we shouldn’t rely on our senses; only ideas are truly pure, freed from the distortions caused by our limited perception of reality.

Plato thus elevated the human mind to a god-like status, given that it can find truth through reason, in particular through the rational construction of ideal “Forms,” which are the essence of all objects we see in reality. For example, all tables share the Form of “tableness,” even if every table is different. The Form is an ideal and, thus, a blueprint of perfection. If I ask you to imagine a circle, the image of a circle you hold in your head is the only perfect circle: any representation of that circle, on paper or on a blackboard, will be imperfect. To Plato, intelligence was the ability to grasp the world of Forms and thus come closer to truth.

Due to its connection with the search for truth, it’s no surprise that Plato’s ideas influenced both scientists and theologians. If the world is made out of Forms, say geometrical forms, reality may be described mathematically, combining the essential forms and their relations to describe the change we see in the world. Thus, by focusing on the essential elements of reality as mathematical objects and their relations we could, perhaps, grasp the ultimate nature of reality and so come closer to timeless truths.

The notion that mathematics is a portal to final truths holds tremendous intellectual appeal and has influenced some of the greatest names in the history of science, from Copernicus, Kepler, Newton, and Einstein to many present-day physicists searching for a final theory of nature based upon a geometrical scaffolding, such as superstring theories. (…)

Taken in context, we can see where modern scientific ideas that relate the ultimate nature of reality to geometry come from. If it’s not God the Geometer anymore, Man the Geometer persists. That this vision offers a major drive to human creativity is undeniable.

We do imagine the universe in our minds, with our minds, and many scientific successes are a byproduct of this vision. Perhaps we should take Nicholas of Cusa’s advice to heart and remember that whatever we achieve with our minds will be an expression of our own creativity, having little or nothing to do with ultimate truths.”

Marcelo Gleiser, Brazilian Professor of Natural Philosophy, Physics and Astronomy at Dartmouth College, USA, Is The World An Idea?, NPR, March 7, 2012.

See also:

Cognition, perception, relativity tag on Lapidarium notes

Nov
2nd
Wed
permalink

Daniel Kahneman on thinking ‘Fast And Slow’: How We Aren’t Made For Making Decisions


We’re not in control because our preferences come from a lot of places that we don’t know about and, second, there are really some characteristics of the way a mind works that are incompatible with perfect decision-making. In particular, we have a very narrow view of what is going on and that narrow view - we take decisions as if that were the only decision that we’re facing. We don’t see very far in the future. We are very focused on one idea at a time, one problem at a time and all these are incompatible with full rationality as economic theory assumes it.”

‘Fast And Slow’: Pondering The Speed Of Thought, NPR, October 27, 2011 (transcript)

"Take for example the study out of the National Academy of Sciences, which found that Israeli parole judges — known for turning down parole applications — were more likely to award parole in cases they heard immediately after taking a meal break.

"Presumably they are hungry, but certainly they are tired, they’re depleted," Kahneman says of the judges’ state when they are a few hours away from a meal. “When you’re depleted, you tend to fall back on default actions, and the default action in that case is apparently to deny parole. So yes, people are strongly influenced by the level of glucose in the brain.”

The implications of such a study are tremendous: If democratic society is based on people making decisions, what does it mean when all it takes to influence those decisions is a little bit of glucose?”

— Robert Siegel interview with Daniel Kahneman, 'Fast And Slow': Pondering The Speed Of Thought, NPR, October 27, 2011

"Crucial policy decisions are often based on statistical inferences, but as Mr. Kahneman notes, we “pay more attention to the content of messages than to information about their reliability.” The effect is “a view of the world around us that is simpler and more coherent than the data justify.”

One major effect of the work of Messrs. Kahneman and Tversky has been to overturn the assumption that human beings are rational decision-makers who weigh all the relevant factors logically before making choices. When the two men began their research, it was understood that, as a finite device with finite time, the brain had trouble calculating the costs and benefits of every possible course of action and that, separately, it was not very good at applying rules of logical inference to abstract situations. What Messrs. Kahneman and Tversky showed went far beyond this, however. They argued that, even when we have all the information that we need to arrive at a correct decision, and even when the logic is simple, we often get it drastically wrong. (…)

This "conjunction fallacy" (like the focusing illusion) illustrates a broader pattern—of human reasoning being distorted by systematic biases. To understand one source of such errors, Mr. Kahneman divides the mind into two broad components. "System 1" makes rapid, intuitive decisions based on associative memory, vivid images and emotional reactions. "System 2" monitors the output of System 1 and overrides it when the result conflicts with logic, probability or some other decision-making rule. Alas, the second system is a bit lazy—we must make a special effort to pay attention, and such effort consumes time and energy.

You can get an idea of the two-system distinction by trying to solve this simple problem, from the work of the psychologist Shane Frederick: “If it takes 5 machines 5 minutes to make 5 widgets, how many minutes does it take 100 machines to make 100 widgets?” The answer “100 minutes” leaps to mind (System 1 at work), but it is wrong; a bit of reflective thought (by System 2) leads to “five minutes,” the right answer.
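(A small worked check, added editorially rather than taken from Chabris’s article: the step System 1 skips is converting the puzzle into a rate per machine.)

$$
\text{rate per machine} = \frac{5\ \text{widgets}}{5\ \text{machines}\times 5\ \text{minutes}} = \tfrac{1}{5}\ \text{widget per machine-minute},
\qquad
t = \frac{100\ \text{widgets}}{100\ \text{machines}\times \tfrac{1}{5}\ \text{widget per machine-minute}} = 5\ \text{minutes}.
$$

Each machine makes one widget every five minutes, so a hundred machines working in parallel turn out a hundred widgets in those same five minutes.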

The divided mind is evident in other situations where we are not as “rational” as we might assume. Most people require a larger expected outcome to take a risk when a sure thing is available as an alternative (risk aversion), and they dislike losses much more than they like gains of equivalent size (loss aversion). These now-commonplace concepts are central to prospect theory, perhaps the most influential legacy of Messrs. Kahneman and Tversky.

Mr. Kahneman notes that we harbor two selves when it comes to happiness, too: one self that experiences pain and pleasure from moment to moment and another that remembers the emotions associated with complete events and episodes. The remembering self does not seem to care how long an experience was if it was getting better toward the end—so a longer colonoscopy that ended with decreasing pain will be seen later as preferable to a shorter procedure that involved less total pain but happened to end at a very painful point. Complications like this should make us wary of letting simplistic measures of happiness determine national policy and social goals.

Mr. Kahneman stresses that he is just as susceptible as the rest of us to the cognitive illusions he has discovered. He tries to recognize situations when mistakes are especially likely to occur—such as when he is starting a big project or making a forecast—and then act to rethink his System 1 inclinations. The tendency to underestimate the costs of future projects, he notes, is susceptible to taking an “outside view”: looking at your own project as an outsider would. To avoid overconfidence, Mr. Kahneman recommends an exercise called the “premortem,” developed by the psychologist Gary Klein: Before finalizing a decision, imagine that, a year after it has been made, it has turned out horribly, then write a history of how it went wrong and why. (…)

Mr. Kahneman’s stated goals are minimalist: to “enrich the vocabulary that people use” when they talk about decisions, so that his readers benefit from his work at the “proverbial watercooler, where opinions are shared and gossip is exchanged.”

— Christopher F. Chabris, Why the Grass Seems Greener, WSJ.com, Oct 22, 2011 (Illustration source)

Daniel Kahneman is an Israeli-American psychologist and Nobel laureate, notable for his work on the psychology of judgment and decision-making, behavioral economics and hedonic psychology.

See also:

Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Daniel Kahneman: How cognitive illusions blind us to reason, The Observer, 30 October 2011 
Daniel Kahneman on the riddle of experience vs. memory
The irrational mind - David Brooks on the role of emotions in politics, policy, and life
The Argumentative Theory: ‘Reason evolved to win arguments, not seek truth’
Risk perception: What You Don’t Know Can Kill You

Oct
3rd
Mon
permalink

Time and the Brain. Eagleman: ‘Time is not just a neuronal computation—a matter for biological clocks—but a window on the movements of the mind’


"Instead of reality being passively recorded by the brain, it is actively constructed by it."

David Eagleman, Incognito: The Secret Lives of the Brain, Pantheon Books, 2011

“Clocks offer at best a convenient fiction, [David Eagleman] says. They imply that time ticks steadily, predictably forward, when our experience shows that it often does the opposite: it stretches and compresses, skips a beat and doubles back.”

Just how many clocks we contain still isn’t clear. The most recent neuroscience papers make the brain sound like a Victorian attic, full of odd, vaguely labelled objects ticking away in every corner. The circadian clock, which tracks the cycle of day and night, lurks in the suprachiasmatic nucleus, in the hypothalamus. The cerebellum, which governs muscle movements, may control timing on the order of a few seconds or minutes. The basal ganglia and various parts of the cortex have all been nominated as timekeepers, though there’s some disagreement on the details.

The standard model, proposed by the late Columbia psychologist John Gibbon in the nineteen-seventies, holds that the brain has “pacemaker” neurons that release steady pulses of neurotransmitters. More recently, at Duke, the neuroscientist Warren Meck has suggested that timing is governed by groups of neurons that oscillate at different frequencies. At U.C.L.A., Dean Buonomano believes that areas throughout the brain function as clocks, their tissue ticking with neural networks that change in predictable patterns. “Imagine a skyscraper at night,” he told me. “Some people on the top floor work till midnight, while some on the lower floors may go to bed early. If you studied the patterns long enough, you could tell the time just by looking at which lights are on.”

Time isn’t like the other senses, Eagleman says. Sight, smell, touch, taste, and hearing are relatively easy to isolate in the brain. They have discrete functions that rarely overlap: it’s hard to describe the taste of a sound, the color of a smell, or the scent of a feeling. (Unless, of course, you have synesthesia—another of Eagleman’s obsessions.) But a sense of time is threaded through everything we perceive. It’s there in the length of a song, the persistence of a scent, the flash of a light bulb. “There’s always an impulse toward phrenology in neuroscience—toward saying, ‘Here is the spot where it’s happening,’ ” Eagleman told me. “But the interesting thing about time is that there is no spot. It’s a distributed property. It’s metasensory; it rides on top of all the others.”

The real mystery is how all this is coördinated. When you watch a ballgame or bite into a hot dog, your senses are in perfect synch: they see and hear, touch and taste the same thing at the same moment. Yet they operate at fundamentally different speeds, with different inputs. Sound travels more slowly than light, and aromas and tastes more slowly still. Even if the signals reached your brain at the same time, they would get processed at different rates. The reason that a hundred-metre dash starts with a pistol shot rather than a burst of light, Eagleman pointed out, is that the body reacts much more quickly to sound. Our ears and auditory cortex can process a signal forty milliseconds faster than our eyes and visual cortex—more than making up for the speed of light. It’s another vestige, perhaps, of our days in the jungle, when we’d hear the tiger long before we’d see it.
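(A rough back-of-the-envelope figure, added here rather than drawn from Bilger’s article, assuming the standard speed of sound in air of about 343 m/s and taking the forty-millisecond processing advantage from the text: the auditory head start only outweighs sound’s slower travel when the source is close.)

$$
d \approx v_{\text{sound}} \times \Delta t \approx 343\ \tfrac{\text{m}}{\text{s}} \times 0.040\ \text{s} \approx 14\ \text{m}.
$$

Nearer than roughly fourteen metres, as with a starter’s pistol a few metres away, the reaction to sound still comes first; much farther out, light’s travel-time advantage takes over.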

In Eagleman’s essay “Brain Time,” published in the 2009 collection “What’s Next? Dispatches on the Future of Science,” he borrows a conceit from Italo Calvino’s “Invisible Cities.” The brain, he writes, is like Kublai Khan, the great Mongol emperor of the thirteenth century. It sits enthroned in its skull, “encased in darkness and silence,” at a lofty remove from brute reality. Messengers stream in from every corner of the sensory kingdom, bringing word of distant sights, sounds, and smells. Their reports arrive at different rates, often long out of date, yet the details are all stitched together into a seamless chronology. The difference is that Kublai Khan was piecing together the past. The brain is describing the present—processing reams of disjointed data on the fly, editing everything down to an instantaneous now. (…)

[Eagleman] thought of time not just as a neuronal computation—a matter for biological clocks—but as a window on the movements of the mind. (…)

You feel it now—not in half a second. But perception and reality are often a little out of register, as the saccade experiment showed. If all our senses are slightly delayed, we have no context by which to measure a given lag. Reality is a tape-delayed broadcast, carefully censored before it reaches us.

“Living in the past may seem like a disadvantage, but it’s a cost that the brain is willing to pay,” Eagleman said. “It’s trying to put together the best possible story about what’s going on in the world, and that takes time.” Touch is the slowest of the senses, since the signal has to travel up the spinal cord from as far away as the big toe. That could mean that the over-all delay is a function of body size: elephants may live a little farther in the past than hummingbirds, with humans somewhere in between. The smaller you are, the more you live in the moment. (…)

[T]ime and memory are so tightly intertwined that they may be impossible to tease apart.

One of the seats of emotion and memory in the brain is the amygdala, he explained. When something threatens your life, this area seems to kick into overdrive, recording every last detail of the experience. The more detailed the memory, the longer the moment seems to last. “This explains why we think that time speeds up when we grow older,” Eagleman said—why childhood summers seem to go on forever, while old age slips by while we’re dozing. The more familiar the world becomes, the less information your brain writes down, and the more quickly time seems to pass. (…)

“Time is this rubbery thing,” Eagleman said. “It stretches out when you really turn your brain resources on, and when you say, ‘Oh, I got this, everything is as expected,’ it shrinks up.” The best example of this is the so-called oddball effect—an optical illusion that Eagleman had shown me in his lab. It consisted of a series of simple images flashing on a computer screen. Most of the time, the same picture was repeated again and again: a plain brown shoe. But every so often a flower would appear instead. To my mind, the change was a matter of timing as well as of content: the flower would stay onscreen much longer than the shoe. But Eagleman insisted that all the pictures appeared for the same length of time. The only difference was the degree of attention that I paid to them. The shoe, by its third or fourth appearance, barely made an impression. The flower, more rare, lingered and blossomed, like those childhood summers. (…)”

Burkhard Bilger speaking about David Eagleman, neuroscientist at Baylor College of Medicine, where he directs the Laboratory for Perception and Action and the Initiative on Neuroscience and Law, The Possibilian, The New Yorker, April 25, 2011 (Illustration source)

See also:

David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
☞ David Eagleman, Brain Time, Edge, June 24, 2009 
David Eagleman on the conscious mind
The Experience and Perception of Time, Stanford Encyclopedia of Philosophy
Time tag on Lapidarium notes

Sep
22nd
Thu
permalink

Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’


"In the mid 1970’s, Tim Wilson and Dick Nisbett opened the basement door with their landmark paper entitled “Telling More Than We Can Know,” [pdf] in which they reported a series of experiments showing that people are often unaware of the true causes of their own actions, and that when they are asked to explain those actions, they simply make stuff up. People don’t realize they are making stuff up, of course; they truly believe the stories they are telling about why they did what they did.  But as the experiments showed, people are telling more than they can know. The basement door was opened by experimental evidence, and the unconscious took up permanent residence in the living room. Today, psychological science is rife with research showing the extraordinary power of unconscious mental processes. (…)

At the center of all his work lies a single enigmatic insight: we seem to know less about the worlds inside our heads than about the world our heads are inside.

The Torah asks this question: “Is not a flower a mystery no flower can explain?” Some scholars have said yes, some scholars have said no. Wilson has said, “Let’s go find out.” He has always worn two professional hats — the hat of the psychologist and the hat of the methodologist. He has written extensively about the importance of using experimental methods to solve real world problems, and in his work on the science of psychological change — he uses a scientific flashlight to chase away a whole host of shadows by examining the many ways in which human beings try to change themselves — from self-help to psychotherapy — and asking whether these things really work, and if so, why? His answers will surprise many people and piss off the rest. I predict that this new work will be the center of a very interesting storm.”

Daniel Gilbert, Harvard College Professor of Psychology at Harvard University; Director of Harvard’s Hedonic Psychology Laboratory; Author, Stumbling on Happiness.

“It’s not the objective environment that influences people, but their constructs of the world. You have to get inside people’s heads and see the world the way they do. You have to look at the kinds of narratives and stories people tell themselves as to why they’re doing what they’re doing. What can get people into trouble sometimes in their personal lives, or for more societal problems, is that these stories go wrong. People end up with narratives that are dysfunctional in some way.

We know from cognitive behavioral therapy and clinical psychology that one way to change people’s narratives is through fairly intensive psychotherapy. But social psychologists have suggested that, for less severe problems, there are ways to redirect narratives more easily that can have amazingly powerful long-term effects. This is an approach that I’ve come to call story editing. By giving people little prompts, suggestions about the ways they might reframe a situation, or think of it in a slightly different way, we can send them down a narrative path that is much healthier than the one they were on previously. (…)

This little message that maybe it’s not me, it’s the situation I’m in, and that that can change, seemed to alter people’s stories in ways that had dramatic effects down the road. Namely, people who got this message, as compared to a control group that did not, got better grades over the next couple of years and were less likely to drop out of college. Since then, there have been many other demonstrations of this sort that show that little ways of getting people to redirect their narrative from one path down another is a powerful tool to help people live better lives. (…)

Think back to the story editing metaphor: What these writing exercises do is make us address problems that we haven’t been able to make sense of and put us through a sense-making process of reworking it in such a way that we gain a new perspective and find some meaning, so that we basically come up with a better story that allows us to put that problem behind us. This is a great example of a story editing technique that can be quite powerful. (…)

Social psychology is a branch of psychology that began in the 1950s, developed mostly by immigrants from Germany who were escaping the Nazi regime — Kurt Lewin being the most influential one. What they had to offer at that time was largely an alternative to behaviorism. Instead of looking at behavior as solely the product of our objective reinforcement environment, Lewin and others said you have to get inside people’s heads and look at the world as they perceive it. These psychologists were very influenced by Gestalt psychologists who were saying the same thing about perception, and they applied this lesson to the way the mind works in general. (…) But to be honest, the field is a little hard to define. What is social psychology? Well, the social part is about interactions with other people, and topics such as conformity are active areas of research. (…)

Most economists don’t take the social psychological approach of trying to get inside the heads of people and understanding how they interpret the world. (…)

My dream is that policymakers will become more familiar with this approach and be as likely to call upon a social psychologist as an economist to address social issues. (…)

Another interesting question is the role of evolutionary theory in psychology, and social psychology in particular.  (…)

Evolutionary psychology has become a dominant force in the field. There are many who use it as their primary theoretical perspective, as a way to understand why we do what we do. (…)

There are some striking parallels between psychoanalytic theory and evolutionary theory. Both theories, at some general level are true. Evolutionary theory, of course, shows how the forces of natural selection operated on human beings. Psychoanalytic theory argues that our childhood experiences mold us in certain ways and give us outlooks on the world. Our early relationships with our parents lead to unconscious structures that can be very powerful. (…)

One example where evolutionary psychology led to some interesting testable hypotheses is work by Jon Haidt, my colleague at the University of Virginia. He has developed a theory of moral foundations that says that all human beings endorse the same list of moral values, but that people of different political stripes believe some of these values are more important than others. In other words, liberals may have somewhat different moral foundations than conservatives. Jon has persuasively argued that one reason that political discourse has become so heated and divisive in our country is that there is a lack of understanding in one camp of the moral foundations that the other camp is using to interpret and evaluate the world. If we can increase that understanding, we might lower the heat and improve the dialogue between people on opposite ends of the political spectrum.

Another way in which evolutionary theory has been used is to address questions about the origins of religion. This is not a literature I have followed that closely, to be honest, but there’s obviously a very interesting discourse going on about group selection and the origins and purpose of religion. The only thing I’ll add is, back to what I’ve said before about the importance of having narratives and stories to give people a sense of meaning and purpose, well, religion is obviously one very important source of such narratives. Religion gives us a sense that there is a purpose and a meaning to life, the sense that we are important in the universe, and that our lives aren’t meaningless specks like a piece of sand on a beach. That can be very powerful for our well-being. I don’t think religion is the only way to accomplish that; there are many belief systems that can give us a sense of meaning and purpose other than religion. But religion can fill that void.”

Timothy D. Wilson, the Sherrell J. Aston Professor of Psychology at the University of Virginia and a researcher of self-knowledge and affective forecasting, The Social Psychological Narrative — or — What Is Social Psychology, Anyway?, Edge, 6 July 2011 (video and full transcript) (Illustration: Hope Kroll, Psychological 3-D narrative)

See also:

Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Iain McGilchrist on The Divided Brain and the Making of the Western World
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
David Deutsch: A new way to explain explanation
Cognition, perception, relativity tag on Lapidarium notes

Sep
12th
Mon
permalink

Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking


"The power of settings, the power of priming, and the power of unconscious thinking, all of those are a major change in psychology. I can’t think of a bigger change in my lifetime. You were asking what’s exciting? That’s exciting, to me."

If you want to characterize how something is done, then one of the most powerful ways of characterizing the way the mind does anything is by looking at the errors that the mind produces while it’s doing it because the errors tell you what it is doing. Correct performance tells you much less about the procedure than the errors do.

We focused on errors. We became completely identified with the idea that people are generally wrong. We became like prophets of irrationality. We demonstrated that people are not rational. (…)

That was 40 years ago, and a fair amount has happened in the last 40 years. Not so much, I would say, about the work that we did in terms of the findings that we had. Those pretty much have stood, but it’s the interpretation of them that has changed quite a bit. It is now easier than it was to speak about the mechanisms that we had very little idea about, and to speak about, to put in balance the flaws that we were talking about with the marvels of intuition. (…)

Flaws

This is something that happens quite a lot, at least in psychology, and I suppose it may happen in other sciences as well. You get an impression of the relative importance of two topics by how much time is spent on them when you’re teaching them. But you’re teaching what’s happening now, you’re teaching what’s recent, what’s current, what’s considered interesting, and so there is a lot more to say about flaws than about marvels. (…)

We understand the flaws and the marvels a little better than we did. (…)

One way a thought can come to mind involves orderly computation, and doing things in stages, and remembering rules, and applying rules. Then there is another way that thoughts come to mind. You see this lady, and she’s angry, and you know that she’s angry as quickly as you know that her hair is dark. There is no sharp line between intuition and perception. You perceive her as angry. Perception is predictive. You know what she’s going to say, or at least you know something about what it’s going to sound like, and so perception and intuition are very closely linked. In my mind, there never was a very clean separation between perception and intuition. Because of the social context we’re in here, you can’t ignore evolution in anything that you do or say. But for us, certainly for me, the main thing in the evolutionary story about intuition, is whether intuition grew out of perception, whether it grew out of the predictive aspects of perception.

If you want to understand intuition, it is very useful to understand perception, because so many of the rules that apply to perception apply as well to intuitive thinking. Intuitive thinking is quite different from perception. Intuitive thinking has language. Intuitive thinking has a lot of world knowledge organized in different ways than mere perception. But some very basic characteristics that we’ll talk about of perception are extended almost directly into intuitive thinking.

What we understand today much better than what we did then is that there are, crudely speaking, two families of mental operations, and I’ll call them “Type 1” and “Type 2” for the time being because this is the cleaner language. Then I’ll adopt a language that is less clean, and much more useful.

Type 1 is automatic, effortless, often unconscious, and associatively coherent, and I’ll talk about that. And Type 2 is controlled, effortful, usually conscious, tends to be logically coherent, rule-governed. Perception and intuition are Type 1— it’s a rough and crude characterization. Practiced skill is Type 1, that’s essential, the thing that we know how to do like driving, or speaking, or understanding language and so on, they’re Type 1. That makes them automatic and largely effortless, and essentially impossible to control.

Type 2 is more controlled, slower, is more deliberate. (…) Type 2 is who we think we are. I would say that, if one made a film on this, Type 2 would be a secondary character who thinks that he is the hero because that’s who we think we are, but in fact, it’s Type 1 that does most of the work, and it’s most of the work that is completely hidden from us. (…)

'Associative coherence'

Everything reinforces everything else, and that is something that we know. You make people recoil; they turn negative. You make people shake their heads (you put earphones on people’s heads, and you tell them we’re testing those earphones for integrity, so we would like you to move your head while listening to a message, and you have them move their head this way, or move their head that way, and you give them a political message) they believe it if they’re doing “this”, and they don’t believe it if they’re doing “that”. Those are not huge effects, but they are effects. They are easily shown with a conventional number of subjects. It’s highly reliable.

The thing about the system is that it settles into a stable representation of reality, and that is just a marvelous accomplishment. That’s a marvel, not a flaw. Now, coherence has its cost. Coherence means that you’re going to adopt one interpretation in general. Ambiguity tends to be suppressed. This is part of the mechanism that you have here: ideas activate other ideas, and the more coherent they are, the more likely they are to activate each other. Other things that don’t fit fall by the wayside. We’re enforcing a coherent interpretation. We see the world as much more coherent than it is.

That is something that we see in perception as well. You show people ambiguous stimuli; they’re not aware of the ambiguity. I’ll give you an example. You hear the word “bank”, and most people interpret “bank” as a place with vaults, and money, and so on. But in context, if you’re reading about streams and fishing, “bank” means something else. You’re not conscious, when you’re getting one meaning, that you are not getting the other. You’re never conscious of the ambiguity; it’s possible that both meanings are activated, but one gets quickly suppressed. That mechanism of creating coherent interpretations is something that happens (I keep using the word “happens” - this is not something we do, this is something that happens to us). The same is true for perception. For Plato it was ideas sort of thrusting themselves into our eyes, and that’s the way we feel. We are passive when it comes to System 1. When it comes to System 2, and to deliberate thought, we are the authors of our own actions, and so the phenomenology of it is radically different. (…)

'What you see is all there is'

It is a mechanism that tends not to be sensitive to information it does not have. It’s very important to have a mechanism like that. (…) This is a mechanism that takes whatever information is available and makes the best possible story out of it, and it tells you very little about information it doesn’t have. So what you get is people jumping to conclusions.

'Machine for jumping to conclusions'

The jumping to conclusions is immediate, based on very small samples, and furthermore on unreliable information. You can give people details and say this information is probably not reliable, and unless it is rejected as a lie, they will draw full inferences from it. What you see is all there is. (…)

Overconfidence

The confidence that people have in their beliefs is not a measure of the quality of the evidence; it is a judgment of the coherence of the story that the mind has managed to construct. Quite often you can construct very good stories out of very little evidence: when there is little evidence, there is no conflict, and the story is going to end up good. People tend to have great belief, great faith in stories that are based on very little evidence. The system generates what Amos [Tversky] and I call “natural assessments”, that is, computations that get performed automatically. For example, we get computations of the distance between us and other objects, not because that’s something that we intend to do; this is something that happens to us in the normal run of perception.

But we don’t compute everything. There is a subset of computations that we perform, and other computations we don’t.

You see this array of lines.

There is evidence, some of it collected by my wife, that people register the average length of these lines effortlessly, in one glance, while doing something else. The extraction of information about a prototype is immediate. But if you were asked what the sum is, what the total length of these lines is, you can’t do this. You got the average for free; you didn’t get the sum for free. In order to get the sum, you’d have to get an estimate of the number, and an estimate of the average, and multiply the average by the number, and then you’ll get something. But you did not get that as a natural assessment. So there is a really important distinction between natural assessments and things that are not naturally assessed. There are questions that are easy for the organism to answer, and other questions that are difficult for the organism to answer, and that makes a lot of difference.

While I’m at it, the difference between averages and sums is an important one, because there are variables that have the characteristic that I will call “sum-like.” They’re extensional. They’re sum-like variables. Economic value is a sum-like variable. (…)”

Daniel Kahneman, Israeli-American psychologist and Nobel laureate, notable for his work on the psychology of judgment and decision-making, behavioral economics and hedonic psychology. To see and read the full lecture, click The Marvels and the Flaws of Intuitive Thinking, Edge Master Class 2011, Edge, Jul 17, 2011. (Illustration: Maija Hurme)
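The average/sum passage above reduces to simple arithmetic: an average is registered directly, but a total has to be reconstructed as count × average. The sketch below is a minimal illustration of that relation; the line lengths and variable names are purely hypothetical and are not from Kahneman’s lecture or data.

```python
# Minimal sketch (not from the lecture): a total is not a "natural assessment";
# it has to be reconstructed from two separate estimates, the count and the average.

line_lengths_cm = [3.1, 4.8, 2.9, 5.2, 4.1, 3.7]  # hypothetical stimulus

# What a glance gives you "for free": an impression of the average (the prototype).
estimated_average = sum(line_lengths_cm) / len(line_lengths_cm)

# What it does not give you: the total length. To estimate it, you must combine
# an estimate of how many lines there are with the estimate of the average.
estimated_count = len(line_lengths_cm)
reconstructed_total = estimated_count * estimated_average

print(f"average length ~ {estimated_average:.1f} cm")
print(f"total length   ~ {reconstructed_total:.1f} cm  (count x average)")
```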

See also:

Explorations of the Mind: Intuition - The Marvels and the Flaws, Daniel Kahneman, UC Berkeley Graduate Council Lectures, Apr 2007
Daniel Kahneman on the riddle of experience vs. memory
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
On risk perception: What You Don’t Know Can Kill You
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Map–territory relation - a brief résumé, Lapidarium
The Relativity of Truth - a brief résumé, Lapidarium
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
☞ Michael Lewis, The King of Human Error, Vanity Fair, Dec 2011.
☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity, May, 2012

Aug
7th
Sun
permalink

The relativity of now


source