Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Physics
Poetry
Politics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Archive

Oct 28th, Mon

Douglas Hofstadter: The Man Who Would Teach Machines to Think

"All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.”

Douglas Hofstadter, 1979, cited in Vinod K. Wadhawan, Complexity Explained: 17. Epilogue, Nirmukta, 04 April 2010.
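An aside for programmers: of the limitative theorems Hofstadter lists, Turing's Halting Theorem compresses most readily into code. A minimal Python sketch of the diagonal argument (the function halts is a hypothetical oracle, named here only for illustration; the theorem is precisely that no such function can be written):

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) would terminate.
        No such total function can exist; it appears here only to set up
        the contradiction."""
        raise NotImplementedError

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about running
        # `program` on its own source.
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        else:
            return           # predicted to loop, so halt at once

    # diagonal(diagonal) halts exactly when halts() says it does not:
    # a system rich enough to represent itself cannot represent itself
    # totally, the "kiss of death" of the quotation above.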

Image: M. C. Escher, Print Gallery. Hofstadter calls this Escher work a “pictorial parable for Gödel’s Incompleteness Theorem.” Why? Look to the center of the painting: is there any logical way to complete it? — source

"On [Douglas] Hofstadter's office wall is a somewhat tattered reproduction of one of his favorite mind-twisting M. C. Escher prints, The Print Gallery.” In it, a boy stands looking at a print depicting a town where a woman looks down from her window at the roof of a gallery in which - yes - the boy stands looking at the print. We appreciate the paradox without being thrown by it, because we are outside looking in. Something like that creates our own unfathomable feelings of self. The self, subject and object, perceiver and perceived, is everywhere in the paradox.

It is a “circling back,” the Tortoise tells Achilles, “of a complex representation of the system together with its representations of all the rest of the world.”

“It is just so hard, emotionally,” Achilles tells the Tortoise, “to acknowledge that a ‘soul’ emerges from so physical a system.” (…)

But philosophers like [Daniel] Dennett believe, with Hofstadter, that scientific answers can be found without cheating by reducing the question to a manageable scale. (…) [T]he danger of looking only at the lowest biological level is in losing sight of the essential humanity that, in Hofstadter’s view, exists in the pattern and in the paradox. “There seems to be no alternative to accepting some sort of incomprehensible quality to existence,” as Hofstadter puts it. “Take your pick.”

James Gleick, on Douglas R. Hofstadter in Exploring the Labyrinth of the Mind, The New York Times, August 21, 1983.

"In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” (…)

“Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes” and a young man’s style as “hipsterish” and on and on ceaselessly throughout your day. That’s what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build “computer models of the fundamental mechanisms of thought.”

“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.

“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.” (…)

[Hofstadter] spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e‑mails there. (Hofstadter spends four hours a day writing e‑mail. “To me,” he has said, “an e‑mail is identical to a letter, every bit as formal, as refined, as carefully written … I rewrite, rewrite, rewrite, rewrite all of my e‑mails, always.”) He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It’s like a museum for his binges, a scene out of a brainy episode of Hoarders.

“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.” (…)

He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study.

For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.” Correct speech isn’t very interesting; it’s like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is “a tip of the rabbit’s ear … a hint of a trap door.”

As the wind tunnel was to the Wright brothers, so the computer is to FARG [Hofstadter’s Fluid Analogies Research Group]. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter’s view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. “I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds,” Hofstadter has written, “is by modeling mental processes on computers and learning from the models’ inevitable failures.” (…)

But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.

The modern era of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.

It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”

Machine learning

The “expert systems” that had once been the field’s meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there’s a rule broken.

If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.

The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain’s complexity. So what you do instead is start with a machine so simple, it almost doesn’t work: a machine, say, that randomly spits out French words for the English words it’s given.

Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that shot comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French?

It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide [IBM’s pioneering statistical-translation system], for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you’ve entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn’t quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you’ll get a feel for the knobs, and you’ll be able to produce the correct French equivalent of your English sentence.

By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that. (…)
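The knob-tuning Somers describes can be sketched in a few lines. What follows is a toy, hedged version of IBM-style word-translation training (expectation-maximization over a three-sentence corpus invented for illustration); it is not Candide's actual code, but it shows how "jiggling the knobs" against known sentence pairs gradually calibrates the machine:

    from collections import defaultdict

    # Invented toy corpus; Candide used 2.2 million real sentence pairs.
    corpus = [
        ("the house".split(), "la maison".split()),
        ("the car".split(),   "la voiture".split()),
        ("a house".split(),   "une maison".split()),
    ]

    english_vocab = {e for en, _ in corpus for e in en}

    # The "knobs": t[f][e] approximates P(French word f | English word e),
    # started uniform, i.e. a machine so simple it almost doesn't work.
    t = defaultdict(lambda: defaultdict(lambda: 1.0 / len(english_vocab)))

    for _ in range(20):                          # feed, try, jiggle; repeat
        counts = defaultdict(lambda: defaultdict(float))
        for en, fr in corpus:
            for f in fr:
                norm = sum(t[f][e] for e in en)
                for e in en:                     # fractional credit for f
                    counts[e][f] += t[f][e] / norm
        for e in counts:                         # re-set each knob from the
            total = sum(counts[e].values())      # evidence just collected
            for f in counts[e]:
                t[f][e] = counts[e][f] / total

    print(max(t["maison"], key=t["maison"].get))  # -> 'house'

Twenty passes over three sentences are enough for the knob linking "maison" to "house" to dominate; at Candide's scale, the same loop yields usable translation tables without anyone programming the rules explicitly.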

Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a “knowledge graph” that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work. (…)

Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called “Why Is J. D. Salinger’s The Catcher in the Rye a Great Novel?” He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn’t like Holden—they thought he was a whiner—Hofstadter explained that “they may not recognize his vulnerability.” You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. “I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.” (…)

“Ars longa, vita brevis,” Hofstadter likes to say. “I just figure that life is short. I work, I don’t try to publicize. I don’t try to fight.”

There’s an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. “Not a soul,” Hofstadter says. “Einstein was completely alone in his belief in the existence of light as particles—for 18 years.

“That must have been very lonely.”

— James Somers, The Man Who Would Teach Machines to Think, The Atlantic, Oct 23, 2013 (click the title to read the full article).

Douglas Hofstadter is an American professor of cognitive science whose research focuses on the sense of “I”, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979, which won both the Pulitzer Prize for general non-fiction and the National Book Award for Science.

See also:

The Mathematical Art of M.C. Escher, Lapidarium notes

Mar 30th, Wed

Kevin Kelly on the Satisfaction Paradox

“What if you lived in a world where everything around you was just what you wanted? And there was tons of it. How would you make a choice since all of it — 100% — was just what you liked?

What if you lived in a world where every great movie, book, song that was ever produced was at your fingertips as if “for free”, and your filters and friends had weeded out the junk, the trash, and anything that would remotely bore you. The only choices would be the absolute cream of the cream, the things your best friend would recommend. What would you watch or read or listen to next?

What if you lived in a miraculous world where the only works you ever saw were ones you absolutely loved, including the ones that were randomly thrown in? In other words, you could only watch things perfectly matched to you at that moment. But the problem is that in this world there are a thousand times as many works as you have time in your long life to see. How would you choose? Or would you? (…)

The paradox is that not-choosing may not be satisfying!

We may need to make choices in order to be satisfied, even if those choices lead to less than satisfying experiences.
But of course this would be less than optimal satisfaction. Thus, there may be a psychological dilemma or paradox that ultimate satisfaction may ultimately be unsatisfying.

This is the psychological problem of dealing with abundance rather than scarcity. It is not quite the same problem of abundance articulated by the Paradox of Choice, the theory that we find too many choices paralyzing: that if we are given 57 different mustards to choose from at the supermarket, we often leave without choosing any.

The paradox of satisfaction suggests that the tools we employ to increase our satisfaction of choices — filters and recommendations — may be unsatisfying if they diminish the power of our choices. Another way to say this: no system can be absolutely satisfying. (…)

Let’s say that after all is said and done, in the history of the world there are 2,000 theatrical movies, 500 documentaries, 200 TV shows, 100,000 songs, and 10,000 books that I would be crazy about. I don’t have enough time to absorb them all, even if I were a full time fan. But what if our tools could deliver to me only those items to choose from? How would I — or you — choose from those select choices? (…)

I believe that answering this question is what outfits like Amazon will be selling in the future. You will subscribe to Amazon and have access to all the books in the world at a set price. (An individual book you want to read will be as if it were free, because it won’t cost you extra.) The same will be true of movies (Netflix), or music (iTunes or Spotify or Rhapsody). You won’t be purchasing individual works.

Instead you will pay Amazon, or Netflix, or Spotify, or Google for their suggestions of what you should pay attention to next. Amazon won’t be selling books (which are marginally free); they will be selling their recommendations of what to read. You’ll pay the subscription fee in order to get access to their recommendations to the “free” works, which are also available elsewhere. Their recommendations (assuming continual improvements by more collaboration and sharing of highlights, etc.) will be worth more than the individual books. You won’t buy movies; you’ll buy cheap access and pay for personalized recommendations.

The new scarcity is not creative products but satisfaction. And because of the paradox of satisfaction, few people will ever be satisfied.”

Kevin Kelly, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, The Satisfaction Paradox, The Technium, March 2011.

Mar 21st, Mon

Lewis Carroll and psychoanalysis: Why nothing adds up in Wonderland

Lewis Carroll’s insight into meaning and interpretation remains of key interest to psychoanalysts intent on hearing all that he had to say about psychic life. (…)

What sparked their admiration, the psychoanalyst Jacques Lacan explained (1966), was Carroll’s interest in “all kinds of truths – ones that are certain even if not self-evident”. The truth apparently snared in Carroll’s fiction is that our culture adopts rules that can seem absurd, even ridiculous, when seen too close and interpreted too literally. And while a lot of fiction strives quite diligently to imitate those rules, Carroll joined iconoclasts such as Jonathan Swift in upending them, to cast a wry light on their sometimes ludicrous foundations. (…) [Lane takes up] the ensuing paradox about meaning and nonsense, to assess what it might teach Alice and her reader as they meditate on Wonderland. (…)

What is Carroll’s nonsense about and what is its overall effect? (…)

Carroll advanced an approach to subjectivity that has much in common with psychoanalysis, given their shared interest in ontology and the limits of meaning. The Alice stories “manage to have such a hold” on readers, he declared, because they touch on “the most pure network of our condition of being: the symbolic, the imaginary, and the real.” In its commitment to analyzing all three registers, moreover, “psychoanalysis is in the best position to explain the effect” of such fiction on readers, including how and why Alice’s madcap adventures in Wonderland “won over the entire world.”

Interest in the most nonsensical aspects of our culture led Lacan to rethink an argument previously put forward by the Surrealist André Breton – that Carroll had used nonsense as a “vital solution to the deep contradiction between an acceptance of madness and the exercise of reason.” - To Breton, Carroll was the Surrealists’ first “master in the school of truancy,” because he offset the “poetic order” with the madness – even the supposed tyranny – of rationalism. - Rather than simply repeating that line, however, which downplays much of the interest and originality of Carroll’s creativity and thinking, Lacan’s tribute aimed at something more: He wanted to rescue Carroll’s insight into the way human beings are compelled to adapt to broader cultural demands. As Lacan put it, almost pitting his reading against generations of devoted readers seeking only innocent pleasure from the Alice stories, Wonderland generates ‘unease,’ even a type of ‘malaise,’ by revealing how individuals struggle to conform to cultural systems to which they are not especially well suited. (…)

Lacan here predates Gilles Deleuze’s insight, in The Logic of Sense, that Carroll’s nonsense has an internal logic to it, and thus a meaning of its own, which competes with that of standard, everyday sense. Carroll “remains the master and the surveyor of surfaces,” Deleuze later contended. “Surfaces which were taken to be so well-known that nobody was exploring them anymore. On these surfaces, nonetheless, the entire logic of sense is located” (1969, p. 93). (…)

With Carroll the praise that critics frequently bestow on his fiction seems commensurate with its artistry, adventurousness, and semantic intelligence. It is to Carroll that we attribute such outsized flights of fancy as a mad tea party peopled by raucous, acrimonious creatures – almost a mini-society in dissensus. He also gives us philosophically-minded insects imitating classical Athens as they debate the meaning of life; babies that turn into pigs at the drop of a hat; the surreal grin of a cat that floats eerily across the sky; and the queen of a chess game transfigured miraculously into a sheep dressed as a grandmother, before she morphs into a kitten whom Alice asks, in turn, whether it dreamed the whole scenario. (…)

Most of the antics that Carroll relays in Wonderland seem pointedly to flatter Alice into believing that she sees through the many escapades, to what is beyond them – as if she were partly outside the worlds of each novella and thus able to gauge them from a position of relative mastery. From the works themselves, we also learn that the comparison Carroll sets up between Wonderland and the Victorians’ symbolic order is not in the least flattering to the latter. Nor does that comparison – and its associated critique – end with the Alice stories. Both are extended with still greater anxiety in Sylvie and Bruno (1991[1889]), Carroll’s proto-Joycean novel, which styles Fairyland and Outerland as largely interchangeable. As Carroll writes in the novel’s preface, signaling his fascination with psychology and consciousness,

"I have supposed a Human being to be capable of various psychical states, with varying degrees of consciousness, as follows:—

– the ‘eerie’ state, in which, while conscious of actual surroundings, he is also conscious of the presence of Fairies;

– a form of trance, in which, while unconscious of actual surroundings, and apparently asleep, he (i.e. his immaterial essence) migrates to other scenes, in the actual world, or in Fairyland, and is conscious of the presence of Fairies.”

— Lewis Carroll (1977[1896]), Symbolic Logic, William Warren Bartley III, editor. Brighton, UK: Harvester Press.

Three additional criteria convey the novel’s imagined states of being, indicating how seriously Carroll tried to maintain such ontological distinctions. (…)

Art and biography appear to part company over these interpretive dilemmas. For how we interpret the enigmas attached to both of these registers is, as the Alice stories show, central to determining what questions [Alice] and the reader can ask about them. As Lacan put it in the passage cited earlier, Carroll seems to want to “prepare” her for the lesson that “one only ever passes through a door one’s own size” – a statement hinting that an answer can emerge only after one has discovered the question attached to it. Approach such a portal from the wrong direction, with the wrong premise or at the wrong time, and awareness of it – much less passage through it – is unlikely. The idea is rather like that of Wonderland itself, in which much happens the wrong way round, playing havoc with cause and effect, meaning and intention, inference and interpretation. Alice has to shrink or expand to enter a different ontological realm. She has to adapt to circumstances, and does so sometimes with relative ease, at other times with intense difficulty.

One of the questions Carroll implicitly poses at such moments is whether interpretation can decipher “the most pure network of our condition of being: the symbolic, the imaginary, and the real.” The matter bears heavily on psychoanalysis, Lacan averred, given its interest in the psychical patterns and distortions that magnify suffering, stoke unease, and prevent mourning. In Wonderland, as in Outerland, those distortions persist not just because both realms are thoroughly imbued with nonsense, but also because investigation into both novellas enables but does not end interpretation. In Through the Looking-Glass, for instance, in a significant metafictional moment, Humpty Dumpty adopts an interpretive code that is comically incapable of addressing what other characters say and mean. As he declares: “When I use a word, it means just what I choose it to mean – neither more nor less … The question is … which is to be master”.

A successful outcome to such attempted mastery is of course as elusive to Humpty Dumpty as it is to other figures in Wonderland. Oblivious, however, he veers down another idiosyncratic track: how words assume – then seem almost to contain – a life of their own. Carroll himself dubs a few of them ‘portmanteau’ words, capturing the idea that meaning is almost literally encased in them. (…)

Carroll’s fiction most often focuses on the play and limits of meaning across semantic and ontological registers. As the narrator observes in Sylvie and Bruno, almost doffing his hat at the myriad philosophical and metafictional questions that ensue: “‘Either I’ve been dreaming about Sylvie,’ I said to myself, ‘and this is the reality. Or else I’ve really been with Sylvie, and this is a dream! Is Life itself a dream, I wonder?’” (…)

Carroll’s artistic and intellectual games render that language by such idiosyncratic signifiers as ‘Boojum,’ ‘Snark,’ and ‘slithy toves.’ Not all such neologisms are nonsensical. ‘Chortled,’ another Carrollian coinage, has since entered our language as a delightful verb. But the underside to this inventiveness is worth underlining because critics have found it easy to minimize: The ‘vertigo’ that ensues from Carroll’s model dramatizes a difficulty for Alice – and her reader – in adapting to the peculiar world of language and symbols. That is because the rules and rituals governing her world seem both whimsical and arbitrarily enforced. They serve as a check on contingency and freedom in Wonderland, while casting the adult world beyond it as authoritarian and almost willfully perverse. Consider the angry Queen of Hearts, whose face explodes with rage the moment others question her capricious, unjust orders. In each instance, her verdicts are a foregone conclusion. (…)

John Tenniel’s illustrations nicely capture this ontological challenge. They emphasize not just the difficulty but also the price of Alice’s attempts at adapting to circumstances. Alice is first too small (see Figure 1), then too big (see Figure 2) for the world she tries to inhabit. She is unprepared for it, yet joins it long after it has established rules and laws with which she struggles to comply.

Carroll here deftly anticipates the radical argument that Lacan would popularize from Sigmund Freud’s Beyond the Pleasure Principle: because of our capacity for reflection and consciousness, we miss the ‘right moment’ of biology and arrive too quickly into a symbolic order that we can grasp and comprehend only quizzically and belatedly. (…)

In all senses, then, nothing quite adds up in Wonderland. None of the creatures in Wonderland easily coexists – each is peevish, irrepressible, and for the most part insistently singular. At the same time, nothingness amounts to an ontological dimension that Carroll and Lacan take very seriously, and with good reason. The patchwork quilt of our symbolic order is, they show, held pincers-like by the real. To confront the limits of the latter – as Alice does repeatedly, with her pointed questions, quirky imagination, preternatural respect for rules, and sometimes whimsical joy in breaking them – is to expose, in the 19th century no less, a rickety structure held together by desire, illusion and force, a volatile combination at the best of times. (…)

The Alice stories reveal both the generative possibilities and the unwelcome distortions of the symbolic order. In refusing to imitate or rationalize the comic pretensions of a system only loosely bound by rules and signifiers, Carroll gives us that world aslant and askew. His oblique perspective underscores the fantasies and psychical effects that exceed symbolization – fantasies that in his fiction come to assume ardent, impossible meaning.”

Christopher J. Lane (British-American literary critic and intellectual historian who is currently the Pearce Miller Research Professor of Literature at Northwestern University), Lewis Carroll and psychoanalysis: Why nothing adds up in wonderland, The International Journal of Psychoanalysis, March 1, 2011. (Illustrations: John Tenniel)

Sep 28th, Tue

Robert Lanza: Does the Past Exist Yet? Evidence Suggests Your Past Isn’t Set in Stone


“Recent discoveries require us to rethink our understanding of history. “The histories of the universe,” said renowned physicist Stephen Hawking, “depend on what is being measured, contrary to the usual idea that the universe has an objective observer-independent history.”

Is it possible we live and die in a world of illusions? Physics tells us that objects exist in a suspended state until observed, when they collapse into just one outcome. Paradoxically, whether events happened in the past may not be determined until sometime in your future — and may even depend on actions that you haven’t taken yet.

In 2002, scientists carried out an amazing experiment, which showed that particles of light (“photons”) knew — in advance — what their distant twins would do in the future. They tested the communication between pairs of photons — whether to be either a wave or a particle. Researchers stretched the distance one of the photons had to take to reach its detector, so that the other photon would hit its own detector first. The photons taking this path already finished their journeys — they either collapse into a particle or don’t before their twin encounters a scrambling device. Somehow, the particles acted on this information before it happened, and across distances instantaneously as if there was no space or time between them. They decided not to become particles before their twin ever encountered the scrambler. It doesn’t matter how we set up the experiment. Our mind and its knowledge is the only thing that determines how they behave. Experiments consistently confirm these observer-dependent effects.

More recently (Science 315, 966, 2007), scientists in France shot photons into an apparatus, and showed that what they did could retroactively change something that had already happened. As the photons passed a fork in the apparatus, they had to decide whether to behave like particles or waves when they hit a beam splitter. Later on - well after the photons passed the fork - the experimenter could randomly switch a second beam splitter on and off. It turns out that what the observer decided at that point determined what the particle actually did at the fork in the past. At that moment, the experimenter chose his history. (…)

But what about dinosaur fossils? Fossils are really no different than anything else in nature. For instance, the carbon atoms in your body are “fossils” created in the heart of exploding supernova stars. Bottom line: reality begins and ends with the observer. “We are participators,” John Wheeler said, “in bringing about something of the universe in the distant past.” Before his death, he stated that when observing light from a quasar, we set up a quantum observation on an enormously large scale. It means, he said, the measurements made on the light now determine the path it took billions of years ago.

Like the light from Wheeler’s quasar, historical events such as who killed JFK, might also depend on events that haven’t occurred yet. There’s enough uncertainty that it could be one person in one set of circumstances, or another person in another. Although JFK was assassinated, you only possess fragments of information about the event. But as you investigate, you collapse more and more reality. According to biocentrism, space and time are relative to the individual observer - we each carry them around like turtles with shells. (…)

History is a biological phenomenon − it’s the logic of what you, the animal observer experiences. You have multiple possible futures, each with a different history like in the Science experiment. Consider the JFK example: say two gunmen shot at JFK, and there was an equal chance one or the other killed him. This would be a situation much like the famous Schrödinger’s cat experiment, in which the cat is both alive and dead − both possibilities exist until you open the box and investigate.

“We must re-think all that we have ever learned about the past, human evolution and the nature of reality, if we are ever to find our true place in the cosmos,” says Constance Hilliard, a historian of science at UNT. Choices you haven’t made yet might determine which of your childhood friends are still alive, or whether your dog got hit by a car yesterday. In fact, you might even collapse realities that determine whether Noah’s Ark sank. “The universe,” said J.B.S. Haldane, “is not only queerer than we suppose, but queerer than we can suppose.”

See also:

Biocentrism
The Experience and Perception of Time, Stanford Encyclopedia of Philosophy
Time tag on Lapidarium
Jun 1st, Tue

The Mathematical Art of M.C. Escher

"For me it remains an open question whether [this work] pertains to the realm of mathematics or to that of art." — M.C. Escher



(Video) — BBC-4, 2005

Douglas R. Hofstadter on M. C. Escher’s drawings

"To my mind, the most beautiful and powerful visual realizations of this notion of Strange Loops exist in the work of the Dutch graphic artist M. C. Escher, who lived from 1902 to 1972. Escher was the creator of some of the most intellectually stimulating drawings of all time. Many of them have their origin in paradox, illusion, or double-meaning.

Mathematicians were among the first admirers of Escher’s drawings, and this is understandable because they often are based on mathematical principles of symmetry or pattern… But there is much more to a typical Escher drawing than just symmetry or pattern; there is often an underlying idea, realized in artistic form. And in particular, the Strange Loop is one of the most recurrent themes in Escher’s work. Look, for example, at the lithograph Waterfall, 1961 (Fig. 5), and compare its six-step endlessly falling loop with the six-step endlessly rising loop of J.S. Bach’s “Canon per Tonos”. The similarity of vision is remarkable. Bach and Escher are playing one single theme in two different “keys”: music and art.

Figure 5, M. C. Escher, Waterfall, 1961

Escher realized Strange Loops in several different ways, and they can be arranged according to the tightness of the loop. The lithograph Ascending and Descending, 1960 (Fig. 6), in which monks trudge forever in loops, is the loosest version, since it involves so many steps before the starting point is regained.

Figure 6, M. C. Escher, Ascending and Descending, 1960

A tighter loop is contained in Waterfall, which, as we already observed, involves only six discrete steps. You may be thinking that there is some ambiguity in the notion of a single “step” – for instance, couldn’t Ascending and Descending be seen just as easily as having four levels (staircases) as forty-five levels (stairs)? It is indeed true that there is an inherent haziness in level-counting, not only in Escher pictures, but in hierarchical, many-level systems. We will sharpen our understanding of this haziness later on.

But let us not get too distracted now! As we tighten our loop, we come to the remarkable Drawing Hands (Fig. 135), in which each of two hands draws the other: a two-step Strange Loop.

Figure 135, M. C. Escher, Drawing Hands

Figure 136, Abstract diagram of M.C. Escher’s Drawing Hands. On top, a seeming paradox. Below, its resolution.

"And yet when I say "strange loop", I have something else in mind — a less concrete, more elusive notion. What I mean by “strange loop” is — here goes a first stab, anyway — not a physical circuit but an abstract loop in which, in the series of stages that constitute the cycling-around, there is a shift from one level of abstraction (or structure) to another, which feels like an upwards movement in a hierarchy, and yet somehow the successive “upward” shifts turn out to give rise to a closed cycle. That is, despite one’s sense of departing ever further from one’s origin, one winds up, to one’s shock, exactly where one had started out. In short, a strange loop is a paradoxical level-crossing feedback loop.” — (D. Hofstadter,  I Am a Strange Loop, p. 101-102)

And finally, the tightest of all Strange Loops is realized in Print Gallery (Fig. 142): a picture of a picture which contains itself. Or is it a picture of a gallery which contains itself? Or of a town which contains itself? Or a young man who contains himself? (…)

Figure 142, M. C. Escher, Print Gallery

Implicit in the concept of Strange Loops is the concept of infinity, since what else is a loop but a way of representing an endless process in a finite way? And infinity plays a large role in many of Escher’s drawings. Copies of one single theme often fit into each other, forming visual analogues to the canons of Bach. Several such patterns can be seen in Escher’s famous print Metamorphosis (Fig. 8). It is a little like the “Endlessly Rising Canon”: wandering further and further from its starting point, it suddenly is back. In the tiled planes of Metamorphosis and other pictures, there are already suggestions of infinity.

Figure 8, M. C. Escher, Metamorphosis II

But wilder visions of infinity appear in other drawings by Escher. In some of his drawings, one single theme can appear on different levels of reality. For instance, one level in a drawing might clearly be recognizable as representing fantasy or imagination; another level would be recognizable as reality. These two levels might be the only explicitly portrayed levels. But the mere presence of these two levels invites the viewer to look upon himself as part of yet another level; and by taking that step, the viewer cannot help getting caught up in Escher’s implied chain of levels, in which, for any one level, there is always another level above it of greater “reality”, and likewise, there is always a level below, “more imaginary” than it is.

This can be mind-boggling in itself. However, what happens if the chain of levels is not linear, but forms a loop? What is real, then, and what is fantasy? The genius of Escher was that he could not only concoct, but actually portray, dozens of half-real, half-mythical worlds, worlds filled with Strange Loops, which he seems to be inviting his viewers to enter.”

Douglas R. Hofstadter, Gödel, Escher, Bach: an Eternal Golden Braid, Penguin Books, 1999, pp. 18-23.
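For readers who think in code, the two-step loop of Drawing Hands has a rough programming analogue: two functions, each of which "draws" the other by printing its source. A playful Python sketch (mine, not Hofstadter's):

    import inspect

    def draw(hand):
        # To "draw" a hand here is to render its complete source code.
        print(inspect.getsource(hand))

    def left_hand():
        draw(right_hand)    # the left hand draws the right...

    def right_hand():
        draw(left_hand)     # ...and the right hand draws the left.

    # Each function contains a full representation of the one that
    # represents it: a two-step Strange Loop. (inspect.getsource needs the
    # functions to be defined in a file, not typed into a bare REPL.)
    left_hand()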

Inspirations: A Short Film Celebrating the Mathematical Art of M.C. Escher

Cristóbal Vila, Inspirations, February 2012

See also:

The Official M.C. Escher Website
The Strange Worlds of M C Escher, Escape Into Life
Mathematical Art of M.C. Escher, Platonic Realms MiniText
Nature by Numbers - a short movie by Cristóbal Vila inspired by numbers, geometry and nature

Apr 17th, Sat

Why do we believe in God? We are religious because we are paranoid | Psychology Today

”(…) Error Management Theory suggests that, in your inference, you can make a “Type I” error of false positive or “Type II” error of false negative, and these two types of error carry vastly different consequences and costs. The cost of a false-positive error is that you become paranoid. You are always looking around and behind your back for predators and enemies that don’t exist. The cost of a false-negative error is that you are dead, being killed by a predator or an enemy when you least expect them. Obviously, it’s better to be paranoid than dead, so evolution should have designed a mind that overinfers personal, animate, and intentional forces even when none exist.

Different theorists call this innate human tendency to commit false-positive errors rather than false-negative errors (and as a consequence be a bit paranoid) “animistic bias” or “the agency-detector mechanism.” These theorists argue that the evolutionary origins of religious beliefs in supernatural forces may have come from such an innate cognitive bias to commit false-positive errors rather than false-negative errors, and thus overinfer personal, intentional, and animate forces behind otherwise perfectly natural phenomena. (…)

In this view, religiosity (the human capacity for belief in supernatural beings) is not an evolved tendency per se; after all, religion in itself is not adaptive. It is instead a byproduct of animistic bias or the agency-detector mechanism, the tendency to be paranoid, which is adaptive because it can save your life. Humans did not evolve to be religious; they evolved to be paranoid. And humans are religious because they are paranoid. (…)”

Satoshi Kanazawa, Why do we believe in God?, Psychology Today, March 28, 2008. (More). See also: Martie G. Haselton and David M. Buss, Error Management Theory: A New Perspective on Biases in Cross-Sex Mind Reading, University of Texas at Austin (pdf)
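The asymmetry Kanazawa describes is just expected cost; a minimal sketch with invented numbers makes the evolutionary arithmetic explicit:

    # Toy expected-cost comparison behind Error Management Theory.
    # All probabilities and costs are invented for illustration.
    p_predator = 0.01      # chance a rustle in the grass is a predator
    cost_fp    = 1.0       # false positive: wasted vigilance (paranoia)
    cost_fn    = 1000.0    # false negative: you are dead

    # "Paranoid" policy: always infer an agent behind the rustle.
    cost_paranoid = (1 - p_predator) * cost_fp      # 0.99

    # "Skeptical" policy: always assume it is just the wind.
    cost_skeptical = p_predator * cost_fn           # 10.0

    # Selection favors paranoia whenever (1 - p) * cost_fp < p * cost_fn,
    # i.e. here for any p above roughly 0.001: a mind that overinfers
    # personal, animate, intentional forces, exactly as described above.
    print(cost_paranoid < cost_skeptical)           # True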

Michael Shermer: The pattern behind self-deception | TED.com

In this video Michael Shermer says the human tendency to believe strange things — from alien abductions to dowsing rods — boils down to two of the brain’s most basic, hard-wired survival skills. He explains what they are, and how they get us into trouble. Michael Shermer debunks myths, superstitions and urban legends, and explains why we believe them. (Source: TED.com, Feb 2010.)

See also: Why people believe in strange things, Lapidarium resume 

Feb 16th, Tue

"Did you happen to meet any soldiers, my dear, as you came through the woods?"
Marshall McLuhan, The Medium is the Massage, Gingko Press, 2001 p. 140-141

"Did you happen to meet any soldiers, my dear, as you came through the woods?"

Marshall McLuhan, The Medium is the Massage, Gingko Press, 2001, pp. 140-141