Lapidarium notes RSS

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Homepage
Twitter
Facebook

A Box Of Stories
Reading Space

Contact

Archive

Oct
4th
Tue
permalink

Richard Feynman on Beauty, Honours and Curiosity



Richard Feynman, American physicist known for his work on the path integral formulation of quantum mechanics and recipient of the Nobel Prize in Physics.
The Feynman Series is a companion project to The Sagan Series (full playlist)

See also:

Richard Feynman and Jirayr Zorthian on science, art and beauty
Richard Feynman on the likelihood of Flying Saucers
Richard Feynman on how we would look for a new law (the key to science)
Richard Feynman on the way nature works: “You don’t like it? Go somewhere else!”

Sep
30th
Fri
permalink

Why Does Beauty Exist? Jonah Lehrer: ‘Beauty is a particularly potent and intense form of curiosity’

Interwoven Beauty by John Lautermilch

Curiosity

"Here’s my (extremely speculative) theory: Beauty is a particularly potent and intense form of curiosity. It’s a learning signal urging us to keep on paying attention, an emotional reminder that there’s something here worth figuring out. Art hijacks this ancient instinct: If we’re looking at a Rothko, that twinge of beauty in the mOFC is telling us that this painting isn’t just a blob of color; if we’re listening to a Beethoven symphony, the feeling of beauty keeps us fixated on the notes, trying to find the underlying pattern; if we’re reading a poem, a particularly beautiful line slows down our reading, so that we might pause and figure out what the line actually means. Put another way, beauty is a motivational force that helps modulate conscious awareness. The problem beauty solves is the problem of trying to figure out which sensations are worth making sense of and which ones can be easily ignored.

Let’s begin with the neuroscience of curiosity, that weak form of beauty. There’s an interesting recent study from the lab of Colin Camerer at Caltech, led by Min Jeong Kang. (…)

The first thing the scientists discovered is that curiosity obeys an inverted U-shaped curve, so that we’re most curious when we know a little about a subject (our curiosity has been piqued) but not too much (we’re still uncertain about the answer). This supports the information gap theory of curiosity, which was first developed by George Loewenstein of Carnegie-Mellon in the early 90s. According to Loewenstein, curiosity is rather simple: It comes when we feel a gap “between what we know and what we want to know”. This gap has emotional consequences: it feels like a mental itch. We seek out new knowledge because that’s how we scratch the itch.

The fMRI data nicely extended this information gap model of curiosity. It turns out that, in the moments after the question was first asked, subjects showed a substantial increase in brain activity in three separate areas: the left caudate, the prefrontal cortex and the parahippocampal gyri. The most interesting finding is the activation of the caudate, which seems to sit at the intersection of new knowledge and positive emotions. (For instance, the caudate has been shown to be activated by various kinds of learning that involve feedback, while it’s also been closely linked to various parts of the dopamine reward pathway.) The lesson is that our desire for more information – the cause of curiosity – begins as a dopaminergic craving, rooted in the same primal pathway that responds to sex, drugs and rock and roll.

I see beauty as a form of curiosity that exists in response to sensation, and not just information. It’s what happens when we see something and, even though we can’t explain why, want to see more. But here’s the interesting bit: the hook of beauty, like the hook of curiosity, is a response to an incompleteness. It’s what happens when we sense something missing, when there’s an unresolved gap, when a pattern is almost there, but not quite. I’m thinking here of that wise Leonard Cohen line: “There’s a crack in everything – that’s how the light gets in.” Well, a beautiful thing has been cracked in just the right way.

Beautiful music and the brain

The best way to reveal the link between curiosity and beauty is with music. Why do we perceive certain musical sounds as beautiful? On the one hand, music is a purely abstract art form, devoid of language or explicit ideas. The stories it tells are all subtlety and subtext; there is no content to get curious about. And yet, even though music says little, it still manages to touch us deep, to titillate some universal dorsal hairs.

We can now begin to understand where these feelings come from, why a mass of vibrating air hurtling through space can trigger such intense perceptions of beauty. Consider this recent paper in Nature Neuroscience by a team of Montreal researchers. (…)

Because the scientists were combining methodologies (PET and fMRI) they were able to obtain a precise portrait of music in the brain. The first thing they discovered (using ligand-based PET) is that beautiful music triggers the release of dopamine in both the dorsal and ventral striatum. This isn’t particularly surprising: these regions have long been associated with the response to pleasurable stimuli. The more interesting finding emerged from a close study of the timing of this response, as the scientists looked to see what was happening in the seconds before the subjects got the chills.
I won’t go into the precise neural correlates – let’s just say that you should thank your right nucleus accumbens the next time you listen to your favorite song – but I want instead to focus on an interesting distinction observed in the experiment:



In essence, the scientists found that our favorite moments in the music – those sublimely beautiful bits that give us the chills – were preceded by a prolonged increase of activity in the caudate, the same brain area involved in curiosity. They call this the “anticipatory phase,” as we await the arrival of our favorite part:

Immediately before the climax of emotional responses there was evidence for relatively greater dopamine activity in the caudate. This subregion of the striatum is interconnected with sensory, motor and associative regions of the brain and has been typically implicated in learning of stimulus-response associations and in mediating the reinforcing qualities of rewarding stimuli such as food.

In other words, the abstract pitches have become a primal reward cue, the cultural equivalent of a bell that makes us drool. Here is their summary:

The anticipatory phase, set off by temporal cues signaling that a potentially pleasurable auditory sequence is coming, can trigger expectations of euphoric emotional states and create a sense of wanting and reward prediction. This reward is entirely abstract and may involve such factors as suspended expectations and a sense of resolution. Indeed, composers and performers frequently take advantage of such phenomena, and manipulate emotional arousal by violating expectations in certain ways or by delaying the predicted outcome (for example, by inserting unexpected notes or slowing tempo) before the resolution to heighten the motivation for completion.

(…)

While music can often seem (at least to the outsider) like an intricate pattern of pitches – it’s art at its most mathematical – it turns out that the most important part of every song or symphony is when the patterns break down, when the sound becomes unpredictable. If the music is too obvious, it is annoyingly boring, like an alarm clock. (Numerous studies, after all, have demonstrated that dopamine neurons quickly adapt to predictable rewards. If we know what’s going to happen next, then we don’t get excited.) This is why composers introduce the tonic note in the beginning of the song and then studiously avoid it until the end. They want to make us curious, to create a beautiful gap between what we hear and what we want to hear.

To demonstrate this psychological principle, the musicologist Leonard Meyer, in his classic book Emotion and Meaning in Music (1956), analyzed the 5th movement of Beethoven’s String Quartet in C-sharp minor, Op. 131. Meyer wanted to show how music is defined by its flirtation with – but not submission to – our expectations of order. To prove his point, Meyer dissected fifty measures of Beethoven’s masterpiece, showing how Beethoven begins with the clear statement of a rhythmic and harmonic pattern and then, in an intricate tonal dance, carefully avoids repeating it. What Beethoven does instead is suggest variations of the pattern. He is its evasive shadow. If E major is the tonic, Beethoven will play incomplete versions of the E major chord, always careful to avoid its straight expression. He wants to preserve an element of uncertainty in his music, making our brains exceedingly curious for the one chord he refuses to give us. Beethoven saves that chord for the end.

According to Meyer, it is the suspenseful tension of music (arising out of our unfulfilled expectations) that is the source of the music’s beauty. While earlier theories of music focused on the way a noise can refer to the real world of images and experiences (its “connotative” meaning), Meyer argued that the emotions we find in music come from the unfolding events of the music itself. This “embodied meaning” arises from the patterns the symphony invokes and then ignores, from the ambiguity it creates inside its own form. “For the human mind,” Meyer writes, “such states of doubt and confusion are abhorrent. When confronted with them, the mind attempts to resolve them into clarity and certainty.” And so we wait, expectantly, for the resolution of E major, for Beethoven’s established pattern to be completed. This nervous anticipation, says Meyer, “is the whole raison d’etre of the passage, for its purpose is precisely to delay the cadence in the tonic.” The uncertainty – that crack in the melody – makes the feeling.

Why the feeling of beauty is useful

What I like about this speculation is that it begins to explain why the feeling of beauty is useful. The aesthetic emotion might have begun as a cognitive signal telling us to keep on looking, because there is a pattern here that we can figure out. In other words, it’s a sort of metacognitive hunch, a response to complexity that isn’t incomprehensible. Although we can’t quite decipher this sensation – and it doesn’t matter if the sensation is a painting or a symphony – the beauty keeps us from looking away, tickling those dopaminergic neurons and dorsal hairs. Like curiosity, beauty is a motivational force, an emotional reaction not to the perfect or the complete, but to the imperfect and incomplete. We know just enough to know that we want to know more; there is something here, we just don’t know what. That’s why we call it beautiful.”

Jonah Lehrer, American journalist who writes on the topics of psychology, neuroscience, and the relationship between science and the humanities, Why Does Beauty Exist?, Wired science, July 18, 2011

See also:

Beauty is in the medial orbitofrontal cortex of the beholder, study finds
Denis Dutton: A Darwinian theory of beauty, TED, Lapidarium transcript
The Science of Art. A Neurological Theory of Aesthetic Experience
☞ Katherine Harmon, Brain on Beauty Shows the Same Pattern for Art and Music, Scientific American, July 7, 2011

Sep
8th
Thu
permalink

Curiosity as a mechanism for achieving and maintaining high levels of well-being and meaning in life

"A primary interest [of this study] was whether people high in trait curiosity derive greater well-being on days when they are more curious. We also tested whether trait and daily curiosity led to greater, sustainable well-being. Predictions were tested using trait measures and 21 daily diary reports from 97 college students.

We found that on days when they are more curious, people high in trait curiosity reported more frequent growth-oriented behaviors, and greater presence of meaning, search for meaning, and life satisfaction. Greater trait curiosity and greater curiosity on a given day also predicted greater persistence of meaning in life from one day into the next. People with greater trait curiosity reported more frequent hedonistic events, but these events were associated with less pleasure compared to the experiences of people with less trait curiosity.

The benefits of hedonistic events did not last beyond the day of their occurrence. As evidence of construct specificity, curiosity effects were not attributable to Big Five personality traits or daily positive or negative mood. Our results provide support for curiosity as an ingredient in the development of well-being and meaning in life.”

Todd B. Kashdan, Ph.D., Associate Professor of Psychology at George Mason University, and Michael F. Steger, Assistant Professor in the Counseling Psychology and Applied Social Psychology programs at Colorado State University, Curiosity and pathways to well-being and meaning in life: Traits, states, and everyday behaviors, SpringerLink, Volume 31, Number 3, 159-173

See also:

☞ T. B. Kashdan, P. Rose, and F. D. Fincham, Curiosity and Exploration: Facilitating Positive Subjective Experiences and Personal Growth Opportunities (pdf), Department of Psychology State University of New York at Buffalo, 2004
Jonah Lehrer on the itch of Curiosity
Our brains are hardwired to fear creativity, Cornell University
Curiosity tag on Lapidarium

Sep
2nd
Fri
permalink

Vagabonding - the act of leaving behind the orderly world to travel independently for an extended period of time

Rolf Potts, Vagabonding

"Vagabonding involves taking an extended time-out from your normal life - six weeks, four months, two years - to travel the world on your own terms.

But beyond travel, vagabonding is an outlook on life. Vagabonding is about using the prosperity and possibility of the information age to increase your personal options instead of your personal possessions. Vagabonding is about looking for adventure in normal life, and normal life within adventure. Vagabonding is an attitude - a friendly interest in people, places, and things that makes a person an explorer in the truest, most vivid sense of the word. (…)

It’s just an uncommon way of looking at life - a value adjustment from which action naturally follows. And, as much as anything, vagabonding is about time - our only real commodity - and how we choose to use it. (…)

Vagabonding starts now. Even if the practical reality of travel is still months or years away, vagabonding begins the moment you stop making excuses, start saving money, and begin to look at maps with the narcotic tingle of possibility. From here, the reality of vagabonding comes into sharper focus as you adjust your worldview and begin to embrace the exhilarating uncertainty that true travel promises.”

Rolf Potts, travel writer, Vagabonding, Villard Books, 2002

From this hour I ordain myself loos’d of limits and imaginary lines,
Going where I list, my own master total and absolute,
Listening to others, considering well what they say,
Pausing, searching, receiving, contemplating,
Gently, but with undeniable will divesting myself of the holds that would hold me.

Walt Whitman, Song of the Open Road

"Journeys are the midwives of thought. Few places are more conducive to internal conversations than a moving plane, ship or train. There is an almost quaint correlation between what is in front of our eyes and the thoughts we are able to have in our heads: large thoughts at times requiring large views, new thoughts new places. Introspective reflections which are liable to stall are helped along by the flow of the landscape. The mind may be reluctant to think properly when thinking is all it is supposed to do.

At the end of hours of train-dreaming, we may feel we have been returned to ourselves - that is, brought back into contact with emotions and ideas of importance to us. It is not necessarily at home that we best encounter our true selves. (…)

Instead of bringing back 1600 plants, we might return from our journeys with a collection of small unfêted but life-enhancing thoughts.”

Alain de Botton, The Art of Travel, Pantheon, 2002

Rolf Potts: Time = wealth



His zeitgeist-defining book is not just about how to travel the world on a shoestring but, more importantly, about the mindset you need to take with you.

Rolf Potts, travel writer, whose travel-advice book Vagabonding has been translated into several foreign languages and is in its tenth printing. In 2010, he traveled around the world in six weeks with no luggage or bags of any kind. Time = wealth, Do Lectures

See also:

Stefan Sagmeister, The power of time off, TED.com, July 2009
Rolf Potts on Vagabonding, Authors@Google (video) , Aug 2007
Pico Iyer, Why We Travel, World Hum, 27 Apr 2009
MOVE (video) - 3 guys, 44 days, 11 countries, 18 flights, 38 thousand miles, all to turn 3 ambitious linear concepts based on movement, learning and food into short films, 2011

Jul
30th
Sat
permalink

Stewart Brand: ‘Look At the World Through the Eyes Of A Fool’


Q: Has society become too eager to discard things and ideas?

(…) I think we have become too shortsighted. Everything is moving faster, everybody is multitasking. Investments are made for short-term returns, democracies run on short-term election cycles. Speedy progress is great, but it is also chancy. When everything is moving fast, the future looks like it is next week. But what really counts is the future ten or hundred years from now. And we should also bear in mind that the history that matters is not only yesterday’s news but events from a decade or a century or a millennium ago. To balance that, we want to look at the long term: the last ten thousand years, the next ten thousand years. (…)

When NASA released the first photographs of the earth from space in the 1960s, people changed their frame of reference. We began to think differently about the earth, about our environment, about humanity. (…)

There had been many drawings of the earth from space, just like people made images of cities from above before we had hot-air balloons. But they were all wrong. Usually, images of the earth did not include any clouds, no weather, no climate. They also tended to neglect the shadow that much of the earth is usually in. From most angles, the earth appears as a crescent. Only when the sun is directly behind you would you see the whole planet brightly illuminated against the blackness of space. (…)

The question of framing

I think there is always the question of framing: How do we look at things? The first photos of the earth changed the frame. We began to talk more about “humans” and less about Germans or Americans. We began talking about the planet as a whole. That, in a way, gave us the ability to think about global problems like climate change. We did not have the idea of a global solution before. Climate change is a century-sized problem. Never before has humanity tried to tackle something on such a long temporal scale. Both the large scale and the long timeframe have to be taken seriously.

Q: Do you believe in something like a human identity?

In a way, the ideal breakthrough would be to discover alien life. That would give us a clear sense of our humanity. But even without that, we have done pretty well in stepping outside our usual frame of reference and looking at the planet and at the human race from the outside. That’s nice. I would prefer if we didn’t encounter alien intelligence for a while. (…)

Q: So we have to improve the extrapolations and predictions that we make based on present data sets?

We like to think that we are living in a very violent time, that the future looks dark. But the data says that violence has declined every millennium, every century, every decade. The reduction in cruelty is just astounding. So we should not focus too much on the violence that has marked the twentieth century. The interesting question is how we can continue that trend of decreasing violence into the future. What options are open to us to make the world more peaceful? Those are data-based questions. (…)

Q: When you started to publish the Whole Earth Catalogue in 1968, you said that you wanted to create a database so that “anyone on Earth can pick up a telephone and find out the complete information on anything.” Is that the idea of the internet, before the internet?

Right, I had forgotten about that quote. Isn’t it nice that I didn’t have to go through the work of collecting that information; it just happened organically. Some people say to me that I should revive the catalogue, and my answer is: The internet is better than any catalogue or encyclopedia could ever be. (…)

I don’t think the form determines the triviality of information or the level of discussion. By having much more opportunities and much lower costs of online participation, we are in a position to really expand and improve those discourses. (…)

When Nicholas Negroponte said a few years ago that every child in the world needed a laptop computer, he was right. Many people were skeptical of his idea, but they have been proven wrong. When you give internet access to people in the developing world, they immediately start forming educational networks. They expand their horizons, children teach their parents how to read and write. (…)

Q: On the back cover of the 1974 Whole Earth Catalogue, it said something similar: “Stay hungry, stay foolish”. Why?

It proposes that a beginner’s mind is the way to look at new things. We need a combination of confidence and of curiosity. It is a form of deep-seated opportunism that goes to the core of our nature and is very optimistic. I haven’t been killed by my foolishness yet, so let’s keep going, let’s take chances. The phrase expresses that our knowledge is always incomplete, and that we have to be willing to act on imperfect knowledge. That allows you to open your mind and explore. It means putting aside the explanations provided by social constructs and ideologies.

I really enjoyed your interview with Wade Davis. He makes a persuasive case for allowing native peoples to keep their cultures intact. That’s the idea behind the Rosetta Project as well. Most Americans are limited by the fact that they only speak one language. Being multilingual is a first step to being more aware of different perspectives on the world. We should expand our cognitive reach. I think there are many ways to do that: Embrace the internet. Embrace science. Travel a lot. Learn about people who are unlike yourself. I spent much of my twenties with Native American tribes, for example. You miss a lot of important stuff if you only follow the beaten path. If you look at the world through the eyes of a fool, you will see more. But I probably hadn’t thought about all of this back in 1974. It was a very countercultural move.

Q: In politics, we often talk about policies that supposedly have no rational alternative. Is that a sign of the stifling effects of ideology?

Ideologies are stories we like to tell ourselves. That’s fine, as long as we remember that they are stories and not accurate representations of the world. When the story gets in the way of doing the right thing, there is something wrong with the story. Many ideologies involve the idea of evil: Evil people, evil institutions, et cetera. Marvin Minsky once said to me that the only real evil is the idea of evil. Once you let that go, the problems become manageable. The idea of pragmatism is that you go with the things that work and cast aside lovely and lofty theories. No theory can be coherent and comprehensive enough to provide a direct blueprint for practical actions. That’s the idea of foolishness again: You work with imperfect theories, but you don’t base your life on them.

Q: So “good” is defined in terms of a pragmatic assessment of “what works”?

Good is what creates more life and more options. That’s a useful frame. The opposite of that would not be evil, but less life and fewer options.”

Stewart Brand, American writer, best known as editor of the Whole Earth Catalog, "Look At the World Through the Eyes Of A Fool", The European, 30.05.2011

See also: Whole Earth Catalogue

Jul
18th
Mon
permalink

From Technologist to Philosopher. Why you should quit your technology job and get a Ph.D. in the humanities


"It’s fun being a technologist. In our Internet-enabled era, it is easy for technologists to parlay creative power into societal power: We build systems that ease the transactions of everyday life, and earn social validation that we are "making the world a better place." Within a few years I had achieved more worldly success than previous generations could have imagined. I had a high-paying technology job, I was doing cutting-edge AI work, and I was living the technotopian good life.

But there was a problem. Over time, it became increasingly hard to ignore the fact that the artificial intelligence systems I was building were not actually that intelligent. They could perform well on specific tasks; but they were unable to function when anything changed in their environment. I realized that, while I had set out in AI to build a better thinker, all I had really done was to create a bunch of clever toys—toys that were certainly not up to the task of being our intellectual surrogates. (…)

I wanted to better understand what it was about how we were defining intelligence that was leading us astray: What were we failing to understand about the nature of thought in our attempts to build thinking machines? (…)

I realized that the questions I was asking were philosophical questions—about the nature of thought, the structure of language, the grounds of meaning. So if I really hoped to make major progress in AI, the best place to do this wouldn’t be another AI lab. If I really wanted to build a better thinker, I should go study philosophy. (…)

Thus, about a decade ago, I quit my technology job to get a Ph.D. in philosophy. (…) I was not aware that there existed distinct branches of analytic and continental philosophy, which took radically different approaches to exploring thought and language; or that there was a discipline of rhetoric, or hermeneutics, or literary theory, where thinkers explore different aspects of how we create meaning and make sense of our world.

As I learned about those things, I realized just how limited my technologist view of thought and language was. I learned how the quantifiable, individualistic, ahistorical—that is, computational—view I had of cognition failed to account for whole expanses of cognitive experience (including, say, most of Shakespeare). I learned how pragmatist and contextualist perspectives better reflect the diversity and flexibility of our linguistic practices than do formal language models. I learned how to recognize social influences on inquiry itself—to see the inherited methodologies of science, the implicit power relations expressed in writing—and how those shape our knowledge.

Most striking, I learned that there were historical precedents for exactly the sort of logical oversimplifications that characterized my AI work. Indeed, there were even precedents for my motivation in embarking on such work in the first place. I found those precedents in episodes ranging from ancient times—Plato’s fascination with math-like forms as a source of timeless truth—to the 20th century—the Logical Positivists and their quest to create unambiguous language to express sure foundations for all knowledge. They, too, had an uncritical notion of progress; and they, too, struggled in their attempts to formally quantify human concepts that I now see as inextricably bound up with human concerns and practices.

In learning the limits of my technologist worldview, I didn’t just get a few handy ideas about how to build better AI systems. My studies opened up a new outlook on the world. I would unapologetically characterize it as a personal intellectual transformation: a renewed appreciation for the elements of life that are not scientifically understood or technologically engineered.

In other words: I became a humanist.

And having a more humanistic sensibility has made me a much better technologist than I was before. I no longer see the world through the eyes of a machine—through the filter of what we are capable of reducing to its logical foundations. I am more aware of how the products we build shape the culture we are in. I am more attuned to the ethical implications of our decisions. And I no longer assume that machines can solve all of our problems for us. The task of thinking is still ours. (…)

The technology issues facing us today—issues of identity, communication, privacy, regulation—require a humanistic perspective if we are to deal with them adequately. (…)

I see a humanities degree as nothing less than a rite of passage to intellectual adulthood. A way of evolving from a sophomoric wonderer and critic into a rounded, open, and engaged intellectual citizen. When you are no longer engaged only in optimizing your products—and you let go of the technotopian view—your world becomes larger, richer, more mysterious, more inviting. More human. (…)

Getting a humanities Ph.D. is the most deterministic path you can find to becoming exceptional in the industry. (…) There is an industrywide shift toward more “product thinking” in leadership—leaders who understand the social and cultural contexts in which our technologies are deployed.

Products must appeal to human beings, and a rigorously cultivated humanistic sensibility is a valued asset for this challenge. That is perhaps why a technology leader of the highest status—Steve Jobs—recently credited an appreciation for the liberal arts as key to his company’s tremendous success with their various i-gadgets.

It is a convenient truth: You go into the humanities to pursue your intellectual passion; and it just so happens, as a by-product, that you emerge as a desired commodity for industry. Such is the halo of human flourishing.”

Damon Horowitz, BA in Computer Science from Columbia, an MS from the MIT Media Lab, and a PhD in philosophy from Stanford, recently joined Google as In-House Philosopher / Director of Engineering, From Technologist to Philosopher, The Chronicle of Higher Education, July 17, 2011 (Illustration source: Brian Taylor for The Chronicle)

Why Machines Need People

— Damon Horowitz, Why Machines Need People, TEDxSoMa, 22 Jan 2010

Oct
3rd
Sun
permalink

Domenico Remps, Cabinet of Curiosities, 1690s, Opificio delle Pietre Dure, Florence

Domenico Remps (also Rems) was an Italian painter of German or Flemish origin. He was active in the second half of the 17th century in Venice and was a successful painter of trompe-l’oeil paintings.

This trompe-l’oeil painting representing a cabinet of curiosities blurs the boundary between real and fictitious space.

Trompe-l’oeil, the French term for “eye-deceiver,” is a modern word for an old phenomenon: a three-dimensional “perception” provoked by a flat surface, for a puzzling moment of insecurity and reflection. The early precursors of modern trompe l’oeil appeared during the Renaissance, with the discovery of mathematically correct perspective. But the fooling of the eye to the point of confusion with reality only emerged with the rise of still-life painting in the Netherlands in the 17th century. Though highly esteemed by collectors, from the beginning art theorists often dismissed trompe-l’oeil as the lowest category of art, seeing it as a mere technical tour-de-force that did not require invention or intellectual thought. In the 17th century, trompe-l’oeil masters were not only receiving praise and recognition from many quarters but also pushing the boundaries of the genre. (Source: Web Gallery of Art)

Sep
21st
Tue
permalink

Jonah Lehrer on the itch of Curiosity

(Picture source: Domenico Remps, Cabinet of Curiosities, 1690s, Opificio delle Pietre Dure, Florence)

“The first thing the scientists found is that curiosity obeys an inverted U-shaped curve, so that we’re most curious when we know a little about a subject (our curiosity has been piqued) but not too much (we’re still uncertain about the answer). This supports the information gap theory of curiosity, which was first developed by George Loewenstein of Carnegie Mellon in the early 90s. According to Loewenstein, curiosity is rather simple: It comes when we feel a gap “between what we know and what we want to know”. This gap has emotional consequences: it feels like a mental itch, a mosquito bite on the brain. We seek out new knowledge because that’s how we scratch the itch.

The fMRI data nicely extended this information gap model of curiosity. It turns out that, in the moments after the question was first asked, subjects showed a substantial increase in brain activity in three separate areas: the left caudate, the prefrontal cortex and the parahippocampal gyri. The most interesting finding is the activation of the caudate, which seems to sit at the intersection of new knowledge and positive emotions. (For instance, the caudate has been shown to be activated by various kinds of learning that involve feedback, while it’s also been closely linked to various parts of the dopamine reward pathway.)

The lesson is that our desire for abstract information – this is the cause of curiosity – begins as a dopaminergic craving, rooted in the same primal pathway that also responds to sex, drugs and rock and roll. This reminds me of something Read Montague, a neuroscientist at Baylor College of Medicine, told me a few years ago: “The guy who’s on hunger strike for some political cause is still relying on his midbrain dopamine neurons, just like a monkey getting a sweet treat,” he said. “His brain simply values the cause more than it values dinner…You don’t have to dig very far before it all comes back to your loins.”

The elegance of this system is that it bootstraps a seemingly unique human talent to an ancient mental process. Because curiosity is ultimately an emotion, an inexplicable itch telling us to keep on looking for the answer, it can take advantage of all the evolutionary engineering that went into our dopaminergic midbrain. (Natural selection had already invented an effective motivational system.) When Einstein was curious about the bending of space-time, he wasn’t relying on some newfangled circuitry. Instead, he was using the same basic neural system as a rat in a maze, looking for a pellet of food. I’ll let the scientists have the last word:
"Understanding the neural basis of curiosity has important substantive implications. Note that while information-seeking is generally evolutionarily adaptive, modern technologies magnify the amount of information available, and hence the potential effects of curiosity. Understanding curiosity is also important for selecting and motivating knowledge workers who gather information (such as scientists, detectives, and journalists). The production of engaging news, advertising and entertainment is also, to some extent, an attempt to create curiosity. The fact that curiosity increases with uncertainty (up to a point), suggests that a small amount of knowledge can pique curiosity and prime the hunger for knowledge, much as an olfactory or visual stimulus can prime a hunger for food, which might suggest ways for educators to ignite the wick in the candle of learning.”
Jonah Lehrer, American journalist, The Itch of Curiosity, Wired Science, Aug 3, 2010
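The inverted U-shaped curve Lehrer describes can be sketched numerically. This is a toy illustration of my own, not a model from the study: curiosity is treated as the uncertainty (binary entropy) remaining about a topic, given the fraction of it one already knows. It is lowest at total ignorance and total mastery, and peaks in between.

```python
import math

def curiosity(knowledge: float) -> float:
    """Toy inverted-U model of the information gap: curiosity as the
    binary entropy of `knowledge`, the fraction (0..1) of a topic one
    already knows. Zero at both extremes, maximal at 0.5."""
    if knowledge <= 0.0 or knowledge >= 1.0:
        return 0.0  # nothing piqued, or nothing left to learn
    p = knowledge
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Knowing a little (or nearly everything) piques little curiosity;
# partial knowledge piques the most.
for k in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(f"knowledge={k:.2f}  curiosity={curiosity(k):.3f}")
```

The entropy function is just a convenient stand-in for any curve that rises with uncertainty and falls again once the answer is nearly known; Loewenstein’s theory does not prescribe a specific functional form.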
May
30th
Sun
permalink

A novel theory explains why the brain craves information and seeks it through the senses

(Illustration above: Droodles illustrate the notion that the pleasure of a visual pattern depends on its interpretability. By itself, a droodle is a simple pattern that does not elicit much of a response, but reading the droodle’s caption makes it amusing because the reader activates associations that reconcile what is otherwise a meaningless pattern. Captions for these droodles by artist Roger Price might read as follows: a, four elephants sniffing an orange; b, an early bird catching a very strong worm; and c, a man in a mailbox signaling a left turn. (Tallfellow Press))

Human beings are designed to be “infovores”. It’s a craving that begins with a simple preference for certain types of stimuli, then proceeds to more sophisticated levels of perception and cognition that draw on associations the brain makes with previous experiences. (…)

The infovore system is designed to maximize the rate at which people acquire knowledge under conditions where there may be no immediate need for information. (…) Even if there is no direct use of the new information, there is, in evolutionary terms, adaptive value to its acquisition. (…) If infovore behavior is so valuable to our species, one would expect the brain to have cellular and molecular mechanisms that encourage the acquisition of information. (…)

There is ample evidence that the brain is wired for pleasure. Indeed, human beings have been searching for chemical substances to stimulate these neural systems for thousands of years. Among the most rewarding substances ever discovered are compounds derived from the opium poppy. (…) opiates target certain molecular receptors located on the surfaces of brain cells. When opiates bind to these opioid receptors, they modulate the activity of the cells. (…) Mu-opioid receptors are generally localized to parts of the central nervous system that are implicated in the modulation of pain and reward. (…)

Why would a mechanism that’s involved in reducing pain and providing reward be associated with a system concerned with processing visual information? Furthermore, why is the greatest density of receptors found in the parahippocampal cortex, a so-called association area? We think that the mu-opioid receptors are the key to the pleasures we derive from acquiring new information. (…)

Our preliminary work suggests that the quest for knowledge can never be sated – as long as mu-opioid receptors remain unbound in the human brain.”
Irving Biederman & E.A. Vessel, Perceptual Pleasure and the Brain (pdf), American Scientist, Volume 94, 2006.