Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Apr 15th, Sun

How liberal and conservative brains are wired differently. Liberals and conservatives don’t just vote differently, they think differently


"There’s now a large body of evidence showing that those who opt for the political left and those who opt for the political right tend to process information in divergent ways and to differ on any number of psychological traits.

Perhaps most important, liberals consistently score higher on a personality measure called “openness to experience,” one of the “Big Five” personality traits, which are easily assessed through standard questionnaires. That means liberals tend to be the kind of people who want to try new things, including new music, books, restaurants and vacation spots — and new ideas.

“Open people everywhere tend to have more liberal values,” said psychologist Robert McCrae, who conducted voluminous studies on personality while at the National Institute on Aging at the National Institutes of Health.

Conservatives, in contrast, tend to be less open — less exploratory, less in need of change — and more “conscientious,” a trait that indicates they appreciate order and structure in their lives. This gels nicely with the standard definition of conservatism as resistance to change — in the famous words of William F. Buckley Jr., a desire to stand “athwart history, yelling ‘Stop!’ ” (…)

We see the consequences of liberal openness and conservative conscientiousness everywhere — and especially in the political battle over facts. (…)

Compare this with a different irrationality: refusing to admit that humans are a product of evolution, a chief point of denial for the religious right. In a recent poll, just 43 percent of tea party adherents accepted the established science here. Yet unlike the vaccine issue, this denial is anything but new and trendy; it is well over 100 years old. The state of Tennessee is even hearkening back to the days of the Scopes “Monkey” Trial, more than 85 years ago. It just passed a bill that will weaken the teaching of evolution.

Such are some of the probable consequences of openness, or the lack thereof. (…)

Now consider another related trait implicated in our divide over reality: the “need for cognitive closure.” This describes discomfort with uncertainty and a desire to resolve it into a firm belief. Someone with a high need for closure tends to seize on a piece of information that dispels doubt or ambiguity, and then freeze, refusing to consider new information. Those who have this trait can also be expected to spend less time processing information than those who are driven by different motivations, such as achieving accuracy.

A number of studies show that conservatives tend to have a greater need for closure than do liberals, which is precisely what you would expect in light of the strong relationship between liberalism and openness. “The finding is very robust,” explained Arie Kruglanski, a University of Maryland psychologist who has pioneered research in this area and worked to develop a scale for measuring the need for closure.

The trait is assessed based on responses to survey statements such as “I dislike questions which could be answered in many different ways” and “In most social conflicts, I can easily see which side is right and which is wrong.” (…)

Anti-evolutionists have been found to score higher on the need for closure. And in the global-warming debate, tea party followers not only strongly deny the science but also tend to say that they “do not need any more information” about the issue.

I’m not saying that liberals have a monopoly on truth. Of course not. They aren’t always right; but when they’re wrong, they are wrong differently.

When you combine key psychological traits with divergent streams of information from the left and the right, you get a world where there is no truth that we all agree upon. We wield different facts, and hold them close, because we truly experience things differently. (…)”

Chris Mooney, science and political journalist, author of four books, including the New York Times bestselling The Republican War on Science and the forthcoming The Republican Brain: The Science of Why They Deny Science and Reality (April 2012), Liberals and conservatives don’t just vote differently. They think differently, The Washington Post, April 13, 2012. (Illustration: Koren Shadmi for The Washington Post)

See also:

Political science: why rejecting expertise has become a campaign strategy, Lapidarium notes
Cognitive and Social Consequences of the Need for Cognitive Closure, European Review of Social Psychology
☞ Antonio Chirumbolo, The relationship between need for cognitive closure and political orientation: the mediating role of authoritarianism, Department of Social and Developmental Psychology, University of Rome ‘La Sapienza’
Paul Nurse, Stamp out anti-science in US politics, New Scientist, 14 Sept 2011
☞ Chris Mooney, Why Republicans Deny Science: The Quest for a Scientific Explanation, The Huffington Post, Jan 11, 2012
☞ John Allen Paulos, Why Don’t Americans Elect Scientists?, NYTimes, Feb 13, 2012.
Study: Conservatives’ Trust in Science Has Fallen Dramatically Since Mid-1970s, American Sociological Association, March 29, 2012.
Why people believe in strange things, Lapidarium notes

Mar 21st, Wed

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe


Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED magazine said,

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness”.. Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences."  Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone.

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change, and was part of evolution’s epic quest for self awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness as well, through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.


I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of the evolution of information technology, the very character of our minds shifts. McLuhan’s Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on “Turn on, tune in, drop out” and was very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects – it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so… Teilhard observes the correlative of this evolutionary increase in information – and the sudden thresholds it crosses – in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor – a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language, music, etc., on our consciousness, and thereby offer an awareness of our ability to affect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well – meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome through dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes” is that it is often highly reductionistic – it often forgets that ideas have an ecology too; they must be “cultured.” Here, drawing on Lawrence Lessig’s work on the commons, I would argue that the “brain” is a necessary but insufficient “spawning ground” for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann’s 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as “language is loose on planet three”, and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the “word virus” predates the “meme” by at least a decade. Then again this notion of “ideas are real” goes back to cosmologies that begin with the priority of consciousness over matter, as in “In the beginning was the Word, and the Word was with God, and the Word was God.” So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins’s definition of a meme is quite powerful:

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind.” The replicator is human culture; “the vector of transmission is language, and the spawning ground is the brain.”

This notion that “the vector of transmission is language” is very compelling. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change. Kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy a mind feels upon having a transcendent realization that feels as if it advances evolution:

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as “pattern integrities”, Ray Kurzweil says we are “patterns of information”. James Gleick’s new book, The Information, says that “information may be more primary than matter”… What do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely?

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Gregory Chaitin, Andrei Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” – you can’t predict the outcome of even many trivial programs without running the program. So to say that “information may be more primary than matter” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.
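A concrete way to see this point about running programs – an editor’s illustration, not Doyle’s own example – is Wolfram’s Rule 30 cellular automaton, a tiny program whose later rows have no known shortcut: to learn what row n looks like, you essentially have to compute the rows before it. A minimal sketch in Python:

    # Rule 30: each new cell is left XOR (center OR right), on a ring.
    # No known closed form predicts row n without computing all prior
    # rows - the sense in which the information only means when "run."
    def rule30_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    row = [0] * 31
    row[15] = 1  # start from a single "on" cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

Running it prints the familiar chaotic triangle; the point is that running it is, as far as anyone knows, the only way to get it.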

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self transforming information” is just as accurate a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that “What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions, […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as abstraction and take it out of its context – like a quotation or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, dwindling the resources it takes to store or process it. But “information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore the beginning of the “informatic” tradition of biology in which Dawkins speaks here, knew this very well and insisted on the importance of first person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together: Again, John 1:1 is closer to Dawkins’s position here than he may be comfortable with: “In the beginning was the Word, and the Word was with God, and the Word was God” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that words are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… What is the connection? What is the pattern that connects here?

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it – can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we don’t know therefore that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2400 year old organism that is mostly invisible to a very short lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of a plant planet as well as the “mycelium” archetype. One powerful aspect of treating the mycelium as humanity’s archetype is that it is “distributed” – it does not operate via a center of control but through cross connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger scale reality of our dense ecological interconnection. In the book I point to the Upanishads’ “Tat Tvam Asi” as a way of comprehending how we can both be totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality… These notions echo some of Terence McKenna’s ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality?

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological story teller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language!

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insights I mentioned above: that writing changes how we think, and therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”… These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel? Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel?

Richard Doyle: We do nothing but program and reprogram our reality tunnels! Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media – run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the arctic ice sheet – remember that? – to a putatively civilian political structure Eisenhower rightly warned us about: the military industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. In my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity, I called these “rhetorical softwares.”

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” – subjective experience – is all we ever experience, so if we change it we will obviously see a change in what we label “external” reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A mobius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a mobius strip, the “inside” becomes “outside” and the “outside” becomes “inside.”

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps; root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” – our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are ipods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” – the mindset and context of their use – I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry?

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who shared the 1962 Nobel Prize with James Watson and Maurice Wilkins for determining the structure of DNA), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype – suggest that used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the linguistic feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” – recipes of code – while the mainstream view was thinking of it implicitly as semantic, as “words with meaning.”

Ipods, Internet, etc: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis, while I was working on the book, and it was really confirmation of a lot of what I was seeing indicated by my conceptual model of what is going on, which is as follows: Sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” – tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware) and Mark Pesce were or are all psychonauts.

Q: Cyborg anthropologist Amber Case has written about techno-social wormholes… the instant compression of time and space created every time we make a telephone call… What do you make of this compression of time and space made possible by the engineering “magic” of technology?

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception”.) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn it off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall – and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that enable such collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically-induced lucid dream where we can live forever and our world is built out of our memories and “sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgeon more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting that “Tat Tvam Asi” – there is an aspect of you that is connected to Everything – and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “god like”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that “to understand is to perceive patterns”… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns. They help us “see more.” What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in Silicon Valley says that users…

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke’s description of ego death. This is by definition an experience when our maps of the world are humbled. In the breakdown of our ordinary worldview – such as when a (now formerly) secular being such as myself finds himself feeling unmistakably sacred – we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get even an involuntary glimpse of reality. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or koans – suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality – reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!”. Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the Big Bang – he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of the livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it. "..

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.”…  

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stop and start. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence. Living systems, according to an emerging paradigm of Stanley Salthe, Rob Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy, and the universe is seeking to dissipate ever greater amounts of entropy.

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “We are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and I need to look at it with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models – virtual realities – in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names if you suggest otherwise – animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate along with the dependence of many of us – we who are more bacterial than human by dry weight - upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English and Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Jan 19th, Thu

Cognitive scientists develop new take on old problem: why human language has so many words with multiple meanings


“Why did language evolve? While the answer might seem obvious — as a way for individuals to exchange information — linguists and other students of communication have debated this question for years. Many prominent linguists, including MIT’s Noam Chomsky, have argued that language is, in fact, poorly designed for communication. Such a use, they say, is merely a byproduct of a system that probably evolved for other reasons — perhaps for structuring our own private thoughts.

As evidence, these linguists point to the existence of ambiguity: In a system optimized for conveying information between a speaker and a listener, they argue, each word would have just one meaning, eliminating any chance of confusion or misunderstanding. Now, a group of MIT cognitive scientists has turned this idea on its head. In a new theory, they claim that ambiguity actually makes language more efficient, by allowing for the reuse of short, efficient sounds that listeners can easily disambiguate with the help of context.

“Various people have said that ambiguity is a problem for communication,” says Ted Gibson, an MIT professor of cognitive science and senior author of a paper describing the research to appear in the journal Cognition. “But once we understand that context disambiguates, then ambiguity is not a problem — it’s something you can take advantage of, because you can reuse easy [words] in different contexts over and over again.” (…)

What do you ‘mean’?

For a somewhat ironic example of ambiguity, consider the word “mean.” It can mean, of course, to indicate or signify, but it can also refer to an intention or purpose (“I meant to go to the store”); something offensive or nasty; or the mathematical average of a set of numbers. Adding an ‘s’ introduces even more potential definitions: an instrument or method (“a means to an end”), or financial resources (“to live within one’s means”).

But virtually no speaker of English gets confused when he or she hears the word “mean.” That’s because the different senses of the word occur in such different contexts as to allow listeners to infer its meaning nearly automatically.

Given the disambiguating power of context, the researchers hypothesized that languages might harness ambiguity to reuse words — most likely, the easiest words for language processing systems. Building on observation and previous studies, they posited that words with fewer syllables, higher frequency and simpler pronunciations should have the most meanings.

To test this prediction, Piantadosi, Tily and Gibson carried out corpus studies of English, Dutch and German. (In linguistics, a corpus is a large body of samples of language as it is used naturally, which can be used to search for word frequencies or patterns.) By comparing certain properties of words to their numbers of meanings, the researchers confirmed their suspicion that shorter, more frequent words, as well as those that conform to the language’s typical sound patterns, are most likely to be ambiguous — trends that were statistically significant in all three languages.
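
A minimal sketch of the kind of corpus test described above, in Python. The toy lexicon and sense counts are invented for illustration (the study itself used large lexical databases of English, Dutch and German); the point is only the shape of the computation: rank-correlate word length and frequency against number of senses.

```python
# Sketch of the corpus analysis described above. The lexicon below is
# a made-up toy sample, not data from the paper.
# word -> (length in letters, frequency per million words, number of senses)
lexicon = {
    "go":   (2, 900, 10),
    "run":  (3, 450, 8),
    "set":  (3, 400, 9),
    "mean": (4, 300, 6),
    "bank": (4, 250, 4),
    "catamaran": (9, 2, 1),
    "photosynthesis": (14, 3, 1),
    "sesquipedalian": (14, 0.1, 1),
}

def ranks(xs):
    """Rank values (1 = smallest); ties get an arbitrary but stable order."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation, computed as Pearson on the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

lengths = [v[0] for v in lexicon.values()]
freqs   = [v[1] for v in lexicon.values()]
senses  = [v[2] for v in lexicon.values()]

print("length vs senses:   ", round(spearman(lengths, senses), 2))  # negative
print("frequency vs senses:", round(spearman(freqs, senses), 2))    # positive
```

On this toy sample the correlations come out as the paper predicts for real corpora: shorter words carry more senses, and more frequent words carry more senses.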

To understand why ambiguity makes a language more efficient rather than less so, think about the competing desires of the speaker and the listener. The speaker is interested in conveying as much as possible with the fewest possible words, while the listener is aiming to get a complete and specific understanding of what the speaker is trying to say. But as the researchers write, it is “cognitively cheaper” to have the listener infer certain things from the context than to have the speaker spend time on longer and more complicated utterances. The result is a system that skews toward ambiguity, reusing the “easiest” words. Once context is considered, it’s clear that “ambiguity is actually something you would want in the communication system,” Piantadosi says.
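
To make the “cognitively cheaper” claim concrete, here is a back-of-the-envelope version with invented numbers: reusing one short word for two meanings beats giving each meaning its own word, so long as context usually does the disambiguating work.

```python
# Toy cost comparison; all numbers are illustrative assumptions,
# not figures from the study.

# Scheme A: unambiguous lexicon -- each meaning gets its own word.
# Short forms are scarce, so one meaning must take a longer word.
cost_A = 0.5 * 1 + 0.5 * 3   # two equiprobable meanings: 1- and 3-syllable words

# Scheme B: ambiguous lexicon -- both meanings share the 1-syllable word,
# and the listener recovers the intended sense from context whenever
# context is informative about meaning.
p_context_informative = 0.9  # assumed
repair_cost = 2              # extra syllables of clarification when context fails
cost_B = 1 + (1 - p_context_informative) * repair_cost

print(f"unambiguous lexicon: {cost_A:.2f} syllables per use")  # 2.00
print(f"ambiguous lexicon:   {cost_B:.2f} syllables per use")  # 1.20
```

As long as context succeeds often enough, the ambiguous scheme wins, which is exactly the skew toward reusing the “easiest” words that the researchers describe.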


Tom Wasow, a professor of linguistics and philosophy at Stanford University, calls the paper “important and insightful.”

“You would expect that since languages are constantly changing, they would evolve to get rid of ambiguity,” Wasow says. “But if you look at natural languages, they are massively ambiguous: Words have multiple meanings, there are multiple ways to parse strings of words. This paper presents a really rigorous argument as to why that kind of ambiguity is actually functional for communicative purposes, rather than dysfunctional.”

Implications for computer science

The researchers say the statistical nature of their paper reflects a trend in the field of linguistics, which is coming to rely more heavily on information theory and quantitative methods.

“The influence of computer science in linguistics right now is very high,” Gibson says, adding that natural language processing (NLP) is a major goal of those operating at the intersection of the two fields.

Piantadosi points out that ambiguity in natural language poses immense challenges for NLP developers. “Ambiguity is only good for us [as humans] because we have these really sophisticated cognitive mechanisms for disambiguating,” he says. “It’s really difficult to work out the details of what those are, or even some sort of approximation that you could get a computer to use.”

But, as Gibson says, computer scientists have long been aware of this problem. The new study provides a better theoretical and evolutionary explanation of why ambiguity exists, but the same message holds: “Basically, if you have any human language in your input or output, you are stuck with needing context to disambiguate,” he says.”

Emily Finn, The advantage of ambiguity, MIT news, Jan 19, 2012. (Illustration source: 1, 2)

See also:

☞ S. T. Piantadosi, H. Tily, E. Gibson, The communicative function of ambiguity in language (pdf), Department of Brain and Cognitive Sciences, MIT

"We present a general information-theoretic argument that all efficient communication systems will be ambiguous, assuming that context is informative about meaning. We also argue that ambiguity additionally allows for greater ease of processing by allowing efficient linguistic units to be re-used. We test predictions of this theory in English, German, and Dutch. Our results and theoretical analysis suggest that ambiguity is a functional property of language that allows for greater communicative efficiency. (…)

Our results argue for a rational explanation of ambiguity and demonstrate that ambiguity is not mysterious when language is considered as a cognitive system designed in part for communication.”

☞ B. Juba, A. Tauman Kalai, S. Khanna, M. Sudan, Compression without a common prior: an information-theoretic justification for ambiguity in language (pdf), Harvard University, MIT

"Compression is a fundamental goal of both human language and digital communication, yet natural language is very different from compression schemes employed by modern computers. We partly explain this difference using the fact that information theory generally assumes a common prior probability distribution shared by the encoder and decoder, whereas human communication has to be robust to the fact that a speaker and listener may have different prior beliefs about what a speaker may say. We model this information-theoretically using the following question: what type of compression scheme would be effective when the encoder and decoder have (boundedly) different prior probability distributions. The resulting compression scheme resembles natural language to a far greater extent than existing digital communication protocols. We also use information theory to justify why ambiguity is necessary for the purpose of compression."

Language tag on Lapidarium notes

Dec
27th
Tue
permalink

Do thoughts have a language of their own? The language of thought hypothesis

The language of thought drawing by Robert Horvitz

"We dissect nature along lines laid down by our native languages. The categories and types that we isolate from the world of phenomena we do not find there because they stare the observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds-and this means largely by the linguistic systems of our minds.”

Benjamin Lee Whorf, American linguist (1897-1941), 1956, p. 213, cited in Does language determine thought? Boroditsky’s (2001) research on Chinese speakers’ conception of time (pdf)

"The mind thinks its thoughts in ‘Mentalese,’ codes them in the localnatural language, and then transmits them (say, by speaking them out loud) to the hearer. The hearer has a Cryptographer in his head too, of course, who thereupon proceeds to decode the ‘message.’ In this picture, natural language, far from being essential to thought, is merely a vehicle for the communication of thought.”

Hilary Putnam, American philosopher, mathematician and computer scientist, Representation and reality, A Bradford Book, 1991, p. 10-11.

"According to one school of philosophy, our thoughts have a language-like structure that is independent of natural language: this is what students of language call the language of thought (LOT) hypothesis. According to the LOT hypothesis, it is because human thoughts already have a linguistic structure that the emergence of common, natural languages was possible in the first place. (…)

Many - perhaps most - psychologists end up concluding that ordinary people do not use the rules of logic in everyday life.

There is an alternative way of seeing this: that there is a language of thought, and that it has a more logical form than ordinary natural language. This view has an added bonus: it tells us that, if you want to express yourself more clearly and more effectively in natural language, then you should express yourself in a form that is closer to computational logic - and therefore closer to the language of thought. Dry legalese never looked so good.”

Robert Kowalski, British logician and computer scientist, Do thoughts have a language of their own?, New Scientist, 8 Dec 2011

"In philosophy of mind, the language of thought hypothesis (LOTH) put forward by American philosopher Jerry Fodor describes thoughts as represented in a “language” (sometimes known as mentalese) that allows complex thoughts to be built up by combining simpler thoughts in various ways. In its most basic form the theory states that thought follows the same rules as language: thought has syntax.

Using empirical data drawn from linguistics and cognitive science to describe mental representation from a philosophical vantage-point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only ‘remotely plausible’ when expressed as a system of representations that is “tokened” by a linguistic or semantic structure and operated upon by means of a combinatorial syntax. Linguistic tokens used in mental language describe elementary concepts which are operated upon by logical rules establishing causal connections to allow for complex thought. Syntax as well as semantics have a causal effect on the properties of this system of mental representations.

These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate. (…)

Some philosophers have argued that our public language is our mental language, that a person who speaks English thinks in English. Others contend that people who do not know a public language (e.g. babies, aphasics) can think, and that therefore some form of mentalese must be present innately. (…)

Tim Crane, in his book The Mechanical Mind, states that, while he agrees with Fodor, his reason is very different. A logical objection challenges LOTH’s explanation of how sentences in natural languages get their meaning. That is the view that “Snow is white” is TRUE if and only if P is TRUE in the LOT, where P means the same thing in LOT as “Snow is white” means in the natural language. Any symbol manipulation is in need of some way of deriving what those symbols mean. If the meaning of sentences is explained in terms of sentences in the LOT, then sentences in the LOT must get their meaning from somewhere else. There seems to be an infinite regress of sentences getting their meaning. Sentences in natural languages get their meaning from their users (speakers, writers). Therefore sentences in mentalese must get their meaning from the way in which they are used by thinkers, and so on ad infinitum. This regress is often called the homunculus regress.

Daniel Dennett accepts that homunculi may be explained by other homunculi and denies that this would yield an infinite regress of homunculi. Each explanatory homunculus is “stupider” or more basic than the homunculus it explains but this regress is not infinite but bottoms out at a basic level that is so simple that it does not need interpretation. John Searle points out that it still follows that the bottom-level homunculi are manipulating some sorts of symbols.

LOTH implies that the mind has some tacit knowledge of the logical rules of inference and the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning). If LOTH cannot show that the mind knows that it is following the particular set of rules in question then the mind is not computational because it is not governed by computational rules. Also, the apparent incompleteness of this set of rules in explaining behavior is pointed out. Many conscious beings behave in ways that are contrary to the rules of logic. Yet this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not act in accordance with this set of rules.”

Wiki
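
The “combinatorial syntax” described in the excerpt above can be pictured with a small data-structure sketch. This is purely illustrative: LOTH is a philosophical thesis about mental representation, not a claim that the mind runs anything like this code. It only shows what it means for complex thoughts to be built by combining simpler tokens under rules, and for the same tokens to recombine systematically.

```python
# A toy picture of combinatorial mental syntax (illustration, not LOTH itself).
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Concept:
    name: str                   # an elementary mentalese token, e.g. SNOW

@dataclass(frozen=True)
class Predication:
    subject: "Thought"          # complex thoughts compose simpler ones
    predicate: "Thought"

@dataclass(frozen=True)
class Negation:
    thought: "Thought"

Thought = Union[Concept, Predication, Negation]

SNOW, WHITE = Concept("SNOW"), Concept("WHITE")

snow_is_white = Predication(SNOW, WHITE)   # SNOW IS WHITE
its_negation = Negation(snow_is_white)     # NOT (SNOW IS WHITE)

# Systematicity: the same tokens recombine under the same rules, so a
# system that can token one structure can token its permutations too.
white_is_snow = Predication(WHITE, SNOW)
```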

Inner Speech as a Language

"A definition of language is always, implicitly or explicitly, a definition of human beings in the world."

Raymond Williams, Welsh academic, novelist and critic (1921-1988)

"A set (finite or infinite) of sentences, each finite in length and constructed out of a finite set of elements."

Noam Chomsky, American linguist, philosopher, cognitive scientist

"People often talk silently to themselves, engaging in what is called inner speech, internal conversation, inner dialogue, self talk and so on. This seems to be an inherent characteristic of human beings, commented on as early as Plato, who regarded thought as inner speech. The American pragmatists thought the inner dialogue was the defining feature of the self. For them the self is an internal community or network, communicating within itself in a field of meaning.

The idea that ordinary language is the language of thought, however, is not the only linguistic theory of thought. Since Saint Augustine there has been the idea that thought is itself a language of pure abstractions. This “mental language,” as it was called, differs from ordinary language by consisting solely of meanings, i.e. as signifieds without signifiers, to use Saussure’s language (Ashworth 2003). This hypothesis peaked in the writings of William of Occam and declined when Hobbes introduced a purely computational, hedonistic theory of thought (Normore 2005).

A second competitor to the ordinary language theory of thought is the “mentalese” hypothesis of Noam Chomsky (1968) and Jerry Fodor (1975). This approach, which sometimes uses the computer as a metaphor for the mind, resembles the Scholastic’s theory in envisioning a purely abstract language of thought. Whatever processes of ordinary language might accompany it are viewed as epiphenomenal, gloss or what might be called “fluff.” Ordinary language, according to this view, is a pale shadow of the actual language of thought. In addition mentalese is regarded as both innate and unconscious. It is a faculty that is claimed to be present at birth and one which operates below the awareness of the mind.

There are then three language of thought hypotheses, the ordinary language or inner speech version, the now marginalized Augustine-Occam mental language and the computer-based, Chomsky-Fodor theory of mentalese. There seem to be no comparisons of the Scholastic and the mentalese theories except in Panaccio (1992, pp. 267–272). However there is a vigorous debate between the ordinary language theory and that of mentalese (for two collections see Carruthers and Boucher 1998 and Preston 1997). A major weak spot of mentalese is that, being unconscious, there is no empirical way of verifying it. The weak spot of the inner speech approach is that there are several examples of non-linguistic thought, e.g. in infants, animals, brain damaged people and ordinary people under conditions of high speed thought.

Still, all three of these language of thought hypotheses are alive and under discussion in contemporary thought. (…) [p.319]

I will argue that inner speech is even more referential than outer speech in some respects, but also even more differential in other respects. In other words its semantic system is polarized between the differential and the referential.

Considering the peculiarities of inner speech, I think its vocabulary would be more differentially defined, i.e. more “structural”, than outer speech. First let me recall the special qualities of inner speech as silent, elliptical, embedded and egocentric. These qualities make it relatively private, both in the words and their meanings. And these privacy walls push things together, creating links and dependencies among the words.

Let us take the analogy of an intimate relationship, one that has some degree of deviance, with consequent secrecy. The mini culture of the relationship tends, due to secrecy, to be cut off from society at large. This culture gets isolated. There is the relationship time, the place, the transportation, the talk, the rituals, etc. The relationship elements are cut off from the outside world, and they inevitably share in that “relationship” feeling. They also imply each other, causally, sequentially, symbolically, etc. The relationship meanings are defined more differentially than, perhaps, items in a less deviant relationship. It is the privacy that melds things together.

This internal language though is not only solitary and private, it is also much more self-styled than outer language. Ordinary language has a smoothed over or idealized version, which Saussure referred to as language or “langue.” And it also has a more stylized, idiosyncratic version. This is its spoken variety, which Saussure referred to as parole or speech. Parole is more heterogeneous than langue, given that the speaking process reflects the unique mentalities of individuals and sub-cultures.

But by the same logic inner speech is even more individualized and heterogeneous than outer speech. Your spoken or outer speech is somewhat different from mine, and both are different from purified or formalized language. But your inner speech, given its elliptical, embedded and egocentric qualities, is even more different from mine, and both are quite different from the outer langue. In other words the gap between outer langue and inner speech is greater than that between outer langue and outer speech.

The peculiarities of inner speech are so stitched into the psyche, so personality-dependent, that they differ considerably from person to person. This does not seem to be primarily a reference-driven variation, for everyone’s inner speech has roughly the same, generic world of reference. The variation in the internal dialogue is largely due to the personal qualities of the speaker, to that person’s particular ego needs and short cuts.

We are little gods in the world of inner speech. We are the only ones, we run the show, we are the boss. This world is almost a little insane, for it lacks the usual social controls, and we can be as bad or as goofy as we want. On the other hand inner speech does have a job to do, it has to steer us through the world. That function sets up outer limits, even though within those limits we have a free rein to construct this language as we like.

There are similarities to the idealist world view in inner speech. The philosophical idealists, especially Berkeley, reduced the outer world to some version of an inner world. They internalized the external, each doing it somewhat differently, as though it were all a dream. For them all speech would be inner, since there is no outer. And since everything would be radiating from the self, everything would be connected via the self.

The Saussurean theory of linguistic differences [pdf], whether Saussure actually held it or not, is very much like idealistic metaphysics. In both cases everything is dangling from the same string. And some kind of self is pulling the string. The late 19th century British idealists thought all of reality was in relationship, and given that they had only an inner world, they referred to these as “internal relations.”

Saussure used this same phrase, internal relations, to refer to the differences among signifiers and signifieds. And whether he was aligning himself with the idealists or not, there is a similarity between his self-enclosed linguistic world and that of the idealists. It is the denial of reference, of an external world, that underlies this similarity. For Saussure this denial is merely a theoretical move, an “as if” assumption, and not an assertion about the real world. The idealists said there actually was no external world, and Saussure said he would pretend, for methodological reasons, that there was no external world. But regardless of how they get there, they end up in the same place.

If there is no reference, no external world, then the only way language can be defined is internally, by a system of differences. Saussure’s purely differential theory of meaning follows from the loss of the referential. But if there is an external world, even for inner speech, then we are back to the dualistic semantic theory, i.e. to some sort of balance between referential and differential streams.

Although inner speech is not idealism, in some ways it seems to be a more differentially defined universe than outer speech. Linguistic context is even more important than in outer speech. One reason is that meaning is so condensed on the two axes. But a second is that inner language is so pervaded with emotion. We censor our emotions in ordinary interpersonal speech, hiding our fear, our shame, our jealousy, our gloating. It takes a while for little children to learn this, but when they grow up they are all, men and women alike, pretty good at it. Inner speech is another matter, for it is brutally honest. And its emotional life is anything goes. We can scream, whoop and holler to ourselves. Or we can sob on a wailing wall. In fact we probably emote more in inner speech to compensate for the restrictions on outer speech. Emotions pervade large stretches of inner speech, and they heighten the importance of internal relations.

The determinants of meaning in inner speech seem much more stark and unarguable than in outer speech. Inner speech is enclosed within us, and this seems to make it a more dense set of internal relations, both because of the intense privacy and the more spontaneous emotions. In these respects inner speech gives a rich example of Saussure’s differential meaning system.

On the other hand inner speech is also more obviously referential than outer speech. Ordinary speech is quite conventional or arbitrary, and when we say dog or apple pie, the sign has no resemblance to its object. In inner speech, though, the signs are often images of their objects, bearing an iconic or mirroring relation to them. In other words, as mentioned before, there can be a heavy dependency on sensory imagery in forming an internal sentence. (…)

In conclusion Saussure’s theory of semantics works well for some aspects of inner speech and quite poorly for others, i.e. the more referential ones. [signs of external objects, color coordination] (…) On the other hand inner speech is quite different from outer speech, and the Saussurean issues must be handled in special ways. Inner speech is only partially fitting to Saussure’s theories. And new ideas are needed to resolve Saussure’s questions. (…)

Saussure’s binaries were meant to simplify the study of language. The paradigmatic-syntagmatic distinction showed two axes of meaning, and it prepared the way for his differential theory of meaning. The history-systematics distinction was meant to justify the exclusion of history. The speech-language distinction was meant to get rid of speech. And the differential-referential distinction was meant to exclude reference. Saussure’s approach then is largely a pruning device which chopped off many traditional parts of linguistics.

My analysis suggests that this pruning apparatus does not work for inner speech. The two axes are useful but they do not prepare the way for the differential theory of meaning. History cannot be excluded, for it is too important for inner speech. Speech should be restored, and in fact langue applies only weakly to inner speech. And that capstone of Saussure and cultural studies, the differential theory of meaning, does not seem adequate for inner speech. Referential theory is also needed to make sense of its meaning system.

Ethnomethodology

Inner speech then is a distinct variation or dialect of ordinary language, and the characteristics I have pointed out seem to be central to its structure. (…)

Inner speech is quite similar to ethnomethodology in its use of short cuts and normalizing practices. Garfinkel (1967) and Cicourel (1974) discovered ethnomethodology by examining interpersonal or intersubjective communication. A great many economies and condensations of interpersonal conversation are similar to ones we use when we talk to ourselves. If I say to myself “shop on the way home,” this is a condensation of the fairly elaborate shopping list I mentioned earlier, but if I say to my wife “I’ll shop on the way home” she may understand something much like that same, implicit shopping list. In other words we are constantly using “etcetera clauses” to speed up our internal conversations. And, being both communicator and communicatee, we may understand these references even more accurately than we do in social conversations. (…)

The self is also a sort of family gathering with similar problems of maintaining and restoring solidarity. Much inner speech is a kind of Durkheimian self soothing ritual where we try to convince ourselves that everything’s fine, even when it is not. In this way we can comfort ourselves when we are frightened, restore some pride when we are ashamed, or find a silver lining when we are disappointed. Such expressions as “you can do it,” “you’re doing great,” and “this looks harder than it is” give us confidence and energy when the going is tough.

In sum inner speech helps one see the importance of ethnomethods. The fact that we engage in these practices in our deepest privacy shows they are rooted in our psychology as well as in our social life. And the fact that they run parallel in intra- and inter-subjective communication shows them to be a feature of communication as such.

Privacy

In philosophy Wittgenstein provoked a widespread and complex discussion of private language. By this he meant a language that is not only de facto but also inherently private. No one but the private language user would be able to fully understand it, even if the meanings were publicly available. To constitute a private language such a tongue would not need to be completely private. If only a single word or sentence were inherently private, it would qualify as a private language in Wittgenstein’s sense.

It seems to me inner speech is clearly a private language, at least in some of its utterances. This language is so rooted in the unique self that an eavesdropper, could there be one, would not fully understand it. It has so much of one’s person in it, a listener would have to be another you to follow it. And if someone invented a window into consciousness, a mind-reading machine, that could invade one’s privacy, would they be able to understand the, now revealed, inner speech? I think not. They might be able to understand most of the words, but the non-linguistic or imagistic elements would be too much a personal script to follow. If this eavesdropper watched you, including your consciousness, for your whole life, had access to your memory and knew your way of combining non-linguistic representations with words, they might have your code, but this is another way of saying they would be another you. In practical terms inner speech would be inaccessible in its meaning even if it were accessible in its signifying forms.

Of course this semantic privacy does not prevent one from describing one’s own inner speech to another, at least to a substantial extent. Something is lost all right in the translation from first to third person representations. When, in footnote 2, I talked about the inner speech cluster I called “Tom,” I obviously left out some of the affect and all of the sensory imagery. But I was still able to communicate the gist of it, in other words to transform first to third person meanings. So even though this is a private language it can to some extent be made public and used for research purposes.

The importance of private language is that it sheds light on what a human being is. We are inherently private animals, and we become more so the more self-aware and internally communicative we are. This zone of privacy may well be the foundation for the moral (and legal) need people have for privacy. In any case the hidden individuality or uniqueness of each human being is closely related to what the person says to him or her self.

Agency

One of the thorniest problems of the humanities and social sciences is human agency. Humans are the authors of their actions to a great extent, but the way this process works is difficult to understand. I would suggest that inner speech is both the locus and platform for agency.

Charles Sanders Peirce was under the impression that we guide our lives with inner speech. We choose internally in the zone of inner speech, and then we choose externally in the zone of practical action and the outer world. The first choice leads to the second choice. Peirce even thought we could make and break habits by first modelling them in our internal theater. Here we could visualize the performance of a particular action and also choose to perform this action. The visualization and the choice could give the energy for designing and moulding one’s life. (…)

More generally the self directing process, including planning, anticipating, rehearsing, etc. seems to be largely a product of inner speech. This includes both what one will do and how one will do it. Picturing one’s preferred action as the lesser evil or greater good, even if one fudges a bit on the facts, is probably also a powerful way of producing a given action, and possibly even a new habit. (…)

I showed that inner speech does not qualify as a public language, though it has a distinct structural profile as a semi-private language or perhaps as a dialect. This structure suggests the access points or research approaches that this language is amenable to. As examples of how this research might proceed I took a quick look at three issues: ethnomethodology, privacy and agency.”

Norbert Wiley, professor emeritus of Sociology at the University of Illinois, Urbana-Champaign, and Visiting Scholar at the University of California, Berkeley, is a prize-winning sociologist who has published on both the history and systematics of theory. To read the full essay, click Inner Speech as a Language: A Saussurean Inquiry (pdf), Journal for the Theory of Social Behaviour 36:3, 0021–8308, 2006.

See also:

The Language of Thought Hypothesis, Stanford Encyclopedia of Philosophy
Private language argument, Wiki
Private Language, Stanford Encyclopedia of Philosophy
☞ Jerry A. Fodor, Why there still has to be a language of thought
Robert Kowalski, British logician and computer scientist, Do thoughts have a language of their own?, New Scientist, 8 Dec 2011
☞ Jerry A. Fodor, The language of thought, Harvard University Press, 1975
☞ Ned Block, The Mind as the Software of the Brain, New York University 
Antony, Louise M, What are you thinking? Character and content in the language of thought (pdf)
Ansgar Beckermann, Can there be a language of thought? (pdf) In G. White, B. Smith & R. Casati (eds.), Philosophy and the Cognitive Sciences. Proceedings of the 16th International Wittgenstein Symposium. Hölder-Pichler-Tempsky.
Edouard Machery, You don’t know how you think: Introspection and language of thought, British Journal for the Philosophy of Science 56 (3): 469-485, (2005)
☞ Christopher Bartel, Musical Thought and Compositionality (pdf), King’s College London
Psycholinguistics/Language and Thought, Wikiversity
MindPapers: The Language of Thought - A Bibliography of the Philosophy of Mind and the Science of Consciousness, links Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University

Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language, Lapidarium notes
The time machine in our mind. The imagistic mental machinery that allows us to travel through time, Lapidarium notes

Dec
17th
Sat
permalink

Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers


A review of some big events

"Obviously one of the big events in our history was the origin of our planet, about 4.5 billion years ago. And what’s fascinating is that about 3.8 billion years ago, only about seven or eight hundred million years after the origin of our planet, life arose. That life was simple replicators, things that could make copies of themselves. And we think that life was a little bit like the bacteria we see on earth today. It would be the ancestors of the bacteria we see on earth today.

That life ruled the world for 2 billion years, and then about 1.5 billion years ago, a new kind of life emerged. These were the eukaryotic cells. They were a little bit different kind of cell from bacteria. And actually the kind of cells we are made of. And again, these organisms that were eukaryotes were single-celled, so even 1.5 billion years ago, we still just had single-celled organisms on earth. But it was a new kind of life.

It was another 500 million years before we had anything like a multicellular organism, and it was another 500 million years after that before we had anything really very interesting. So, about 500 million years ago, the plants and the animals started to evolve. And I think everybody would agree that this was a major event in the history of the world, because, for the first time, we had complex organisms.

After about 500 million years ago, things like the plants evolved, the fish evolved, lizards and snakes, dinosaurs, birds, and eventually mammals. And then it was really just six or seven million years ago, within the mammals, that the lineage that we now call the hominins arose. And they would be the direct ancestors of us. And then, within that lineage that arose about six or seven million years ago, it was only about 200,000 years ago that humans finally evolved.

Idea of idea evolution

And so it was really just 99.99 percent of the way through the history of this planet that humans finally arose. But in that 0.01 percent of life on earth, we’ve utterly changed the planet. And the reason is that, with the arrival of humans 200,000 years ago, a new kind of evolution was created. The old genetical evolution that had ruled for 3.8 billion years now had a competitor, and that new kind of evolution was ideas.

It was a true form of evolution, because now ideas could arise, and they could jump from mind to mind, without genes having to change. So, populations of humans could adapt at the level of ideas. Ideas could accumulate. We call this cumulative cultural adaptation. And so, cultural complexity could emerge and arise orders and orders of magnitude faster than genetic evolution.

Now, I think most of us take that utterly for granted, but it has completely rewritten the way life evolves on this planet because, with the arrival of our species, everything changed. Now, a single species, using its idea evolution, that could proceed apace independently of genes, was able to adapt to nearly every environment on earth, and spread around the world where no other species had done that. All other species are limited to places on earth that their genes adapt them to. But we were able to adapt at the level of our cultures to every place on earth. (…)

If we go back in our lineage 2 million years or so, there was a species known as Homo erectus. Homo erectus was an upright ape that lived on the African savannah. It could make tools, but they were very limited tools, and those tools, the archaeological record tells us, didn’t change for about 1.5 million years. That is, until about the time they went extinct. That is, they made the same tools over and over and over again, without any real changes to them.

If we move forward in time a little bit, it’s not even clear that our very close cousins the Neanderthals, who we know are related to us at 99.5 or 99.6 percent in the sequences of their genes, had what we call idea evolution. Sure enough, the tools that they made were more complex than those of Homo erectus. But over the 300,000 or so years that they spent in Europe, their toolkit barely changed. So there’s very little evolution going on.

So there’s something really very special about this new species, humans, that arose and invented this new kind of evolution, based on ideas. And so it’s useful for us to ask, what is it about humans that distinguishes them? It must have been a tiny genetic difference between us and the Neanderthals because, as I said, we’re so closely related to them genetically, a tiny genetic difference that had a vast cultural potential.

That difference is something that anthropologists and archaeologists call social learning. It’s a very difficult concept to define, but when we talk about it, all of us humans know what it means. And it seems to be the case that only humans have the capacity to learn complex new or novel behaviors, simply by watching and imitating others. And there seems to be a second component to it, which is that we seem to be able to get inside the minds of other people who are doing things in front of us, and understand why it is they’re doing those things. These two things together, we call social learning.

Many people respond that, oh, of course the other animals can do social learning, because we know that the chimpanzees can imitate each other, and we see all sorts of learning in animals like dolphins and the other monkeys, and so on. But the key point about social learning is that this minor difference between us and the other species forms an unbridgeable gap between us and them. Because, whereas all of the other animals can pick up the odd behavior by having their attention called to something, only humans seem to be able to select, among a range of alternatives, the best one, and then to build on that alternative, and to adapt it, and to improve upon it. And so, our cultures cumulatively adapt, whereas all other animals seem to do the same thing over and over and over again.

Even though other animals can learn, and they can even learn in social situations, only humans seem to be able to put these things together and do real social learning. And that has led to this idea evolution. What’s a tiny difference between us genetically has opened up an unbridgeable gap, because only humans have been able to achieve this cumulative cultural adaptation. (…)

I’m interested in this because I think this capacity for social learning, which we associate with our intelligence, has actually sculpted us in ways that we would have never anticipated. And I want to talk about two of those ways that I think it has sculpted us. One of the ways has to do with our creativity, and the other has to do with the nature of our intelligence as social animals.

One of the first things to be aware of when talking about social learning is that it plays the same role within our societies, acting on ideas, as natural selection plays within populations of genes. Natural selection is a way of sorting among a range of genetic alternatives, and finding the best one. Social learning is a way of sifting among a range of alternative options or ideas, and choosing the best one of those. And so, we see a direct comparison between social learning driving idea evolution, by selecting the best ideas — we copy people that we think are successful, we copy good ideas, and we try to improve upon them — and natural selection, driving genetic evolution within societies, or within populations.

I think this analogy needs to be taken very seriously, because just as natural selection has acted on genetic populations, and sculpted them, we’ll see how social learning has acted on human populations and sculpted them.

What do I mean by “sculpted them”? Well, I mean that it’s changed the way we are. And here’s one reason why. If we think that humans have evolved as social learners, we might be surprised to find out that being social learners has made us less intelligent than we might like to think we are. And here’s the reason why.

If I’m living in a population of people, and I can observe those people, and see what they’re doing, seeing what innovations they’re coming up with, I can choose among the best of those ideas, without having to go through the process of innovation myself. So, for example, if I’m trying to make a better spear, I really have no idea how to make that better spear. But if I notice that somebody else in my society has made a very good spear, I can simply copy him without having to understand why.

What this means is that social learning may have set up a situation in humans where, over the last 200,000 years or so, we have been selected to be very, very good at copying other people, rather than innovating on our own. We like to think we’re a highly inventive, innovative species. But social learning means that most of us can make use of what other people do, and not have to invest the time and energy in innovation ourselves.

Now, why wouldn’t we want to do that? Why wouldn’t we want to innovate on our own? Well, innovation is difficult. It takes time. It takes energy. Most of the things we try to do, we get wrong. And so, if we can survey, if we can sift among a range of alternatives of people in our population, and choose the best one that’s going at any particular moment, we don’t have to pay the costs of innovation, the time and energy ourselves. And so, we may have had strong selection in our past to be followers, to be copiers, rather than innovators.

This gives us a whole new slant on what it means to be human, and I think, in many ways, it might fit with some things that we realize are true about ourselves when we really look inside ourselves. We can all think of things that have made a difference in the history of life. The first hand axe, the first spear, the first bow and arrow, and so on. And we can ask ourselves, how many of us have had an idea that would have changed humanity? And I think most of us would say, well, that sets the bar rather high. I haven’t had an idea that would change humanity. So let’s lower the bar a little bit and say, how many of us have had an idea that maybe just influenced others around us, something that others would want to copy? And I think even then, very few of us can say there have been very many things we’ve invented that others would want to copy.

This says to us that social evolution may have sculpted us not to be innovators and creators as much as to be copiers, because this extremely efficient process that social learning allows us to do, of sifting among a range of alternatives, means that most of us can get by drawing on the inventions of others.

The formation of social groups

Now, why do I talk about this? It sounds like it could be a somewhat dry subject, that maybe most of us are copiers or followers rather than innovators. And what we want to do is imagine that our history over the last 200,000 years has been a history of slowly and slowly and slowly living in larger and larger and larger groups.

Early on in our history, it’s thought that most of us lived in bands of maybe five to 25 people, and that bands formed bands of bands that we might call tribes. And maybe tribes were 150 people or so. And then tribes gave way to chiefdoms that might have been thousands of people. And chiefdoms eventually gave way to nation-states that might have been tens of thousands or even hundreds of thousands, or millions, of people. And so, our evolutionary history has been one of living in larger and larger and larger social groups.

What I want to suggest is that that evolutionary history will have selected for less and less and less innovation in individuals, because a little bit of innovation goes a long way. If we imagine that there’s some small probability that someone is a creator or an innovator, and the rest of us are followers, we can see that one or two people in a band is enough for the rest of us to copy, and so we can get on fine. And, because social learning is so efficient and so rapid, we don’t need all to be innovators. We can copy the best innovations, and all of us benefit from those.

But now let’s move to a slightly larger social group. Do we need more innovators in a larger social group? Well, no. The answer is, we probably don’t. We probably don’t need as many as we need in a band. Because in a small band, we need a few innovators to get by. We have to have enough new ideas coming along. But in a larger group, a small number of people will do. We don’t have to scale it up. We don’t have to have 50 innovators where we had five in the band, if we move up to a tribe. We can still get by with those three or four or five innovators, because all of us in that larger social group can take advantage of their innovations.
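
A toy simulation of this scaling argument (my own illustration, not a model from the talk): if innovations spread to everyone by copying, the average quality of behavior in the group tracks the number of innovators, not the size of the group, so a fixed handful of innovators serves a band and a chiefdom about equally well.

```python
# Toy model of "a little bit of innovation goes a long way".
# All parameters are invented for illustration.
import random

def simulate(group_size, n_innovators, rounds=200, seed=0):
    rng = random.Random(seed)
    best_known = 0.0           # quality of the best idea in circulation
    total_quality = 0.0
    for _ in range(rounds):
        # each innovator makes a noisy, near-random attempt at a better idea
        for _ in range(n_innovators):
            best_known = max(best_known, rng.random())
        # everyone else copies the current best idea instantly
        total_quality += best_known * group_size
    return total_quality / (rounds * group_size)  # mean quality per person

for size in (25, 150, 10_000):
    print(size, round(simulate(size, n_innovators=5), 3))
# Per-capita quality is identical across group sizes: in this model
# five innovators supply a band of 25 and a society of 10,000 equally well.
```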

Language is the way we exchange ideas

And here we can see a very prominent role for language. Language is the way we exchange ideas. And our eyes allow us to see innovations and language allows us to exchange ideas. And language can operate in a larger society, just as efficiently as it can operate in a small society. It can jump across that society in an instant.

You can see where I’m going. As our societies get larger and larger, there’s no need, in fact, there’s even less of a need for any one of us to be an innovator, whereas there is a great advantage for most of us to be copiers, or followers. And so, a real worry is that our capacity for social learning, which is responsible for all of our cumulative cultural adaptation, all of the things we see around us in our everyday lives, has actually promoted a species that isn’t so good at innovation. It allows us to reflect on ourselves a little bit and say, maybe we’re not as creative and as imaginative and as innovative as we thought we were, but extraordinarily good at copying and following.

If we apply this to our everyday lives and we ask ourselves, do we know the answers to the most important questions in our lives? Should you buy a particular house? What mortgage product should you have? Should you buy a particular car? Who should you marry? What sort of job should you take? What kind of activities should you do? What kind of holidays should you take? We don’t know the answers to most of those things. And if we really were the deeply intelligent and imaginative and innovative species that we thought we were, we might know the answers to those things.

And if we ask ourselves how it is we come across the answers, or acquire the answers to many of those questions, most of us realize that we do what everybody else is doing. This herd instinct, I think, might be an extremely fundamental part of our psychology that was perhaps an unexpected and unintended, you might say, byproduct of our capacity for social learning, that we’re very, very good at being followers rather than leaders. A small number of leaders or innovators or creative people is enough for our societies to get by.

Now, the reason this might be interesting is that, as the world becomes more and more connected, as the Internet connects us and wires us all up, we can see that the long-term consequences of this is that humanity is moving in a direction where we need fewer and fewer and fewer innovative people, because now an innovation that you have somewhere on one corner of the earth can instantly travel to another corner of the earth, in a way that it would have never been possible to do 10 years ago, 50 years ago, 500 years ago, and so on. And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we’re going to be able to survive the vast numbers of people on this earth.

That’s one consequence of social learning, that it has sculpted us to be very shrewd and intelligent at copying, but perhaps less shrewd at innovation and creativity than we’d like to think. Few of us are as creative as we’d like to think we are. I think that’s been one perhaps unexpected consequence of social learning.

Another side of social learning I’ve been thinking about - it’s a bit abstract, but I think it’s a fascinating one - goes back again to this analogy between natural selection, acting on genetic variation, and social learning, acting on variation in ideas. And any evolutionary process like that has to have both a sorting mechanism, natural selection, and what you might call a generative mechanism, a mechanism that can create variety.

We all know what that mechanism is in genes. We call it mutation, and we know that from parents to offspring, genes can change, genes can mutate. And that creates the variety that natural selection acts on. And one of the most remarkable stories of nature is that natural selection, acting on this mindlessly-generated genetic variation, is able to find the best solution among many, and successively add those solutions, one on top of the other. And through this extraordinarily simple and mindless process, create things of unimaginable complexity. Things like our cells, eyes and brains and hearts, and livers, and so on. Things of unimaginable complexity, that we don’t even understand and none of us could design. But they were designed by natural selection.

Where do ideas come from?

Now let’s take this analogy of a mindless process - there’s a parallel between social learning driving evolution at the idea level and natural selection driving evolution at the genetic level - and ask what it means for the generative mechanism in our brains.

Well, where do ideas come from? For social learning to be a sorting process that has varieties to act on, we have to have a variety of ideas. And where do those new ideas come from?

The idea that I’ve been thinking about, that I think is worth contemplating about our own minds is what is the generative mechanism? If we do have any creativity at all and we are innovative in some ways, what’s the nature of that generative mechanism for creating new ideas?

This is a question that’s been asked for decades. What is the nature of the creative process? Where do ideas come from? And let’s go back to genetic evolution and remember that, there, the generative mechanism is random mutation.

Now, what do we think the generative mechanism is for idea evolution? Do we think it’s random mutation of some sort, of ideas? Well, all of us think that it’s better than that. All of us think that somehow we can come up with good ideas in our minds. And whereas natural selection has to act on random variation, social learning must be acting on directed variation. We know what direction we’re going.

But, we can go back to our earlier discussion of social learning, and ask the question, well, if you were designing a new hand axe, or a new spear, or a new bow and a new arrow, would you really know how to make a spear fly better? Would you really know how to make a bow a better bow? Would you really know how to shape an arrowhead so that it penetrated its prey better? And I think most of us realize that we probably don’t know the answers to those questions. And that suggests to us that maybe our own creative process rests on a generative mechanism that isn’t very much better than random itself.

And I want to go further, and suggest that our mechanism for generating ideas maybe couldn’t even be much better than random itself. And this really gives us a different view of ourselves as intelligent organisms. Rather than thinking that we know the answers to everything, could it be the case that the mechanism that our brain uses for coming up with new ideas is a little bit like the mechanism that our genes use for coming up with new genetic variants, which is to randomly mutate ideas that we have, or to randomly mutate genes that we have?

Now, it sounds incredible. It sounds insane. It sounds mad. Because we think of ourselves as so intelligent. But when we really ask ourselves about the nature of any evolutionary process, we have to ask ourselves whether it could be any better than random, because in fact, random might be the best strategy.

Genes could never possibly know how to mutate themselves, because they could never anticipate the direction the world was going. No gene knows that we’re having global warming at the moment. No gene knew 200,000 years ago that humans were going to evolve culture. Well, the best strategy for any exploratory mechanism, when we don’t know the nature of the processes we’re exploring, is to throw out random attempts at understanding that field or that space we’re trying to explore.

And I want to suggest that the creative process inside our brains, which relies on social learning, that creative process itself never could have possibly anticipated where we were going as human beings. It couldn’t have anticipated 200,000 years ago that, you know, a mere 200,000 years later, we’d have space shuttles and iPods and microwave ovens.

What I want to suggest is that any process of evolution that relies on exploring an unknown space, such as genes or such as our neurons exploring the unknown space in our brains, and trying to create connections in our brains, and such as our brain’s trying to come up with new ideas that explore the space of alternatives that will lead us to what we call creativity in our social world, might be very close to random.

We know they’re random in the genetic case. We think they’re random in the case of neurons exploring connections in our brain. And I want to suggest that our own creative process might be pretty close to random itself. And that our brains might be whirring around at a subconscious level, creating ideas over and over and over again, and part of our subconscious mind is testing those ideas. And the ones that leak into our consciousness might feel like they’re well-formed, but they might have sorted through literally a random array of ideas before they got to our consciousness.
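
The conjecture echoes the classic blind-variation-and-selective-retention picture. A tiny evolutionary loop shows why blind proposals are no handicap once selection is in play: random one-step changes, kept only when they are no worse, steadily accumulate into a solution no single step "knew" how to reach. This is a toy optimization in the (1+1) evolutionary style, not a model of actual cognition.

```python
# Blind variation plus selective retention on a toy problem.
# TARGET stands in for a "good idea" the generator cannot foresee.
import random

rng = random.Random(42)
TARGET = [1] * 40

def score(bits):
    """How many positions already match the unforeseen target."""
    return sum(b == t for b, t in zip(bits, TARGET))

current = [rng.randint(0, 1) for _ in TARGET]
for step in range(5000):
    # blind variation: flip one randomly chosen bit
    proposal = current[:]
    i = rng.randrange(len(proposal))
    proposal[i] ^= 1
    # selective retention: keep the proposal only if it is no worse
    if score(proposal) >= score(current):
        current = proposal

print(score(current), "/", len(TARGET))  # typically reaches 40/40:
# random proposals, filtered by selection, find the target anyway.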

Karl Popper famously said the way we differ from other animals is that our hypotheses die in our stead; rather than going out and actually having to try out things, and maybe dying as a result, we can test out ideas in our minds. But what I want to suggest is that the generative process itself might be pretty close to random.

Putting these two things together has lots of implications for where we’re going as societies. As I say, as our societies get bigger, and rely more and more on the Internet, fewer and fewer of us have to be very good at these creative and imaginative processes. And so, humanity might be moving towards becoming more docile, more oriented towards following, copying others, prone to fads, prone to going down blind alleys, because part of our evolutionary history that we could have never anticipated was leading us towards making use of the small number of other innovations that people come up with, rather than having to produce them ourselves.

The interesting thing with Facebook is that, with 500 to 800 million of us connected around the world, it sort of devalues information and devalues knowledge. And this isn’t the comment of some reactionary who doesn’t like Facebook, but it’s rather the comment of someone who realizes that knowledge and new ideas are extraordinarily hard to come by. And as we’re more and more connected to each other, there’s more and more to copy. We realize the value in copying, and so that’s what we do.

And we seek out that information in cheaper and cheaper ways. We go up on Google, we go up on Facebook, see who’s doing what to whom. We go up on Google and find out the answers to things. And what that’s telling us is that knowledge and new ideas are cheap. And it’s playing into a set of predispositions that we have been selected to have anyway, to be copiers and to be followers. But at no time in history has it been easier to do that than now. And Facebook is encouraging that.

And then, as corporations grow (and we can see corporations as sort of microcosms of societies) and acquire the ability to acquire other corporations, a similar thing is happening: rather than wanting to spend the time and the energy to create new ideas, corporations simply acquire other companies, so that they can have their new ideas. And that just tells us again how precious these ideas are, and the lengths to which people will go to acquire those ideas.

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, copiers are probably doing better now than at any other time in history. Because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more sort of docile copiers.

But these ideas, I think, are received with incredulity, because humans like to think of themselves as highly shrewd, intelligent and innovative people. But I think what we have to realize is that it’s even possible that, as I say, the generative mechanisms we have for coming up with new ideas are no better than random.

And a really fascinating idea is to consider that even the great people in history whom we associate with great ideas might be no more than what we would expect by chance. I’ll explain that. Einstein was once asked about his intelligence and he said, “I’m no more intelligent than the next guy. I’m just more curious.” Now, we can grant Einstein that little indulgence, because we think he was a pretty clever guy.

What does curiosity mean?

But let’s take him at his word and say, what does curiosity mean? Well, maybe curiosity means trying out all sorts of ideas in your mind. Maybe curiosity is a passion for trying out ideas. Maybe Einstein’s ideas were just as random as everybody else’s, but he kept persisting at them.

And if we say that everybody has some tiny probability of being the next Einstein, and we look at a billion people, there will be somebody who just by chance is the next Einstein. And so, we might even wonder if the people in our history and in our lives that we say are the great innovators really are more innovative, or are just lucky.
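Pagel’s billion-people point is ordinary rare-event arithmetic: if each person independently has probability p of being “the next Einstein,” the chance that at least one of n people qualifies is 1 - (1 - p)^n. A quick check with purely illustrative numbers (the value of p is an assumption, chosen only to make the point):

    p = 1e-9           # assumed one-in-a-billion chance per person (illustrative)
    n = 1_000_000_000  # a billion people
    print(1 - (1 - p) ** n)  # ~0.632, i.e. about 1 - 1/e

So even at one-in-a-billion odds per person, a billion people give a better-than-even chance that somebody hits, purely by luck.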

Now, the evolutionary argument is that our populations have always supported a small number of truly innovative people, and they’re somehow different from the rest of us. But it might even be the case that that small number of innovators just got lucky. And this is something that I think very few people will accept. They’ll receive it with incredulity. But I like to think of it as what I call social learning and, maybe, the possibility that we are infinitely stupid.”

Mark Pagel, Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute, Infinite Stupidity, Edge, Dec 16, 2011 (Illustration by John S. Dykes)

See also:

☞ Mark Pagel: How language transformed humanity



Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of “social technology” that allowed early human tribes to access a powerful new tool: cooperation. Mark Pagel: How language transformed humanity, TED.com, July 2011

The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’

Nov
25th
Fri
permalink

Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language


                                        Jamie Marie Waelchli, Thought Map No. 8

Human language, coupled with human maternal care, enables the consciousness to bifurcate very early and extensively. Without the self-reflective properties inherent in a reflexive agent-recipient language, and without the objectification of the human infant — a very different kind of humanity would arise.

Human consciousness, as constructed by human language, becomes the vehicle through which the self-reflective human mind envisions time. Language enables the viewer to reflect upon the actions of the doer (and the actions of one’s internal body), while projecting forward and backward — other possible bodily actions — into imagined space/time. Thus the projected and imagined space/time increasingly becomes the conscious world and reality of the viewer who imagines or remembers actions mapped onto that projected plan. The body thus becomes a physical entity progressing through the imaged world of the viewer. As the body progresses through this imaged world, the viewer also constructs a way to mark progress from one imagined event to another. Having once marked this imagined time into units, the conscious viewer begins to order the anticipated actions of the body into a linear progression of events.

A personal narrative then arises through the vehicle of language. Indeed a personal narrative is required, expected and placed upon every human being, by the very nature of human language. This personal narrative becomes organized around the anticipated bodily changes that it is imagined will take place from birth to old age. The power of the bifurcated mind, through linguistically encoded expectancies, shapes and molds all of human behavior. When these capacities are jointly executed by other similar minds — the substrate of human culture is manufactured.

Human culture, because it rides upon a manufactured space/time self-reflective substrate, is unique. Though it shares some properties with animal culture, it is not merely a natural Darwinian extension of animal culture. It is based on constructed time/space, constructed mental relationships, constructed moral responsibilities, and constructed personal narratives — and individuals must, at all times, justify their actions toward another on the basis of their co-constructed expectancies.

Human Consciousness seems to burst upon the evolutionary scene in something of an explosion between 40,000 and 90,000 years ago. Trading emerges, art emerges, and symboling ability emerges with a kind of intensity not noted for any previous time in the archeological record. (…)

Humans came with a propensity to alter the world around them wherever they went. We were into object manipulation in all aspects of our existence, and wherever we went we altered the landscape. We did not accept the natural world as we found it — we set about refashioning our worlds according to our own needs and desires. From the simple act of intentionally setting fires to eliminate underbrush, to the exploration of outer space, humanity manifested the view that it was here to control its own destiny, by changing the world around it, as well as by individuals’ changing their own appearances.

We put on masks and masqueraded about the world, seeking to make the world conform to our own desires, in a way no other species emulated. In brief, the kind of language that emerged between 40,000 and 90,000 years ago, riding upon the human anatomical form, changed us forever, and we began to pass that change along to future generations.

While Kanzi and family are bonobos, the kind of language they have acquired — even if they have not manifested all major components yet — is human language as you and I speak it and know it. Therefore, although their biology remains that of apes, their consciousness has begun to change as a function of the language, the marks it leaves on their minds and the epigenetic marks it leaves on the next generation. (Epigenetics: chemical markers that become attached to segments of genes during an individual’s lifetime and are passed along to future generations, affecting which genes will be expressed in succeeding generations.) They explore art, they explore music, they explore creative linguistic negotiation, they have an autobiographical past and they think about the future. They don’t do all these things with human-like proficiency at this point, but they attempt them if given opportunity. Apes not so reared do not attempt to do these things.

What kind of power exists within the kind of language we humans have perfected? Does it have the power to change biology across time, if it impacts the biological form upon conception? Science has now become aware of the power of initial conditions, through chaos theory, the work of Mandelbrot with fractal geometric forms, and the work of Wolfram and the patterns that can be produced by digital reiterations of simple and only slightly different starting conditions. Within the fertilized egg lie the initial starting conditions of every human.

We also now realize that epigenetic markers from parental experience can set these initial starting conditions, determining such things as the order, timing, and patterning of gene expression profiles in the developing organism. Thus while the precise experience and learning of the parents is not passed along, the effects of those experiences, in the form of genetic markers that have the power to affect the developmental plan of the next generation during the extraordinarily sensitive conditions of embryonic development, are transmitted. Since language is the most powerful experience encountered by the human being and since those individuals who fail to acquire human language are inevitably excluded from (or somehow set apart in) the human community, it is reasonable to surmise that language will, in some form, transmit itself through epigenetic mechanisms.

When a human being enters into a group of apes and begins to participate in the rearing of offspring, different epigenetic markers have the potential to become activated. We already know, for example, that in human beings, expectancies or beliefs can affect gene activity. The most potent of the epigenetic markers would most probably arise from the major difference between human and ape infants. Human infants do not cling; ape infants do. When ape infants are carried like human infants, they begin to develop eye/hand coordination from birth. This sets the developmental trajectory of the ape infant in a decidedly human direction — that of manipulating the world around it. Human mothers, unlike ape mothers, also communicate their intentions linguistically to the infant. Once an intention is communicated linguistically, it can be negotiated, so there arises an intrinsic motivation on the part of the ape infant to tune into and understand such communications. The ‘debate’ in ape language, which has centered on whether they have it or whether they don’t, has missed the point. This debate has ignored the key rearing variables that differ dramatically across the studies. Apart from Kanzi and family, all other apes in these studies are left alone at night and drilled on associative pairings during the day.”

Sue Savage-Rumbaugh is a primatologist best known for her work with two bonobos, Kanzi and Panbanisha, investigating their use of “Great Ape language” through lexigrams and computer-based keyboards. She was until recently based at Georgia State University’s Language Research Center in Atlanta.

To read the full essay, click Human Language—Human Consciousness, National Humanities Center, Jan 2nd, 2011

See also:

John Shotter on encounters with ‘Other’ - from inner mental representation to dialogical social practices
Do thoughts have a language of their own? The language of thought hypothesis, Lapidarium notes

Sep
8th
Thu
permalink

Google and the Myceliation of Consciousness

"Is this the largest organism in the world? This 2,400-acre (9.7 km2) site in eastern Oregon had a contiguous growth of mycelium before logging roads cut through it. Estimated at 1,665 football fields in size and 2,200 years old, this one fungus has killed the forest above it several times over, and in so doing has built deeper soil layers that allow the growth of ever-larger stands of trees. Mushroom-forming forest fungi are unique in that their mycelial mats can achieve such massive proportions.”

Paul Stamets, American mycologist, author, Mycelium Running

"What Stamet calls the mycelial archetype [Mycelial nets are designed the same as brain cells: centers with branches reaching out, whole worlds. 96% of dark matter threads]. He compares the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure. Stamets says in Mycelium Running,

“I believe that the mycelium operates at a level of complexity that exceeds the computational powers of our most advanced supercomputers. I see the mycelium as the Earth’s natural Internet, a consciousness with which we might be able to communicate.” (…)

This super-connectivity and conductivity is often accompanied by blissful mindbody states and the cognitive ecstasy of multiple “aha’s!” when the patterns in the mycelium are revealed. The Googling that has become a prime noetic technology (How can we recognize a pattern and connect more and more, faster and faster?: superconnectivity and superconductivity) mirrors the increased speed of connection of thought-forms from cannabis highs on up. The whole process is driven by desire not only for these blissful states in and of themselves, but also as the cognitive resource they represent. (…) The devices of desire are those that connect. The Crackberry is just the latest super-connectivity and conductivity device-of-desire.

The psilocybin mushroom embeds the form of its own life-cycle into consciousness when consciousness is altered by the mushroom, and this template, brought home to Google Earth, made into tools of connectivity, potentiates the mycelium of knowledge, connecting all cultural production. The traditional repositories—the books and print and CD and DVD materials—swarm online, along with intimate glimpses of Everyblogger’s Life in multimediated detail.

Here on Google watch, I’m tracking the form of this whole wildly interconnecting activity that this desire to connect inscribes, the millions of simultaneous individual expressions of desire: searches, adclicks, where am I?, what’s near me?, who’s connected to whom? The desire extends the filaments, and energizes the constant linking and unlinking of the vast signaling system that lights up the mycelium. Periodic visits to the psychedelic sphere reveal the progress of this mycelial growth, as well as its back-history, future, origins, inhabitants, and purpose. Google is growing the cultural mycelial mat, advancing this process exponentially. Google is the first psychedelically informed super-power to shape the noosphere and NASDAQ. Google is part of virtually everybody’s online day. The implications are staggering. (…)

In the domain of consciousness, super-connectivity and super-conductivity also reign. Superconductivity: speed is of the essence. Speed of conductivity of meaning. How fast can consciousness make meaning out of the flux of perceptions? (…)

When Google breaks through the natural language barrier and catches a glimpse, at least, of what it’s like to operate cognition entirely outside the veil of natural language, they will truly be Masters of Meaning. (…) Meaning manifests independently of language, though often finds itself entombed therein. But from this bootstrap move outside language, new insights arise regarding the structures and functions of natural language from a perspective that handles cognition with different tools, perceptions, sensory modalities—and produces new forms of language with new feature sets. (…)

This is the download Terence McKenna kept cycling through, and represents the key noetic technology for the stabilization of the transformation of consciousness in a sharable conceptual architecture. In Terence’s words,

It’s almost as though the project of communication becomes high-speed sculpture in a conceptual dimension made of light and intentionality. This would remain a kind of esoteric performance on the part of shamans at the height of intoxication if it were not for the fact that electronics and electronic cultural media, computers, make it possible for us to actually create records of these higher linguistic modalities.”

RoseRose, “paleoanthropologist from a distant timeframe”, in deep cover on Google Earth as a video performance artist, Google and the Myceliation of Consciousness, Reality Sandwich, Nov 10, 2007 (Illustration source)

See also:

☞ Paul Stamets, Six Ways Mushrooms Can Save the World, TED.com, 2008 (video)

Sep
7th
Wed
permalink

Universal Semantic Communication. Is it possible for two intelligent beings to communicate meaningfully, without any common language or background?


"This question has interest on its own, but is especially relevant in the context of modern computational infrastructures where an increase in the diversity of computers is making the task of inter-computer interaction increasingly burdensome. Computers spend a substantial amount of time updating their software to increase their knowledge of other computing devices. In turn, for any pair of communicating devices, one has to design software that enables the two to talk to each other. Is it possible instead to let the two computing entities use their intelligence (universality as computers) to learn each others’ behavior and attain a common understanding? What is “common understanding?” We explore this question in this paper.

To formalize this problem, we suggest that one should study the “goal of communication:” why are the two entities interacting with each other, and what do they hope to gain by it? We propose that by considering this question explicitly, one can make progress on the question of universal communication.

We start by considering a computational setting for the problem where the goal of one of the interacting players is to gain some computational wisdom from the other player. We show that if the second player is “sufficiently” helpful and powerful, then the first player can gain significant computational power (deciding PSPACE-complete languages).

Our work highlights some of the definitional issues underlying the task of formalizing universal communication, but also suggests some interesting phenomena and highlights potential tools that may be used for such communication. (…)

Consider the following scenario: Alice, an extraterrestrial, decides to initiate contact with a terrestrial named Bob by means of a radio wave transmission. How should he respond to her? Will he ever be able to understand her message? In this paper we explore such scenarios by framing the underlying questions computationally.

We believe that the above questions have intrinsic interest, as they raise some further fundamental questions. How does one formalize the concept of understanding? Does communication between intelligent beings require a “hardwired” common sense of meaning or language? Or, can intelligence substitute for such requirements? What role, if any, does computational complexity play in all this? (…)

Marvin Minsky suggested that communication should be possible from a philosophical standpoint, but did not provide any formal definitions or constructions.

LINCOS [an abbreviation of the Latin phrase lingua cosmica]: The most notable and extensive prior approach to this problem is due to Hans Freudenthal, who claims that it is possible to code messages describing mathematics, physics, or even simple stories in such a radio transmission which can be understood by any sufficiently humanlike recipient. Ideally, we would like to have such a rich language at our disposal; it should be clear that the “catch” lies in Freudenthal’s assumption of a “humanlike” recipient, which serves as a catch-all for the various assumptions that serve as the foundations for Freudenthal’s scheme.

It is possible to state more precise assumptions which form the basis of Freudenthal’s scheme, but among these will be some fairly strong assumptions about how the recipient interprets the message. In particular, one of these is the assumption that all semantic concepts of interest can be characterized by lists of syntactic examples. (…)

Information Theory

The classical theory of communication does not investigate the meaning associated with information and simply studies the process of communicating the information, in its exact syntactic form. It is the success of this theory that motivates our work: computers are so successful in communicating a sequence of bits, that the most likely source of “miscommunication” is a misinterpretation of what these bits mean. (…)

Interactive Proofs and Knowledge

Finally, the theory of interactive proofs and knowledge [pdf] (and also the related M. Blum and S. Kannan. Designing programs that check their work) gets further into the gap between Alice and Bob, by ascribing to them different, conflicting intents, though they still share common semantics. It turns out this gap already starts to get to the heart of the issues that we consider, and this theory is very useful to us at a technical level. In particular, in this work we consider a setting where Bob wishes to gain knowledge from Alice. Of course, in our setting Bob is not mistrustful of Alice, he simply does not understand her. (…)

Modeling issues

Our goal is to cast the problem of “meaningful” communication between Alice and Bob in a purely mathematical setting. We start by considering how to formulate the problem where the presence of a “trusted third party” would easily solve the problem.

Consider the informal setting in which Alice and Bob speak different natural languages and wish to have a discussion via some binary channel. We would expect that a third party who knows both languages could give finite encoding rules to Alice and Bob to facilitate this discussion, and we might be tempted to require that Alice’s statements translate into the same statements in Bob’s language that the third party would have selected and vice-versa.

In the absence of the third party, this is unreasonable to expect, though: suppose that Alice and Bob were given encoding rules that were identical to those that a third party would have given them, except that some symmetric sets of words have been exchanged—say, Alice thinks “left” means “right,” “clockwise” means “counter-clockwise,” etc. Unless they have some way to tell that these basic concepts have been switched, observe that they would still have a conversation that is entirely sensible to each of them. Thus, if we are to have any hope at all, we must be prepared to accept interactions that are indistinguishable from successes as “successes” as well. We do not wish to take this to an extreme, though: Bob cannot distinguish among Alices who say nothing, and yet we would not classify their interactions as “successes.”

At the heart of the issues raised by the discussion above is the question: what does Bob hope to get out of this conversation with Alice? In general, why do computers, or humans communicate? Only by pinning down this issue can we ask the question, “can they do it without a common language?”

We believe that there are actually many possible motivations for communication. Some communication is motivated by physical needs, and others are motivated purely by intellectual needs or even curiosity. However these diverse settings still share some common themes: communication is being used by the players to achieve some effects that would be hard to achieve without communication. In this paper, we focus on one natural motivation for communication: Bob wishes to communicate with Alice to solve some computational problems. (…)

In order to establish communication between Alice and Bob, Bob runs in time exponential in a parameter that could be described informally as the length of the dictionary that translates Bob’s language into Alice’s language. (Formally, the parameter is the description length of the protocol for interpreting Alice in his encoding of Turing machines.) (…) [p.3]
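The “exponential in the dictionary length” bound can be made concrete with a toy. In the sketch below (Python; an invented illustration, not an example from the paper), Alice answers a verifiable question (a nontrivial factor of n) but encodes her reply with a private one-byte XOR key, which plays the role of the unknown language. Bob brute-forces every candidate “interpretation” and keeps the first one under which the decoded reply verifiably achieves his goal; the search is exponential in the key length, here 2^8 = 256 candidates:

    from itertools import count

    def alice(n, key=0b10110100):
        """Toy Alice: finds a nontrivial factor of n, then encodes it
        byte-by-byte with a private XOR key that Bob does not know."""
        factor = next(d for d in count(2) if n % d == 0)
        return bytes(b ^ key for b in str(factor).encode())

    def bob(n):
        """Toy Bob: enumerates all one-byte 'dictionaries' and accepts
        the first under which Alice's reply verifiably solves his goal,
        a factor he can check by division. The search is exponential in
        the key length: 2**8 = 256 candidates."""
        reply = alice(n)
        for key in range(256):
            decoded = bytes(b ^ key for b in reply)
            if decoded.isdigit():
                f = int(decoded)
                if 1 < f < n and n % f == 0:  # the verification step
                    return key, f
        return None

    print(bob(91))  # recovers the key (180) and the factor 7

The essential ingredient, as in the paper, is that Bob’s goal is verifiable: he never has to trust an interpretation, only to check that it yields an answer he can confirm on his own.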

[To see proofs of theorems and more, click pdf]

Conclusions

In the previous sections we studied the question, “how can two intelligent interacting players attempt to achieve some meaningful communication in a universal setting, i.e., one in which the two players do not start with a common background?” We return now to the motivation for studying this question, and the challenges that need to be dealt with to address the motivations. (…)

We believe that this work has raised and addressed some fundamental questions of intrinsic interest. However this is not the sole motivation for studying this problem. We believe that these questions also go to the heart of “protocol issues” in modern computer networks. Modern computational infrastructures are built around the concept of communication and indeed a vast amount of effort is poured into the task of ensuring that the computers work properly as communication devices. Yet as computers and networks continue to evolve at this rapid pace, one problem is becoming increasingly burdensome: that of ensuring that every pair of computers is able to “understand” each other, so as to communicate meaningfully. (…)

Current infrastructures ensure this ability for pairs to talk to each other by explicitly going through a “setup” phase, where a third party who knows the specifications of both elements of a pair sets up a common language/protocol for the two to talk to each other, and then either or both players learn (download) this common language to establish communication. An everyday example of such an occurrence is when we attempt to get our computer to print on a new printer. We download a device driver for our computer, which is a common language written by someone who knows both our computer and the printer.

We remark that this issue is a fundamental one, and not merely an issue of improper design. Current protocols are designed with a fixed pair of types of devices in mind. However, we expect for our computers to be capable of communicating with all other communication devices, even ones that did not exist when our computer was built. While it would be convenient if all computers interacted with each other using a single fixed protocol that is static over time, this is no more reasonable to expect than asking humans to agree on a single language to converse in, and then to expect this language to stay fixed over time. Thus, to satisfy our expectations in the current setting, it is essential that computers are constantly updated so as to have universal connectivity over time. (…)

This work was motivated by a somewhat radical alternative scenario for communication. Perhaps we should not set computers up with common languages, but rather exploit the universality in our favor, by letting them evolve to a common language. But then this raises issues such as: how can the computers know when they have converged to a common understanding? Or, how does one of the computers realize that the computer it is communicating with is no longer in the same mode as they were previously, and so the protocol for communication needs to be adjusted? The problem described in the opening paragraph of the introduction is simply the extremal version of such issues, where the communicating players are modeled as having no common background. (…)

Perhaps the main contribution of this work is to suggest that communication is not an end in itself, but rather a means to achieving some general goal. Such a goal certainly exists in all the practical settings above, though it is no longer that of deciding membership in some set S. Our thesis is that one can broaden the applicability of this work to other settings by (1) precisely articulating the goal of communication in each setting and (2) constructing “universal protocols” that achieve these goals. (…)

One of the implicit suggestions in this work is that communicating players should periodically test to see if the assumption of common understanding still holds. When this assumption fails, presumably this happened due to a “mild” change in the behavior of one of the players. It may be possible to design communication protocols that use such a “mildness” assumption to search and re-synchronize the communicating players where the “exponential search” takes time exponential in the amount of change in the behavior of the players. Again, pinning down a precise measure of the change and designing protocols that function well against this measure are open issues.”

Brendan Juba, Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory, and Harvard University. School of Engineering and Applied Sciences - Theory of Computing group, Madhu Sudan, Indian computer scientist, professor of computer science at the Massachusetts Institute of Technology (MIT), Universal Semantic Communication I (pdf), MIT, 2010 (Illustration source)

See also:

☞ Brendan Juba, Madhu Sudan, Universal Semantic Communication II (pdf), MIT
☞ J. Bao, P. Basu, M. Dean, C. Partridge, A. Swami, W. Leland, J. A. Hendler, Towards a Theory of Semantic Communication (Extended Technical Report) (pdf)

Sep
3rd
Sat
permalink

Republic of Letters ☞ Exploring Correspondence and Intellectual Community in the Early Modern Period (1500-1800)


                                            The Republic of Letters

"Despite the wars and despite different religions. All the sciences, all the arts, thus received mutal assistance in this way: the academies formed this republic. (…) True scholars in each field drew closer the bonds of this great society of minds, spread everywhere and everywhere independent. This correspondence still remains; it is one of the consolations for the evils that ambition and politics spread across the Earth."

Voltaire, Le Siècle de Louis XIV cited in Dena Goodman, The Republic of letters: a cultural history of the French enlightenment, Cornell University Press, 1996, p. 20

Republic of Letters (Respublica literaria) is most commonly used to define intellectual communities in the late 17th and 18th centuries in Europe and America. It especially brought together the intellectuals of the Age of Enlightenment, or “philosophes” as they were called in France. The Republic of Letters emerged in the 17th century as a self-proclaimed community of scholars and literary figures that stretched across national boundaries but respected differences in language and culture. These communities that transcended national boundaries formed the basis of a metaphysical Republic. (…)

As is evident from the term, the circulation of handwritten letters was necessary for its function because it enabled intellectuals to correspond with each other from great distances. All citizens of the 17th century Republic of Letters corresponded by letter, exchanged published papers and pamphlets, and considered it their duty to bring others into the Republic through the expansion of correspondence.” - (Wiki)

"[They] organized itself around cultural institutions (e. g. museums, libraries, academies) and research projects that collected, sorted, and dispersed knowledge. A pre-disciplinary community in which most of the modern disciplines developed, it was the ancestor to a wide range of intellectual societies from the seventeenth-century salons and eighteenth-century coffeehouses to the scientific academy or learned society and the modern research university.

Forged in the humanist culture of learning that promoted the ancient ideal of the republic as the place for free and continuous exchange of knowledge, the Republic of Letters was simultaneously an imagined community (a scholar’s utopia where differences, in theory, would not matter), an information network, and a dynamic platform from which a wide variety of intellectual projects – many of them with important ramifications for society, politics, and religion – were proposed, vetted, and executed. (…)

The Republic of Letters existed for almost four hundred years. Its scope encompassed all of Europe, but reached well beyond this region as western Europeans had more regular contact with and presence in Russia, Asia, Africa, and the Americas. In the sixteenth and seventeenth century merchants and missionaries helped to create global information networks and colonial outposts that transformed the geography of the Republic of Letters. By the eighteenth century we can speak of a trans-Atlantic republic of letters shaped by central figures such as Franklin and many others, north and south, who wrote and traveled across the Atlantic.”

"Recent scholarship has established that intellectuals across Europe came to see themselves, in the sixteenth, seventeenth and eighteenth centuries, as citizens of a transnational intellectual society—a Republic of Letters in which speech was free, rank depended on ability and achievement rather than birth, and scholars, philosophers and scientists could find common ground in intellectual inquiry even if they followed different faiths and belonged to different nations.”

— Anthony Grafton, Republic of Letters introduction, Stanford University

Republic of Letters Project

                                          (click image to explore)

Researchers map thousands of letters exchanged in the 18th century’s “Republic of Letters” and learn at a glance what it once took a lifetime of study to comprehend.

Mapping the Republic of Letters, Stanford University

See also:

☞ Dena Goodman, The Republic of letters: a cultural history of the French enlightenment, Cornell University Press, 1996
☞ April Shelford, Transforming the republic of letters: Pierre-Daniel Huet and European intellectual life, 1650-1720, University Rochester Press, 2007
New social media? Same old, same old, say Stanford experts, Stanford University News, Nov 2, 2011.
☞ Cynthia Haven, Hot new social media maybe not so new: plus ça change, plus c’est la même chose, Stanford University The Book Haven, Nov 2, 2011

Sep
2nd
Fri
permalink

Kevin Kelly on Why the Impossible Happens More Often

                                                   Noosphere by Tatiana Plakhova

"Everyone "knew" that people don’t work for free, and if they did, they could not make something useful without a boss. But today entire sections of our economy run on software instruments created by volunteers working without pay or bosses. Everyone knew humans were innately private beings, yet the impossibility of total open round-the-clock sharing still occurred. Everyone knew that humans are basically lazy, and they would rather watch than create, and they would never get off their sofas to create their own TV. It would be impossible that millions of amateurs would produce billions of hours of video, or that anyone would watch any of it. Like Wikipedia, or Linux, YouTube is theoretically impossible. But here this impossibility is real in practice. (…)

As far as I can tell the impossible things that happen now are in every case manifestations of a new, bigger level of organization. They are the result of large-scale collaboration, or immense collections of information, or global structures, or gigantic real-time social interactions. Just as a tissue is a new, bigger level of organization for a bunch of individual cells, these new social structures are a new bigger level for individual humans. And in both cases the new level breeds emergence. New behaviors emerge from the new level that were impossible at the lower level. Tissue can do things that cells can’t. The collectivist organizations of wikipedia, Linux, the web can do things that industrialized humans could not. (…)

The cooperation and coordination bred by irrigation and agriculture produced yet more impossible behaviors of anticipation and preparation, and sensitivity to the future. Human society unleashed all kinds of previously impossible human behaviors into the biosphere.

The technium is accelerating the creation of new impossibilities by continuing to invent new social organizations. (…)

When we are woven together into a global real-time society, the impossibilities will really start to erupt. It is not necessary that we invent some kind of autonomous global consciousness. It is only necessary that we connect everyone to everyone else. Hundreds of miracles that seem impossible today will be possible with this shared human awareness. (…)

In large groups the laws of statistics take over and our brains have not evolved to do statistics. The amount of data tracked is inhuman; the magnitudes of giga, peta, and exa don’t really mean anything to us; it’s the vocabulary of machines. Collectively we behave differently than individuals. Much more importantly, as individuals we behave differently in collectives. (…)

We are swept up in a tectonic shift toward large, fast, social organizations connecting us in novel ways. There may be a million different ways to connect a billion people, and each way will reveal something new about us. Something hidden previously. Others have named this emergence the Noosphere, or MetaMan, or Hive Mind. We don’t have a good name for it yet. (…)

I’ve used the example of the bee before. One could exhaustively study a honey bee for centuries and never see in the lone individual any of the behavior of a bee hive. It is just not there, and cannot emerge until there is a mass of bees. A single bee lives 6 weeks, so a memory of several years is impossible, but that’s how long a hive of individual bees can remember. Humanity is migrating towards its hive mind. Most of what “everybody knows” about us is based on the human individual. Collectively, connected humans will be capable of things we cannot imagine right now. These future phenomena will rightly seem impossible. What’s coming is so unimaginable that the impossibility of Wikipedia will recede into outright obviousness.

Connected, in real time, in multiple dimensions, at an increasingly global scale, in matters large and small, with our permission, we will operate at a new level, and we won’t cease surprising ourselves with impossible achievements.”

Kevin Kelly, writer, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, Why the Impossible Happens More Often, The Technium, 26 August 2011

Jul
26th
Tue
permalink

Minority rules: Scientists discover tipping point for the spread of ideas

“The same mathematics of networks that governs the interactions of molecules in a cell, neurons in a brain, and species in an ecosystem can be used to understand the complex interconnections between people, the emergence of group identity, and the paths along which information, norms, and behavior spread from person to person to person.” James Fowler is a political scientist at the University of California

"Scientists at Rensselaer Polytechnic Institute have found that when just 10 percent of the population holds an unshakable belief, their belief will always be adopted by the majority of the society. The scientists, who are members of the Social Cognitive Networks Academic Research Center (SCNARC) at Rensselaer, used computational and analytical methods to discover the tipping point where a minority belief becomes the majority opinion. The finding has implications for the study and influence of societal interactions ranging from the spread of innovations to the movement of political ideals.

"When the number of committed opinion holders is below 10 percent, there is no visible progress in the spread of ideas. It would literally take the amount of time comparable to the age of the universe for this size group to reach the majority," said SCNARC Director Boleslaw Szymanski, the Claire and Roland Schmitt Distinguished Professor at Rensselaer. "Once that number grows above 10 percent, the idea spreads like flame."


In this visualization, we see the tipping point where minority opinion (shown in red) quickly becomes majority opinion. Over time, the minority opinion grows. Once the minority opinion reaches 10 percent of the population, the network quickly changes as the minority opinion takes over the original majority opinion (shown in green). Credit: SCNARC/Rensselaer Polytechnic Institute

As an example, the ongoing events in Tunisia and Egypt appear to exhibit a similar process, according to Szymanski. “In those countries, dictators who were in power for decades were suddenly overthrown in just a few weeks.”

The findings were published in the July 22, 2011, early online edition of the journal Physical Review E in an article titled Social consensus through the influence of committed minorities.”

An important aspect of the finding is that the percent of committed opinion holders required to shift majority opinion does not change significantly regardless of the type of network in which the opinion holders are working. In other words, the percentage of committed opinion holders required to influence a society remains at approximately 10 percent, regardless of how or where that opinion starts and spreads in the society.

To reach their conclusion, the scientists developed computer models of various types of social networks. One of the networks had each person connect to every other person in the network. The second model included certain individuals who were connected to a large number of people, making them opinion hubs or leaders. The final model gave every person in the model roughly the same number of connections. The initial state of each of the models was a sea of traditional-view holders. Each of these individuals held a view, but was also, importantly, open-minded to other views.

Once the networks were built, the scientists then “sprinkled” in some true believers throughout each of the networks. These people were completely set in their views and unflappable in modifying those beliefs. As those true believers began to converse with those who held the traditional belief system, the tides gradually and then very abruptly began to shift.

"In general, people do not like to have an unpopular opinion and are always seeking to try locally to come to consensus. We set up this dynamic in each of our models," said SCNARC Research Associate and corresponding paper author Sameet Sreenivasan. To accomplish this, each of the individuals in the models “talked” to each other about their opinion. If the listener held the same opinions as the speaker, it reinforced the listener’s belief. If the opinion was different, the listener considered it and moved on to talk to another person. If that person also held this new belief, the listener then adopted that belief.

"As agents of change start to convince more and more people, the situation begins to change," Sreenivasan said. “People begin to question their own views at first and then completely adopt the new view to spread it even further. If the true believers just influenced their neighbors, that wouldn’t change anything within the larger system, as we saw with percentages less than 10.”

The research has broad implications for understanding how opinion spreads. “There are clearly situations in which it helps to know how to efficiently spread some opinion or how to suppress a developing opinion,” said Associate Professor of Physics and co-author of the paper Gyorgy Korniss. “Some examples might be the need to quickly convince a town to move before a hurricane or spread new information on the prevention of disease in a rural village.”“

Minority rules: Scientists discover tipping point for the spread of ideas, EurekaAlert, 25 July 2011

See also:

The Story of Networks
☞ Manuel Castells, Network Theories of Power - video lecture, USCAnnenberg

Jul
3rd
Sun
permalink

Nicholas Ostler on The Last Lingua Franca. English Until the Return of Babel


"By and large, lingua-francas are the languages of wider communication, such as enable vast empires to have a common administration, and also allow international contacts. (…)

In the second half of the 1st millennium BC, Greek persisted around the Mediterranean mostly as the result of Greek trading, reinforced by cultural prestige of its arts and literature, but was then massively reinforced by Alexander’s conquests. Persian spread within the eastern zones conquered by the Muslims, but it flowed back westward as Persian administration became common within the empire of the Caliphs. Then it was spread wider by Turkic-speaking armies, notably into modern Turkey and India, since they could not conceive of a cultured administration without it. The use of English as a widespread lingua franca began in India (actually, as it happened, replacing Persian), but was aided elsewhere by the schools which tended to accompany religious missions in new British colonies. Later (in the 20th century) it had become unchallenged as the common language of science, of international relations and business.

Q: Latin lasted a particularly long time. Why did it survive the collapse of the Roman Empire?

It kept changing its role. First it expanded to become the language of Christianity, replacing Greek: so Latin, the mother-tongue of the Western Christian majority, began to be used to express their common faith. Then it survived because it was the language of the Roman Catholic Church, i.e. the Catholic lingua franca. (Gothic-speaking Arian Christians lost out to Catholics everywhere during the sixth century AD.) De facto, Latin became the lingua franca of Western Europe, because it was the only language taught in schools. This status continued for another 1,000 years, because it was so convenient to the elite. Only when European society began to be transformed in the 16th century, with the decline of the Church, and the rising power of France and England (and their middle classes), as well as the opening of the world as a whole to European commercial interests, did Latin’s advantages seem outweighed by the costs of maintaining its status.

Q: What makes you suspect that English will not reign as long as Latin?

All the factors that have spread English have already peaked, and there is no stability of power and influence which might simply leave the status quo in place. There is no accepted common political dispensation in the world nowadays, comparable to the Catholic Church in Europe. Individual powers for which English is an alien burden (China, Russia, Brazil, the Arab world, Indonesia, Mexico, even India) are already stirring, and attempting to enhance their global roles.

Q: How much longer do you think English has as a global language?

It will continue to be used until there is a workable alternative, and not a moment longer. It appears that language technology will soon provide that alternative, allowing speakers to go on using every mother-tongue, and yet be understood by speakers of any other language. This will be available in a decade or two, and (since all the costs will fall as soon as the technical problem is clearly solved) will very soon spread to be universal. So it is very unlikely that global learning (and use) of English will still be popular by the middle of this century.

Q: Will Chinese or another language take its place?

Probably not. All languages that might compete (except French, whose global days have probably passed) are regionally focused, hence limited as to global utility; and I do not anticipate a new round of global colonization, say from China, India or Indonesia. Technology will probably make a single replacement unnecessary anyway. (…)

Q: If English declines in use as a lingua franca, how must Anglophones adjust? Will travelers have to take more Berlitz classes before going abroad?

It is unlikely much adjustment will be needed. Everyone will increasingly use their own languages, and the world - given the necessary information technology - will understand. But it may increasingly be incumbent on English-speakers to find ways of penetrating statements that are made in foreign languages without an English translation (much as the world’s diplomatic establishments used to do routinely). Foreigners will increasingly adopt a “take it or leave it” attitude to English-speakers, leaving them to sink or (make the effort to) swim. But all this is much as English-speakers have long done to the rest of the world. (…)

Q: Do you think America’s elitist attitude toward other languages is changing? Is there evidence that more Americans are studying foreign languages?

No. No. Quite the reverse, despite the panic about US ignorance of Middle Eastern languages supposedly caused by 9/11, and the wars to which it has led. (…)

Q: When English loses its dominance outside its mother tongue regions, are Americans likely to become even more open or more hostile toward learning other languages and toward immigrants speaking other languages in the U.S? (Is there any historical example to point one-way or the other?)

As I said, I think there will be more hostility against immigrants who do not adopt English. Such symbolic disloyalty (as it will be seen) will be more offensive to many, as it becomes apparent that the USA is losing its acknowledged dominance. Americans may, if anything, be more likely to “stand on ceremony” and insist militantly that others - even in foreign parts of the world - accommodate them by adopting the means to cope with English, while (perhaps, at least in the early days) resisting the need to make equal and opposite accommodations themselves.

The best recent model might be the reluctance, not to say ‘denial’, of the French in reacting to the decline in international use of their language post 1918. But it was also notable that the nations of northern and eastern Europe (the last to acquire Latin as a lingua-franca) tried to hang on to use of Latin longest in 18th and even 19th centuries, when French (and other major European vernaculars) had become established as media of international communication. It is not a direct parallel, but one recalls Valerius Maximus in the first century AD, congratulating the Roman magistrates who “persistently maintained the practice of replying only in Latin to the Greeks. And so they forced them to speak through interpreters, losing their linguistic fluency, their great strength, not just in our capital city but in Greece and Asia too, evidently to promote the honour of the Latin language throughout the world.”

Nicholas Ostler, British scholar and author. Ostler studied at Balliol College, Oxford, where he received degrees in Greek, Latin, philosophy, and economics. He later studied under Noam Chomsky at the Massachusetts Institute of Technology, where he earned his Ph.D. in linguistics and Sanskrit, The Last Lingua Franca. English Until the Return of Babel, Penguin Books, 2011 (Illustration source)

See also:

List of lingua francas
☞ Henry Hitchings, What’s the language of the future?, Salon, Nov 6, 2011.
Why Do Languages Die? Urbanization, the state and the rise of nationalism, Lapidarium notes

Jun
24th
Fri
permalink

The Neurobiology of “We”. Relationship is the flow of energy and information between people, essential in our development


"The study of neuroplasticity is changing the way scientists think about the mind/brain connection. While they’ve known for years that the brain is the physical substrate for the mind, the central mystery of neuroscience is how the mind influences the physical structure of the brain. In the last few decades, thanks to PET and MRI imaging techniques, scientists can observe what’s actually going on in the brain while people sleep, work, make decisions, or attempt to function under limitations caused by illness, accident, or war. (…)

Dr. Daniel Siegel, Clinical Professor of Psychiatry at the UCLA School of Medicine, co-director of the Mindful Awareness Research Center, and director of the Mindsight Institute (…) is convinced that the “we” connection is a little-understood but powerful means for individual and societal transformation that should be taught in schools and churches, and even enter into politics.

Interpersonal neurobiology isn’t a form of therapy,” he told me, “but a form of integrating a range of scientific research into a picture of the nature of human reality. It’s a phrase I invented to account for the human effort to understand truth. We can define the mind. We can define mental health. We can base everything on science, but I want to base it on all the sciences. We’re looking for what we call ‘consilience.’ If you think of the neuroscientist as a blind man looking at only one part of an elephant, we are trying to discover the ‘whole-elephant’ view of reality.” (…)

“We is what me is!”

Our nervous system has two basic modes: it fires up or quiets down. When we’re in a reactive state, our brainstem signals the need for fight or flight. This means we’re unable to open ourselves to another person, and even neutral comments may be taken as fighting words. On the other hand, an attitude of receptivity activates a different branch of the brainstem as it sends messages to relax the muscles of the face and vocal chords, and normalizes blood pressure and heart rate. “A receptive state turns on the social engagement system that connects us to others,” Siegel explains in his recent book, Mindsight. “Receptivity is our experience of being safe and seen; reactivity is our fight-flight-freeze survival reflex.” (…)

He describes the brain as part of “an embodied nervous system, a physical mechanism through which both energy and information flow to influence relationship and the mind.” He defines relationship as “the flow of energy and information between people.” Mind is “an embodied and relational process that regulates the flow of energy and information, consciousness included. Mind is shared between people. It isn’t something you own; we are profoundly interconnected. We need to make maps of we because we is what me is!” (…)

[Siegel]: “We now know that integration leads to health and harmony. We can re-envision the DSM symptoms as examples of syndromes filled with chaos and rigidity, conditions created when integration is impaired. So we can define mental health as the ability to monitor ourselves and modify our states so that we integrate our lives. Then things that appeared unchangeable can actually be changed.” (…)

Relationships, mind and brain aren’t different domains of reality—they are each about energy and information flow. The mechanism is the brain; subjective impressions and consciousness are mind. The regulation of energy and information flow is a function of mind as an emergent process emanating from both relationships and brain. Relationships are the way we share this flow. In this view, the emergent process we are calling “mind” is located in the body (nervous system) and in our relationships. Interpersonal relationships that are attuned promote the growth of integrative fibers in the brain. It is these regulatory fibers that enable the embodied brain to function well and for the mind to have a deep sense of coherence and well-being. Such a state also creates the possibility of a sense of being connected to a larger world. The natural outcome of integration is compassion, kindness, and resilience.” (…)

“Everything we experience, memory or emotion or thought, is part of a process, not a place in the brain! Energy is the capacity to do stuff. There’s nothing that’s not energy, even ‘mass.’ Remember E = mc²? Information is literally a swirl of energy in a certain pattern that has a symbolic meaning; it stands for something other than itself. Information should be a verb; mind, too—as in minding or informationing. And the mind is an embodied and relational emergent process that regulates the flow of energy and information.”

“We can be both an ‘I’ and part of an ‘us’”

[Siegel]: “Certain neurons can fire when someone communicates with you. They dissolve the border between you and others. These mirror neurons are a hardwired system designed for us to see the mind-state of another person. That means we can learn easily to dance, but also to empathize with another. They automatically and spontaneously pick up information about the intentions and feelings of those around us, creating emotional resonance and behavioral imitation as they connect our internal state with those around us, even without the participation of our conscious mind.” And in Mindsight: “Mirror neurons are the antennae that pick up information about the intentions and feelings of others.… Right hemisphere signals (are those) the mirror neuron system uses to simulate the other within ourselves and to construct a neural map of our interdependent sense of a ‘self.’ It’s how we can be both an ‘I’ and part of an ‘us.’” (…)

So how can we re-shape our brain to become more open and receptive to others? We already know the brain receives input from the senses and gives it meaning, he points out. That’s how blind people find ways to take in information and map out their world. According to Siegel, they do this on secondary pathways rather than the main highways of the brain. That’s a major key to how we can bring about change: “You can take an adult brain in whatever state it’s in and change a person’s life by creating new pathways,” he affirms. “Since the cortex is extremely adaptable and many parts of the brain are plastic, we can unmask dormant pathways we don’t much use and develop them. A neural stem cell is a blob, an undifferentiated cell in the brain that divides into two every twenty-four hours. In eight–ten weeks, it will become established as a specialized neural cell and exist as a part of an interconnected network. How we learn has everything to do with linking wide areas of the brain with each other.”

He calls the prefrontal cortex “the portal through which interpersonal relations are established.” He demonstrates, by closing his hand over his thumb, how this tiny piece of us (the last joint of the two middle fingers) is especially important because it touches all three major parts of our brain: the cortex, limbic area, and brainstem, as well as the body proper. “It’s the middle prefrontal fibers which map out the internal states of others,” he adds. “And they do this not only within one brain, mine, but also between two brains, mine and yours, and even among many brains. The brain is exquisitely social, and emotions are its fundamental language. Through them we become integrated and develop an emergent resonance with the internal state of the other.” (…)

“Relationship is key,” he emphasizes. “When we work with relationship, we work with brain structure. Relationship stimulates us and is essential in our development. People rarely mention relationship in brain studies, but it provides vital input to the brain. Every form of psychotherapy that works, works because it creates healthier brain function and structure.… In approaching our lives, we can ask where we experience the chaos or rigidity that reveals where integration is impaired. We can then use the focus of our attention to integrate both our brain and our relationships. Ultimately we can learn to be open in an authentic way to others, and to ourselves. The outcome of such an integrative presence is not only a sense of deep well-being and compassion for ourselves and others, but also an opening of the doors of awareness to a sense of the interdependence of everything. ‘We’ are indeed a part of an interconnected whole.”

— Patty de Llosa, author, The Neurobiology of “We”, Parabola Magazine, 2011, on Daniel Siegel, Clinical Professor of Psychiatry at the UCLA School of Medicine and co-director of the Mindful Awareness Research Center. (Illustration source)

Jun
20th
Mon
permalink

The Argumentative Theory: ‘Reason evolved to win arguments, not seek truth’


"For centuries thinkers have assumed that the uniquely human capacity for reasoning has existed to let people reach beyond mere perception and reflex in the search for truth. Rationality allowed a solitary thinker to blaze a path to philosophical, moral and scientific enlightenment.

Now some researchers are suggesting that reason evolved for a completely different purpose: to win arguments. Rationality, by this yardstick (and irrationality too, but we’ll get to that) is nothing more or less than a servant of the hard-wired compulsion to triumph in the debating arena. According to this view, bias, lack of logic and other supposed flaws that pollute the stream of reason are instead social adaptations that enable one group to persuade (and defeat) another. (…)

The idea, labeled the argumentative theory of reasoning, is the brainchild of French cognitive social scientists, and it has stirred excited discussion (and appalled dissent) among philosophers, political scientists, educators and psychologists, some of whom say it offers profound insight into the way people think and behave. The journal Behavioral and Brain Sciences devoted its April issue to debates over the theory, with participants challenging everything from the definition of reason to the origins of verbal communication.

“Reasoning doesn’t have this function of helping us to get better beliefs and make better decisions,” said Hugo Mercier, who is a co-author of the journal article, with Dan Sperber. “It was a purely social phenomenon. It evolved to help us convince others and to be careful when others try to convince us.” Truth and accuracy were beside the point.

Indeed, Mr. Sperber, a member of the Jean-Nicod research institute in Paris, first developed a version of the theory in 2000 to explain why evolution did not make the manifold flaws in reasoning go the way of the prehensile tail and the four-legged stride. Looking at a large body of psychological research, Mr. Sperber wanted to figure out why people persisted in picking out evidence that supported their views and ignored the rest — what is known as confirmation bias — leading them to hold on to a belief doggedly in the face of overwhelming contrary evidence.

Other scholars have previously argued that reasoning and irrationality are both products of evolution. But they usually assume that the purpose of reasoning is to help an individual arrive at the truth, and that irrationality is a kink in that process, a sort of mental myopia. Gary F. Marcus, for example, a psychology professor at New York University and the author of “Kluge: The Haphazard Construction of the Human Mind,” says distortions in reasoning are unintended side effects of blind evolution. They are a result of the way that the brain, a Rube Goldberg mental contraption, processes memory. People are more likely to remember items they are familiar with, like their own beliefs, rather than those of others.

What is revolutionary about the argumentative theory is that it presumes that, since reason has a different purpose — to win over an opposing group — flawed reasoning is an adaptation in itself, useful for bolstering debating skills.

Mr. Mercier, a post-doctoral fellow at the University of Pennsylvania, contends that attempts to rid people of biases have failed because reasoning does exactly what it is supposed to do: help win an argument.

“People have been trying to reform something that works perfectly well,” he said, “as if they had decided that hands were made for walking and that everybody should be taught that.”

Think of the American judicial system, in which the prosecutors and defense lawyers each have a mission to construct the strongest possible argument. The belief is that this process will reveal the truth, just as the best idea will triumph in what John Stuart Mill called the “marketplace of ideas.” (…)

Patricia Cohen, writer, journalist, Reason Seen More as Weapon Than Path to Truth, The New York Times, June 14, 2011.

"Imagine, at some point in the past, two of our ancestors who can’t reason. They can’t argue with one another. And basically as soon as they disagree with one another, they’re stuck. They can’t try to convince one another. They are bound to keep not cooperating, for instance, because they can’t find a way to agree with each other. And that’s where reasoning becomes important.
                                 
We know that in the evolutionary history of our species, people collaborated a lot. They collaborated to hunt, they collaborated to gather food, and they collaborated to raise kids. And in order to be able to collaborate effectively, you have to communicate a lot. You have to tell other people what you want them to do, and you have to tell them how you feel about different things.
                                 
But then once people start to communicate, a host of new problems arise. The main problem posed by communication in an evolutionary context is that of deceiving interlocutors. When I am talking to you, if you accept everything I say then it’s going to be fairly easy for me to manipulate you into doing things that you shouldn’t be doing. And as a result, people have a whole suite of mechanisms that are called epistemic vigilance, which they use to evaluate what other people tell them.
                                 
If you tell me something that disagrees with what I already believe, my first reaction is going to be to reject what you’re telling me, because otherwise I could be vulnerable. But then you have a problem. If you tell me something that I disagree with, and I just reject your opinion, then maybe actually you were right and maybe I was wrong, and you have to find a way to convince me. This is where reasoning kicks in. You have an incentive to convince me, so you’re going to start using reasons, and I’m going to have to evaluate these reasons. That’s why we think reasoning evolved. (…)

We predicted that reasoning would work rather poorly when people reason on their own, and that is the case. We predicted that people would reason better when they reason in groups of people who disagree, and that is the case. We predicted that reasoning would have a confirmation bias, and that is the case. (…)

The starting point of our theory was this contrast between all the results showing that reasoning doesn’t work so well and the assumption that reasoning is supposed to help us make better decisions. But this assumption was not based on any evolutionary thinking; it was just an intuition, probably cultural: in the West, people think that reasoning is a great thing. (…)

What’s important to keep in mind is that “reasoning” is used here in a very technical sense. Not only laymen but also philosophers and psychologists sometimes use “reasoning” in an overly broad way, in which reasoning can mean basically anything you do with your mind.

By contrast, the way we use the term “reasoning” is very specific. We’re only referring to what reasoning is supposed to mean in the first place: actually processing reasons. Most of the decisions we make, most of the inferences we make, we make without processing reasons. (…) When you’re shopping for cereal at the supermarket, you just grab a box, not because you’ve reasoned through all the alternatives, but because it’s the one you always buy. You’re doing the same thing as always, and there is no reasoning involved in that decision. (…)

It’s only when you’re considering reasons, reasons to do something, reasons to believe, that you’re reasoning. If you’re just coming up with ideas without reasons for these ideas, then you’re using your intuitions.”

The Argumentative Theory. A Conversation with Hugo Mercier, Edge, April 27, 2011
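
A toy simulation may make the dynamic Mercier describes concrete. The sketch below is an illustration of the idea, not code from the researchers, and every parameter in it is made up: a solitary reasoner who searches only for arguments that confirm an initial hunch stays at chance, while a group whose members start from opposed hunches, and therefore pool and evaluate arguments for both sides, does far better.

    import random

    def make_arguments(truth, n=10, quality=0.7):
        # Each available argument points to the true answer with probability `quality`.
        return [truth if random.random() < quality else 1 - truth for _ in range(n)]

    def solo_reasoner(arguments, hunch):
        # Confirmation bias: search only for arguments that support the hunch.
        supporting = [a for a in arguments if a == hunch]
        # Some congenial argument is almost always found, so the hunch survives;
        # the belief is revised only in the rare case of zero support.
        return hunch if supporting else 1 - hunch

    def group_discussion(arguments):
        # Members with opposed hunches advance arguments for both sides; the pooled
        # arguments are evaluated impartially and the better-supported answer wins.
        ones = sum(arguments)
        return 1 if ones > len(arguments) - ones else 0

    def simulate(trials=10_000):
        solo = group = 0
        for _ in range(trials):
            truth = random.randint(0, 1)
            args = make_arguments(truth)
            hunch = random.randint(0, 1)  # the solitary reasoner's starting belief
            solo += solo_reasoner(args, hunch) == truth
            group += group_discussion(args) == truth
        print(f"solo accuracy:  {solo / trials:.2f}")   # ≈ 0.50, chance level
        print(f"group accuracy: {group / trials:.2f}")  # ≈ 0.90, well above chance

    simulate()

The group’s advantage comes entirely from pooling arguments for both sides and evaluating them impartially, the same mechanism behind the finding quoted below that “truth wins” in groups whose members share an interest in the right answer.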

"Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis.

Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. (…) p. 1

Some of the evidence reviewed here shows not only that reasoning falls short of delivering rational beliefs and rational decisions reliably, but also that, in a variety of cases, it may even be detrimental to rationality. Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or their actions. The argumentative theory, however, puts such well-known demonstrations of “irrationality” in a novel perspective. Human reasoning is not a profoundly flawed general mechanism; it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels. (…)

People are good at assessing arguments and are quite able to do so in an unbiased way, provided they have no particular axe to grind. In group reasoning experiments where participants share an interest in discovering the right answer, it has been shown that truth wins. (…) p. 58

What makes [Sherlock] Holmes such a fascinating character is precisely his preternatural turn of mind operating in a world rigged by Conan Doyle, where what should be inductive problems in fact have deductive solutions. More realistically, individuals may develop some limited ability to distance themselves from their own opinion, to consider alternatives and thereby become more objective. Presumably this is what the 10% or so of people who pass the standard Wason selection task do. But this is an acquired skill and involves exercising some imperfect control over a natural disposition that spontaneously pulls in a different direction. (…)” p. 60

Hugo Mercier, postdoc in the Philosophy, Politics and Economics program at the University of Pennsylvania, and Dan Sperber, French social and cognitive scientist, Why do humans reason? Arguments for an argumentative theory (pdf), Behavioral and Brain Sciences, Cambridge University Press, 2011. (Illustration source)
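
For readers who don’t know the Wason selection task mentioned in the excerpt above, here is a minimal sketch of its logic, using the classic vowel/even-number version (the card values are the standard textbook ones, not drawn from the paper). Four cards show A, K, 4 and 7; each has a letter on one side and a number on the other, and the rule to test is “if a card has a vowel on one side, it has an even number on the other.” Most people pick A and 4, the confirming cards; the falsification check below shows why A and 7 are the right choices:

    VOWELS = set("AEIOU")

    def must_turn(face):
        # A card can falsify "vowel -> even" only if its hidden side
        # might complete a vowel/odd pairing.
        if face.isdigit():
            return int(face) % 2 == 1  # odd number: the hidden letter might be a vowel
        return face in VOWELS          # vowel: the hidden number might be odd

    cards = ["A", "K", "4", "7"]
    print([c for c in cards if must_turn(c)])  # -> ['A', '7']

The 4 can never falsify the rule, whatever letter is on its back, which makes it exactly the confirming-but-uninformative choice that the confirmation bias favors.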

See also:

☞ Dan Sperber, Hugo Mercier, Reasoning as a Social Competence (pdf), in Collective Wisdom, H. Landemore and J. Elster (Eds.)
☞ Hugo Mercier, On the Universality of Argumentative Reasoning, Journal of Cognition and Culture, Vol. 11, pp. 85–113, 2011