Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Apr 29th, Sun

The time machine in our mind. The imagistic mental machinery that allows us to travel through time

“Our ability to close our eyes and imagine the pleasures of Super Bowl Sunday or remember the excesses of New Year’s Eve is a fairly recent evolutionary development, and our talent for doing this is unparalleled in the animal kingdom. We are a race of time travelers, unfettered by chronology and capable of visiting the future or revisiting the past whenever we wish. If our neural time machines are damaged by illness, age or accident, we may become trapped in the present. (…)

Why did evolution design our brains to go wandering in time? Perhaps it’s because an experience is a terrible thing to waste. Moving around in the world exposes organisms to danger, so as a rule they should have as few experiences as possible and learn as much from each as they can. (…)

Time travel allows us to pay for an experience once and then have it again and again at no additional charge, learning new lessons with each repetition. When we are busy having experiences—herding children, signing checks, battling traffic—the dark network is silent, but as soon as those experiences are over, the network is awakened, and we begin moving across the landscape of our history to see what we can learn—for free.

Animals learn by trial and error, and the smarter they are, the fewer trials they need. Traveling backward buys us many trials for the price of one, but traveling forward allows us to dispense with trials entirely. Just as pilots practice flying in flight simulators, the rest of us practice living in life simulators, and our ability to simulate future courses of action and preview their consequences enables us to learn from mistakes without making them.

We don’t need to bake a liver cupcake to find out that it is a stunningly bad idea; simply imagining it is punishment enough. The same is true for insulting the boss and misplacing the children. We may not heed the warnings that prospection provides, but at least we aren’t surprised when we wake up with a hangover or when our waists and our inseams swap sizes. (…)

Perhaps the most startling fact about the dark network isn’t what it does but how often it does it. Neuroscientists refer to it as the brain’s default mode, which is to say that we spend more of our time away from the present than in it. People typically overestimate how often they are in the moment because they rarely take notice when they take leave. It is only when the environment demands our attention—a dog barks, a child cries, a telephone rings—that our mental time machines switch themselves off and deposit us with a bump in the here and now. We stay just long enough to take a message and then we slip off again to the land of Elsewhen, our dark networks awash in light.”

Daniel Gilbert, Professor of Psychology at Harvard University, Essay: The Brain: Time Travel in the Brain, TIME, Jan. 29, 2007. (Illustration for TIME by Jeffery Fischer).

Kurt Stocker: The time machine in our mind (2012)

Abstract:

"This article provides the first comprehensive conceptual account for the imagistic mental machinery that allows us to travel through time—for the time machine in our mind. It is argued that language reveals this imagistic machine and how we use it. Findings from a range of cognitive fields are theoretically unified and a recent proposal about spatialized mental time travel is elaborated on. The following novel distinctions are offered: external vs. internal viewing of time; “watching” time vs. projective “travel” through time; optional vs. obligatory mental time travel; mental time travel into anteriority or posteriority vs. mental time travel into the past or future; single mental time travel vs. nested dual mental time travel; mental time travel in episodic memory vs. mental time travel in semantic memory; and “seeing” vs. “sensing” mental imagery. Theoretical, empirical, and applied implications are discussed.”

"The theoretical strategy I adopt is to use language as an entree to a conceptual level that seems deeper than language itself (Pinker, 2007; Talmy, 2000). The logic of this strategy is in accordance with recent findings that many conceptualizations observed in language have also been found to exist in mental representations that are more basic than language itself. (…)

It is proposed that this strategy helps to uncover an imagistic mental machinery that allows us to travel through time—that this strategy helps us to uncover the time machine in our mind.

A central term used in this article is “the imagery structuring of time.” By this I refer to an invisible spatial scaffolding in our mental imagery across which temporal material can be splayed, the existence of which will be proposed in this article. At times it will be quite natural to assume that a space-to-time mapping in the sense of conceptual metaphor theory is involved in the structuring of this invisible scaffolding. (…)

It is thus for the present investigation more coherent to assume that mental time is basically constructed out of “spatialized” mental imagery—“spatialized” is another central term that I use in this article. I use it in the sense that it is neutral as to whether some of the imagery might be transferred via space-to-time mappings or whether some of the imagery might relate to space-to-time mappings only in an etymological sense. An example of temporal constructions that are readily characterized in terms of spatialized temporal imagery structuring are the conceptualizations underlying the use of before and after, conceptualizations that are often treated as having autonomous temporal status and as relating only etymologically to space.

The current investigation can refine this view somewhat, by postulating that spatialized temporal structures still play a very vital role in the imagery structuring underlying before and after. (…)

The theoretical strategy, to use linguistic expressions about time as an entree to conceptual structures about time that seem deeper than language itself, has been applied quite fruitfully, since it has allowed for the development of a rather comprehensive and precise conceptual account of the time machine in our mind. The theory is not an ad-hoc theory, since linguistic conceptualizations cannot be interpreted in a totally arbitrary way—for example language does not allow us to assume that a sentence such as I shopped at the store before I went home means that first the going home took place and then the shopping. In this respect the theory is to some degree already a data-guided theory, since linguistic expressions are data. However, the proposal of the theory that language has helped us to uncover a specific system of spatialized imagery structuring of time can only be evaluated by carrying out corresponding psychological (cognitive and neurocognitive) experiments and some ideas for such experiments have been presented. Since the time machine in our mind is a deeply fascinating apparatus, I am confident that theoretical and empirical investigations will continue to explore it.”

— Kurt Stocker, The time machine in our mind (pdf), Institute of Cognitive and Brain Sciences, University of California, Berkeley, CA, USA, 2012

See also:

☞ T. Suddendorf, D. Rose Addis and M. C. Corballis, Mental time travel and the shaping of the human mind (pdf), The Royal Society, 2009.

Abstract: “Episodic memory, enabling conscious recollection of past episodes, can be distinguished from semantic memory, which stores enduring facts about the world. Episodic memory shares a core neural network with the simulation of future episodes, enabling mental time travel into both the past and the future. The notion that there might be something distinctly human about mental time travel has provoked ingenious attempts to demonstrate episodic memory or future simulation in nonhuman animals, but we argue that they have not yet established a capacity comparable to the human faculty. The evolution of the capacity to simulate possible future events, based on episodic memory, enhanced fitness by enabling action in preparation of different possible scenarios that increased present or future survival and reproduction chances. Human language may have evolved in the first instance for the sharing of past and planned future events, and, indeed, fictional ones, further enhancing fitness in social settings.”

☞ George Lakoff, Mark Johnson, Conceptual Metaphor in Everyday Language (pdf), The Journal of Philosophy, Vol 77, 1980.
Our sense of time is deeply entangled with memory
Time tag on Lapidarium notes

Mar 21st, Wed

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe

Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED magazine said,

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness”.. Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences."  Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone.

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change, and was part of evolution’s epic quest for self-awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness as well, through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.

         

I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of evolution of information technology, the very character of our minds shifts. McLuhan's Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on "Tune in, Turn on, Drop Out" and very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so… Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language and music etc. on our consciousness, and thereby offer an awareness of our ability to affect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes” is that it is often highly reductionistic - it often forgets that ideas have an ecology too, they must be “cultured.” Here I would argue, drawing on Lawrence Lessig’s work on the commons, that the “brain” is a necessary but insufficient “spawning” ground for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann’s 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins’s definition of a meme is quite powerful:

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind. [The replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.”

This notion that “the vector of transmission is language” is very compelling. It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change. Kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy that a mind feels upon having a transcendent realization that feels as if it advances evolution:

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as “pattern integrities”, Ray Kurzweil says we are “patterns of information”. James Gleick’s new book, The Information, says that “information may be more primary than matter”. What do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely?

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” - you can’t predict the outcome of even many trivial programs without running the program. So to say that “information may be more primary than matter” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. AND the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self transforming information” is just as accurate of a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that ”What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions, […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as abstraction and take it out of its context – like a quotation or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, dwindling the resources it takes to store or process it. But “Information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I actually think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore the beginning of the “informatic” tradition of biology that Dawkins speaks in here, knew this very well and insisted on the importance of first person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together: Again, John 1:1 is closer to Dawkins’s position here than he may be comfortable with: “In the Beginning was the word, and the word was god, and the word was with god” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that they are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… What is the connection? What is the pattern that connects here?

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it - can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we don’t know therefore that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2400 year old organism that is mostly invisible to a very short lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of the plant planet as well as the “mycelium” archetype. One powerful aspect of treating the mycelium as our archetype as humanity is that it is “distributed” - it does not operate via a center of control but through cross connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger scale reality of our dense ecological interconnection. In the book I point to the Upanishads’ “Tat Tvam Asi” as a way of comprehending how we can both be totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality. These notions echo some of Terence McKenna’s ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality?

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological story teller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language!

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insights I mentioned above that writing changes how we think, and therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels! Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media - run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the arctic ice sheet – remember that? - to a putatively civilian political structure Eisenhower rightly warned us about: the military industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. In my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity, I called these “rhetorical softwares.”

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” - subjective experience – is all we ever experience, so if we change it, obviously we will see a change in what we label “external” reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A mobius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a mobius strip, the “inside” becomes “outside” and the “outside” becomes “inside.”

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps, root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” - our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are iPods, computers and the internet related to LSD?

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” - the mindset and context of their use - I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry?

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who won the Nobel prize with James Watson and Maurice Wilkins for determining the structure of DNA in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype - suggest that used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the linguistic feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” - recipes of code - while the mainstream view was implicitly semantic, thinking of it as “words with meaning.”

iPods, Internet, etc.: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis and I was working on the book, and it was really confirmation of a lot of what I was seeing indicated by my conceptual model of what is going on, which is as follows: Sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” - tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware), and Mark Pesce were or are all psychonauts.

Q: Cyborg Anthropologist Amber Case has written about techno-social wormholes: the instant compression of time and space created every time we make a telephone call… What do you make of this compression of time and space made possible by the engineering “magic” of technology?

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception”.) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn it off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall - and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that enable such collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically-induced lucid dream where we can live forever and our world is built out of our memories and “sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.” This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgeon more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting that “Tat Tvam Asi” - there is an aspect of you that is connected to Everything - and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that “to understand is to perceive patterns”… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns… They help us “see more”. What exactly are they helping us understand?

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in Silicon Valley says that users…

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke's description of ego death. This is by definition an experience when our maps of the world are humbled. In the breakdown of our ordinary worldview - such as when a (now formerly)  secular being such as myself finds himself  feeling unmistakably sacred - we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get even an involuntary glimpse of reality. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or Koans - suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality - reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!”. Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the big bang - he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of the livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it. "..

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.”…  

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stops and starts. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence. Living systems, according to an emerging paradigm of Stanley Salthe, Rod Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy, and the universe is seeking to dissipate ever greater amounts of entropy.
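[Editor's note: Kelly's per-gram comparison can be roughly sanity-checked in the spirit of what Eric Chaisson calls "energy rate density": the Sun radiates an enormous total power, but spread over its enormous mass the flow per gram is tiny, whereas metabolizing tissue runs far more power per gram. A minimal sketch in Python, using rough standard figures that are assumptions of this note rather than data from the interview, with a resting human standing in for Kelly's sunflower:]

# Back-of-envelope check of "energy flow per gram per second".
# The constants below are rough, commonly cited estimates (assumptions,
# not numbers from the interview or from Kelly's talk).

SUN_LUMINOSITY_W = 3.8e26    # total power radiated by the Sun, in watts
SUN_MASS_KG = 2.0e30         # mass of the Sun, in kilograms

HUMAN_METABOLISM_W = 100.0   # resting human metabolic rate, in watts (approx.)
HUMAN_MASS_KG = 70.0         # adult human mass, in kilograms (approx.)

def power_density_w_per_kg(power_w: float, mass_kg: float) -> float:
    """Power flowing through a system per unit mass (W/kg)."""
    return power_w / mass_kg

sun = power_density_w_per_kg(SUN_LUMINOSITY_W, SUN_MASS_KG)        # ~2e-4 W/kg
human = power_density_w_per_kg(HUMAN_METABOLISM_W, HUMAN_MASS_KG)  # ~1.4 W/kg

print(f"Sun:   {sun:.1e} W/kg")
print(f"Human: {human:.1e} W/kg")
print(f"Living tissue moves roughly {human / sun:.0f} times more power per gram than the Sun.")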

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, consciousness is the organized effort to model reality that yields ever increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “We are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and I need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity. Kurzweil says that this makes us “very important”:

“…It turns out that we are central, after all. Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names if one suggests otherwise - animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate along with the dependence of many of us – we who are more bacterial than human by dry weight - upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world – the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English, Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Mar
18th
Sun
permalink

Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man

                

"We’re fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list. (…)

At the top of the list of things we do that we’re supposed to be doing, and that are at the core of what it is to be human rather than some other sort of animal, are language and music. Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we’re brilliantly capable at absorbing them, and from a young age. That’s how we know we’re meant to be doing them, i.e., how we know we evolved brains for engaging in language and music.

But what if this gets language and music all wrong? What if we’re not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species? (…)

I believe that language and music are, indeed, not part of our core—that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved—culturally evolved over millennia—for us. Our brains aren’t shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains.

But how on Earth can one argue for such a view? If language and music have shaped themselves to be good for non-linguistic and amusical brains, then what would their shapes have to be?

They’d have to possess the auditory structure of…nature. That is, we have auditory systems which have evolved to be brilliantly capable at processing the sounds from nature, and language and music would need to mimic those sorts of sounds in order to harness—to “nature-harness,” as I call it—our brain.

And language and music do nature-harness. (…) The two most important classes of auditory stimuli for humans are (i) events among objects (most commonly solid objects), and (ii) events among humans (i.e., human behavior). And, in my research I have shown that the signature sounds in these two auditory domains drive the sounds we humans use in (i) speech and (ii) music, respectively.

For example, the principal source of modulation of pitch in the natural world comes from the Doppler shift, where objects moving toward you have a high pitch and objects moving away have a low pitch; from these pitch modulations a listener can hear an object’s direction of movement relative to his or her position. In the book I provide a battery of converging evidence that melody in music has culturally evolved to sound like the (often exaggerations of) Doppler shifts of a person moving in one’s midst. Consider first that a mover’s pitch will modulate within a fixed range, the top and bottom pitches occurring when the mover is headed, respectively, toward and away from you. Do melodies confine themselves to fixed ranges? They tend to, and tessitura is the musical term to refer to this range. In the book I run through a variety of specific predictions.

Here’s one. If melody is “trying” to sound like the Doppler shifts of a mover—and thereby convey to the auditory system the trajectory of a fictional mover—then a faster mover will have a greater difference between its top and bottom pitch. Does faster music tend to have a wider tessitura? That is, does music with a faster tempo—more beats, or footsteps, per second—tend to have a wider tessitura? Notice that the performer of faster tempo music would ideally like the tessitura to narrow, not widen! But what we found is that, indeed, music having a greater tempo tends to have a wider tessitura, just what one would expect if the meaning of melody is the direction of a mover in your midst.
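
A back-of-the-envelope sketch of the physics behind this prediction (the calculation is mine, not Changizi’s; the speed of sound and the walking and running speeds are illustrative assumptions): the Doppler-shifted pitch of a mover is highest when the mover heads straight toward the listener and lowest when it heads straight away, and the ratio between the two grows with the mover’s speed.

# Toy Doppler calculation (illustrative assumptions, not data from the book).
# Observed frequency: f_obs = f_src * c / (c - v_radial), where v_radial is
# the component of the mover's velocity toward the listener.

C = 343.0  # approximate speed of sound in air, m/s

def pitch_range_ratio(speed_mps, base_freq=200.0):
    """Ratio of highest to lowest Doppler-shifted pitch for a mover
    passing the listener at the given speed."""
    f_toward = base_freq * C / (C - speed_mps)  # heading straight at you
    f_away = base_freq * C / (C + speed_mps)    # heading straight away
    return f_toward / f_away

for speed in (1.5, 3.0, 6.0):  # stroll, brisk walk, run (assumed, in m/s)
    print(f"{speed:3.1f} m/s -> top/bottom pitch ratio {pitch_range_ratio(speed):.3f}")

At these speeds the raw ratios amount to only a fraction of a semitone, which is consistent with the passage’s point that melody works with exaggerations of the Doppler pattern; what matters here is simply that the faster the real or fictional mover, the wider the pitch range.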

The preliminary conclusion of the research is that human speech sounds like solid-object events, and music sounds like human behavior!

That’s just what we expect if we were never meant to do language and music. Language and music have the fingerprints of being unnatural (i.e., of not having their origins via natural selection)…and the giveaway is, ironically, that their shapes are natural (i.e., have the structure of natural auditory events).

We also find this for another core capability that we know we’re not “meant” to do: reading. Writing was invented much too recently for us to have specialized reading mechanisms in the brain (although there are new hints of early writing as old as 30,000 years), and yet reading has the hallmarks of instinct. As I have argued in my research and in my second book, The Vision Revolution, writing slides so well into our brain because it got shaped by cultural evolution to look “like nature,” and, specifically, to have the signature contour-combinations found in natural scenes (which consist mostly of opaque objects strewn about).

My research suggests that language and music aren’t any more part of our biological identity than reading is. Counterintuitively, then, we aren’t “supposed” to be speaking and listening to music. They aren’t part of our “core” after all.

Or, at least, they aren’t part of the core of Homo sapiens as the species originally appeared. But, it seems reasonable to insist that, whether or not language and music are part of our natural biological history, they are indeed at the core of what we take to be centrally human now. Being human today is quite a different thing than being the original Homo sapiens.

So, what is it to be human? Unlike Homo sapiens, we’re grown in a radically different petri dish. Our habitat is filled with cultural artifacts—the two heavyweights being language and music—designed to harness our brains’ ancient capabilities and transform them into new ones.

Humans are more than Homo sapiens. Humans are Homo sapiens who have been nature-harnessed into an altogether novel creature, one designed in part via natural selection, but also in part via cultural evolution.

Mark Changizi, an evolutionary neurobiologist, Are We “Meant” to Have Language and Music?, Discover Magazine, March 15th, 2012. (Illustration: Harnessed)

See also:

Mark Changizi, Music Sounds Like Moving People, Science 2.0, Jan 10, 2010.
☞ Mark Changizi, How To Put Art And Brain Together
☞ Mark Changizi, How we read
Mark Changizi on brain’s perception of the world
A brief history of writing, Lapidarium notes
Mark Changizi on Humans, Version 3.0.

Jan
21st
Sat
permalink

'Human beings are learning machines,' says philosopher (nature vs. nurture)

                       

"The point is that in scientific writing (…) suggest a very inflexible view of human nature, that we are determined by our biology. From my perspective the most interesting thing about the human species is our plasticity, our flexibility. (…)

It is striking in general that human beings mistake the cultural for the natural; you see it in many domains. Take moral values. We assume we have moral instincts: we just know that certain things are right and certain things are wrong. When we encounter people whose values differ from ours we think they must be corrupted or in some sense morally deformed. But this is clearly an instance where we mistake our deeply inculcated preferences for natural law. (…)

Q: At what point with morality does biology stop and culture begin?

One important innate contribution to morality is emotions. An aggressive response to an attack is not learned, it is biological. The question is how emotions that are designed to protect each of us as individuals get extended into generalised rules that spread within a group. One factor may be imitation. Human beings are great imitative learners. Rules that spread in a family can be calibrated across a whole village, leading to conformity in the group and a genuine system of morality.

Nativists will say that morality can emerge without instruction. But with innate domains, there isn’t much need for instruction, whereas in the moral domain, instruction is extensive. Kids learn through incessant correction. Between the ages of 2 and 10, parents correct their children’s behaviour every 8 minutes or so of waking life. In due course, our little monsters become little angels, more or less. This gives us reason to think morality is learned.

Q: One of the strongest arguments for innateness comes from linguists such as Noam Chomsky, who argue that humans are born with the basic rules of grammar already in place. But you disagree with them?

Chomsky singularly deserves credit for giving rise to the new cognitive sciences of the mind. He was instrumental in helping us think about the mind as a kind of machine. He has made some very compelling arguments to explain why everybody with an intact brain speaks grammatically even though children are not explicitly taught the rules of grammar.

But over the past 10 years we have started to see powerful evidence that children might learn language statistically, by unconsciously tabulating patterns in the sentences they hear and using these to generalise to new cases. Children might learn language effortlessly not because they possess innate grammatical rules, but because statistical learning is something we all do incessantly and automatically. The brain is designed to pick up on patterns of all kinds.
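
A minimal sketch of what “tabulating patterns” can mean in practice (a toy of my own, not the studies Prinz is referring to): if syllable-to-syllable transition probabilities are tracked over a continuous stream, transitions inside words come out high and transitions across word boundaries come out low, which is enough to begin segmenting words without any explicit rules.

import random
from collections import Counter

random.seed(0)
words = ["bida", "kupa", "dogo"]                     # assumed mini-lexicon
stream = [random.choice(words) for _ in range(500)]  # unsegmented exposure
syllables = [w[i:i + 2] for w in stream for i in (0, 2)]

pairs = Counter(zip(syllables, syllables[1:]))
firsts = Counter(syllables[:-1])

def transition_prob(a, b):
    """Estimated probability that syllable b immediately follows syllable a."""
    return pairs[(a, b)] / firsts[a]

print(transition_prob("bi", "da"))  # within a word: about 1.0
print(transition_prob("da", "ku"))  # across a word boundary: about 1/3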

Q: How hard has it been to put this alternative view on the table, given how Chomskyan thought has dominated the debate in recent years?

Chomsky’s views about language are so deeply ingrained among academics that those who take statistical learning seriously are subject to a kind of ridicule. There is very little tolerance for dissent. This has been somewhat limiting, but there is a new generation of linguists who are taking the alternative very seriously, and it will probably become a very dominant position in the next generation.

Q: You describe yourself as an “unabashed empiricist” who favours nurture over nature. How did you come to this position, given that on many issues the evidence is still not definitive either way?

Actually I think the debate has been settled. You only have to stroll down the street to see that human beings are learning machines. Sure, for any given capacity the debate over biology versus culture will take time to resolve. But if you compare us with other species, our degree of variation is just so extraordinary and so obvious that we know prior to doing any science that human beings are special in this regard, and that a tremendous amount of what we do is as a result of learning. So empiricism should be the default position. The rest is just working out the details of how all this learning takes place.

Q: What are the implications of an empirical understanding of human nature for the way we go about our lives? How should it affect the way we behave?

In general, we need to cultivate a respect for difference. We need to appreciate that people with different values to us are not simply evil or ignorant, and that just like us they are products of socialisation. This should lead to an increase in international understanding and respect. We also need to understand that group differences in performance are not necessarily biologically fixed. For example, when we see women performing less well than men in mathematics, we should not assume that this is because of a difference in biology.

Q: How much has cognitive science contributed to our understanding of what it is to be human, traditionally a philosophical question?

Cognitive science is in the business of settling long-running philosophical debates on human nature, innate knowledge and other issues. The fact that these theories have been churning about for a couple of millennia without any consensus is evidence that philosophical methods are better at posing questions than answering them. Philosophy tells us what is possible, and science tells us what is true.

Cognitive science has transformed philosophy. At the beginning of the 20th century, philosophers changed their methodology quite dramatically by adopting logic. There has been an equally important revolution in 21st-century philosophy in that philosophers are turning to the empirical sciences and to some extent conducting experimental work themselves to settle old questions. As a philosopher, I hardly go a week without conducting an experiment.

My whole working day has changed because of the infusion of science.”

Jesse Prinz is a distinguished professor of philosophy at the City University of New York, specialising in the philosophy of psychology. He is a pioneer in experimental philosophy, using findings from the cognitive sciences, anthropology and other fields to develop empiricist theories of how the mind works. He is the author of The Emotional Construction of Morals (Oxford University Press, 2007), Gut Reactions (OUP, 2004), Furnishing the Mind (MIT Press, 2002) and Beyond Human Nature: How culture and experience make us who we are. 'Human beings are learning machines,' says philosopher, New Scientist, Jan 20, 2012. (Illustration: Fritz Kahn, British Library)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate

Jan
19th
Thu
permalink

Cognitive scientists develop new take on old problem: why human language has so many words with multiple meanings

           

“Why did language evolve? While the answer might seem obvious — as a way for individuals to exchange information — linguists and other students of communication have debated this question for years. Many prominent linguists, including MIT’s Noam Chomsky, have argued that language is, in fact, poorly designed for communication. Such a use, they say, is merely a byproduct of a system that probably evolved for other reasons — perhaps for structuring our own private thoughts.

As evidence, these linguists point to the existence of ambiguity: In a system optimized for conveying information between a speaker and a listener, they argue, each word would have just one meaning, eliminating any chance of confusion or misunderstanding. Now, a group of MIT cognitive scientists has turned this idea on its head. In a new theory, they claim that ambiguity actually makes language more efficient, by allowing for the reuse of short, efficient sounds that listeners can easily disambiguate with the help of context.

“Various people have said that ambiguity is a problem for communication,” says Ted Gibson, an MIT professor of cognitive science and senior author of a paper describing the research to appear in the journal Cognition. “But once we understand that context disambiguates, then ambiguity is not a problem — it’s something you can take advantage of, because you can reuse easy [words] in different contexts over and over again.” (…)

What do you ‘mean’?

For a somewhat ironic example of ambiguity, consider the word “mean.” It can mean, of course, to indicate or signify, but it can also refer to an intention or purpose (“I meant to go to the store”); something offensive or nasty; or the mathematical average of a set of numbers. Adding an ‘s’ introduces even more potential definitions: an instrument or method (“a means to an end”), or financial resources (“to live within one’s means”).

But virtually no speaker of English gets confused when he or she hears the word “mean.” That’s because the different senses of the word occur in such different contexts as to allow listeners to infer its meaning nearly automatically.

Given the disambiguating power of context, the researchers hypothesized that languages might harness ambiguity to reuse words — most likely, the easiest words for language processing systems. Building on observation and previous studies, they posited that words with fewer syllables, high frequency and the simplest pronunciations should have the most meanings.

To test this prediction, Piantadosi, Tily and Gibson carried out corpus studies of English, Dutch and German. (In linguistics, a corpus is a large body of samples of language as it is used naturally, which can be used to search for word frequencies or patterns.) By comparing certain properties of words to their numbers of meanings, the researchers confirmed their suspicion that shorter, more frequent words, as well as those that conform to the language’s typical sound patterns, are most likely to be ambiguous — trends that were statistically significant in all three languages.
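
A rough, small-scale stand-in for the kind of corpus check described above (this is not the paper’s method or data: WordNet synset counts are a crude proxy for dictionary senses, and word length stands in for syllable count and phonotactic simplicity):

# Do shorter words tend to have more senses? Requires nltk and scipy,
# plus nltk.download("wordnet"); synset counts approximate "number of meanings".
from nltk.corpus import wordnet as wn
from scipy.stats import spearmanr

words = ["set", "run", "mean", "light", "bank",
         "photosynthesis", "serendipity", "circumference", "hippopotamus"]

lengths = [len(w) for w in words]
senses = [len(wn.synsets(w)) for w in words]

for w, l, s in zip(words, lengths, senses):
    print(f"{w:16s} length={l:2d} senses={s}")

rho, _ = spearmanr(lengths, senses)
print(f"Spearman rho = {rho:.2f}  (negative: shorter words carry more senses)")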

To understand why ambiguity makes a language more efficient rather than less so, think about the competing desires of the speaker and the listener. The speaker is interested in conveying as much as possible with the fewest possible words, while the listener is aiming to get a complete and specific understanding of what the speaker is trying to say. But as the researchers write, it is “cognitively cheaper” to have the listener infer certain things from the context than to have the speaker spend time on longer and more complicated utterances. The result is a system that skews toward ambiguity, reusing the “easiest” words. Once context is considered, it’s clear that “ambiguity is actually something you would want in the communication system,” Piantadosi says.
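
The trade-off can also be put in elementary information-theoretic terms (the numbers below are invented for illustration and are not from the paper): if context is informative about the intended sense, the listener’s residual uncertainty H(M | C) is much smaller than the context-free uncertainty H(M), so short forms can safely be reused across meanings.

from math import log2

# Assumed joint distribution P(context, meaning) for an ambiguous word.
joint = {
    ("math class", "average"): 0.45,
    ("math class", "signify"): 0.05,
    ("conversation", "signify"): 0.40,
    ("conversation", "unkind"): 0.10,
}

def entropy(dist):
    """Shannon entropy in bits of a dict of probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

p_meaning, p_context = {}, {}
for (c, m), p in joint.items():
    p_meaning[m] = p_meaning.get(m, 0.0) + p
    p_context[c] = p_context.get(c, 0.0) + p

# H(M | C) = sum over contexts of P(c) * H(M | C = c)
h_m_given_c = sum(
    pc * entropy({m: p / pc for (c2, m), p in joint.items() if c2 == c})
    for c, pc in p_context.items()
)

print(f"H(M)     = {entropy(p_meaning):.2f} bits")  # uncertainty without context
print(f"H(M | C) = {h_m_given_c:.2f} bits")         # what is left for the word to resolve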

      

Tom Wasow, a professor of linguistics and philosophy at Stanford University, calls the paper “important and insightful.”

“You would expect that since languages are constantly changing, they would evolve to get rid of ambiguity,” Wasow says. “But if you look at natural languages, they are massively ambiguous: Words have multiple meanings, there are multiple ways to parse strings of words. This paper presents a really rigorous argument as to why that kind of ambiguity is actually functional for communicative purposes, rather than dysfunctional.”

Implications for computer science

The researchers say the statistical nature of their paper reflects a trend in the field of linguistics, which is coming to rely more heavily on information theory and quantitative methods.

“The influence of computer science in linguistics right now is very high,” Gibson says, adding that natural language processing (NLP) is a major goal of those operating at the intersection of the two fields.

Piantadosi points out that ambiguity in natural language poses immense challenges for NLP developers. “Ambiguity is only good for us [as humans] because we have these really sophisticated cognitive mechanisms for disambiguating,” he says. “It’s really difficult to work out the details of what those are, or even some sort of approximation that you could get a computer to use.”

But, as Gibson says, computer scientists have long been aware of this problem. The new study provides a better theoretical and evolutionary explanation of why ambiguity exists, but the same message holds: “Basically, if you have any human language in your input or output, you are stuck with needing context to disambiguate,” he says.”

Emily Finn, The advantage of ambiguity, MIT news, Jan 19, 2012. (Illustration source: 1, 2)

See also:

☞ S. T. Piantadosi, H. Tily, E. Gibson, The communicative function of ambiguity in language (pdf), Department of Brain and Cognitive Sciences, MIT

"We present a general information-theoretic argument that all efficient communication systems will be ambiguous, assuming that context is informative about meaning. We also argue that ambiguity additionally allows for greater ease of processing by allowing efficient linguistic units to be re-used. We test predictions of this theory in English, German, and Dutch. Our results and theoretical analysis suggest that ambiguity is a functional property of language that allows for greater communicative efficiency. (…)

Our results argue for a rational explanation of ambiguity and demonstrate that ambiguity is not mysterious when language is considered as a cognitive system designed in part for communication.”

☞ B. Juba, A. Tauman Kalai, S. Khanna, M. Sudan, Compression without a common prior: an information-theoretic justification for ambiguity in language (pdf), Harvard University, MIT

"Compression is a fundamental goal of both human language and digital communication, yet natural language is very different from compression schemes employed by modern computers. We partly explain this difference using the fact that information theory generally assumes a common prior probability distribution shared by the encoder and decoder, whereas human communication has to be robust to the fact that a speaker and listener may have different prior beliefs about what a speaker may say. We model this information-theoretically using the following question: what type of compression scheme would be effective when the encoder and decoder have (boundedly) different prior probability distributions. The resulting compression scheme resembles natural language to a far greater extent than existing digital communication protocols. We also use information theory to justify why ambiguity is necessary for the purpose of compression."

Language tag on Lapidarium notes

Jan
6th
Fri
permalink

Why Do Languages Die? Urbanization, the state and the rise of nationalism

       

"The history of the world’s languages is largely a story of loss and decline. At around 8000 BC, linguists estimate that upwards of 20,000 languages may have been in existence. Today the number stands at 6,909 and is declining rapidly. By 2100, it is quite realistic to expect that half of these languages will be gone, their last speakers dead, their words perhaps recorded in a dusty archive somewhere, but more likely undocumented entirely. (…)

The problem with globalization in the latter sense is that it is the result, not a cause, of language decline. (…) It is only when the state adopts a trade language as official and, in a fit of linguistic nationalism, foists it upon its citizens, that trade languages become “killer languages.” (…)

Most importantly, what both of the above answers overlook is that speaking a global language or a language of trade does not necessitate the abandonment of one’s mother tongue. The average person on this planet speaks three or four languages. (…)

The truth is, most people don’t “give up” the languages they learn in their youth. (…) To wipe out a language, one has to enter the home and prevent the parents from speaking their native language to their children.

Given such a preposterous scenario, we return to our question — how could this possibly happen?

One good answer is urbanization. If a Gikuyu and a Giryama meet in Nairobi, they won’t likely speak each other’s mother tongue, but they very likely will speak one or both of the trade languages in Kenya — Swahili and English. Their kids may learn a smattering of words in the heritage languages from their parents, but by the third generation any vestiges of those languages in the family will likely be gone. In other cases, extremely rural communities are drawn to the relatively easier lifestyle in cities, until sometimes entire villages are abandoned. Nor is this a recent phenomenon.

The first case of massive language die-off was probably during the Agrarian (Neolithic) Revolution, when humanity first adopted farming, abandoned the nomadic lifestyle, and created permanent settlements. As the size of these communities grew, so did the language they spoke. But throughout most of history, and still in many areas of the world today, 500 or fewer speakers per language has been the norm. Like the people who spoke them, these languages were constantly in flux. No language could grow very large, because the community that spoke it could only grow so large itself before it fragmented. The language followed suit, soon becoming two languages. Permanent settlements changed all this, and soon larger and larger populations could stably speak the same language. (…)

"In primitive times every migration causes not only geographical but also intellectual separation of clans and tribes. Economic exchanges do not yet exist; there is no contact that could work against differentiation and the rise of new customs. The dialect of each tribe becomes more and more different from the one that its ancestors spoke when they were still living together. The splintering of dialects goes on without interruption. The descendants no longer understand one other.… A need for unification in language then arises from two sides. The beginnings of trade make understanding necessary between members of different tribes. But this need is satisfied when individual middlemen in trade achieve the necessary command of language.”

Ludwig von Mises, Nation, State, and Economy (Online edition, 1919; 1983), Ludwig von Mises Institute, p. 46–47.

Thus urbanization is an important factor in language death. To be sure, the wondrous features of cities that draw immigrants — greater economies of scale, decreased search costs, increased division of labor — are all made possible with capitalism, and so in this sense languages may die for economic reasons. But this is precisely the type of language death that shouldn’t concern us (unless you’re a linguist like me), because urbanization is really nothing more than the demonstrated preferences of millions of people who wish to take advantage of all the fantastic benefits that cities have to offer.

In short, these people make the conscious choice to leave an environment where network effects and sociological benefits exist for speaking their native language, and exchange it for a greater range of economic possibilities, but where no such social benefits for speaking the language exist. If this were the only cause of language death — or even just the biggest one — then there would be little more to say about it. (…)

Far too many well-intentioned individuals are too quick to substitute their valuations for those of the last speakers of indigenous languages this way. Were it up to them, these speakers would be resigned to misery and poverty and deprived of participation in the world’s advanced economies in order that their language might be passed on. To be sure, these speakers themselves often fall victim to the mistaken ideology that one language necessarily displaces or interferes with another.

Although the South African Department of Education is trying to develop teaching materials in the local African languages, for example, many parents are pushing back; they want their children taught only in English. In Dominica, the parents go even further and refuse to even speak the local language, Patwa, to their children.[1] Were they made aware of the falsity of this notion of language displacement, perhaps they would be less quick to stop speaking their language to their children. But the decision is ultimately theirs to make, and theirs alone.

Urbanization, however, is not the only cause of language death. There is another that, I’m sad to say, almost none of the linguists who work on endangered languages give much thought to, and that is the state. The state is the only entity capable of reaching into the home and forcibly altering the process of language socialization in an institutionalized way.

How? The traditional method was simply to kill or remove indigenous and minority populations, as was done as recently as 1923 in the United States in the last conflict of the Indian War. More recently this happens through indirect means — whether intentional or otherwise — the primary method of which has been compulsory state schooling.

There is no more pernicious assault on the cultural practices of minority populations than a standardized, Anglified, Englicized compulsory education. It is not just that children are forcibly removed from the socialization process in the home, required to speak an official language and punished (often corporally) for doing otherwise. It is not just that schools redefine success, away from those things valued by the community, and towards those things that make someone a better citizen of the state. No, the most significant impact of compulsory state education is that it ingrains in children the idea that their language and their culture is worthless, of no use in the modern classroom or society, and that it is something that merely serves to set them apart negatively from their peers, as an object of their vicious torment.

But these languages clearly do have value, if for no other reason than simply because people value them. Local and minority languages are valued by their speakers for all sorts of reasons, whether it be for use in the local community, communicating with one’s elders, a sense of heritage, the oral and literary traditions of that language, or something else entirely. Again, the praxeologist is not in a position to evaluate these beliefs. The praxeologist merely notes that free choice in language use and free choice in association, one not dictated by the edicts of the state, will best satisfy the demand of individuals, whether for minority languages or lingua francas. What people find useful, they will use.

By contrast, the state values none of these things. For the state, the goal is to bind individuals to itself, to an imagined homogeneous community of good citizens, rather than their local community. National ties trump local ones in the eyes of the state. Free choice in association is disregarded entirely. And so the state forces many indigenous people to become members of a foreign community, where they are a minority and their language is scorned, as in the case of boarding schools. Whereas at home, mastering the native language is an important part of functioning in the community and earning prestige, and thus something of value, at school it becomes a black mark and a detriment. Given the prisonlike way schools are run, and how they exhibit similar intense (and sometimes dangerous) pressures from one’s peers, minority-language-speaking children would be smart to disassociate themselves as quickly as possible from their cultural heritage.

Mises himself, though sometimes falling prey to common fallacies regarding language like linguistic determinism and ethnolinguistic isomorphism, was aware of this distinction between natural language decline and language death brought on by the state. (…)

This is precisely what the Bureau of Indian Affairs accomplished by coercing indigenous children into attending boarding schools. Those children were cut off from their culture and language — their nation — until they had effectively assimilated American ideologies regarding minority languages, namely, that English is good and all else is bad.

Nor is this the only way the state affects language. The very existence of a modern nation-state, and the ideology it encompasses, is antithetical to linguistic diversity. It is predicated on the idea of one state, one nation, one people. In Nation, State, and Economy, Mises points out that, prior to the rise of nationalism in the 17th and 18th centuries, the concept of a nation did not refer to a political unit like state or country as we think of it today.

A “nation” instead referred to a collection of individuals who share a common history, religion, cultural customs and — most importantly — language. Mises even went so far as to claim that “the essence of nationality lies in language.”[2] The “state” was a thing apart, referring to the nobility or princely state, not a community of people (hence Louis XIV’s famous quip, “L’état c’est moi.”).[3] In that era, a state might consist of many nations, and a nation might subsume many states.

The rise of nationalism changed all this. As Robert Lane Greene points out in his excellent book, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity,

The old blurry linguistic borders became inconvenient for nationalists. To build nations strong enough to win themselves a state, the people of a would-be nation needed to be welded together with a clear sense of community. Speaking a minority dialect or refusing to assimilate to a standard wouldn’t do.[4]

Mises himself elaborated on this point. Despite his belief in the value of a liberal democracy, which would remain with him for the rest of his life, Mises realized early on that the imposition of democracy over multiple nations could only lead to hegemony and assimilation:

In polyglot territories, therefore, the introduction of a democratic constitution does not mean the same thing at all as introduction of democratic autonomy. Majority rule signifies something quite different here than in nationally uniform territories; here, for a part of the people, it is not popular rule but foreign rule. If national minorities oppose democratic arrangements, if, according to circumstances, they prefer princely absolutism, an authoritarian regime, or an oligarchic constitution, they do so because they well know that democracy means the same thing for them as subjugation under the rule of others.[5]

From the ideology of nationalism was also born the principle of irredentism, the policy of incorporating historically or ethnically related peoples into the larger umbrella of a single state, regardless of their linguistic differences. As Greene points out, for example,

By one estimate, just 2 or 3 percent of newly minted “Italians” spoke Italian at home when Italy was unified in the 1860s. Some Italian dialects were as different from one another as modern Italian is from modern Spanish.[6]

This in turn prompted the Italian statesman Massimo d’Azeglio (1798–1866) to say, “We have created Italy. Now we need to create Italians.” And so these Italian languages soon became yet another casualty of the nation-state.

Mises once presciently predicted that,

If [minority nations] do not want to remain politically without influence, then they must adapt their political thinking to that of their environment; they must give up their special national characteristics and their language.[7]

This is largely the story of the world’s languages. It is, as we have seen, the history of the state, a story of nationalistic furor, and of assimilation by force. Only when we abandon this socialist and utopian fantasy of one state, one nation, one people will this story begin to change.”

Danny Hieber is a linguist working to document and revitalize the world’s endangered languages, Why Do Languages Die?, Ludwig von Mises Institute, Jan 04, 2012. (Illustration: The Evolution of the Armenian Alphabet)

[1] Amy L. Paugh, Playing With Languages: Children and Change in a Caribbean Village (2012), Berghahn Books.
[2] Ludwig von Mises, Human Action: A Treatise on Economics (Scholar’s Edition, 2010) Auburn, AL: Ludwig von Mises Institute, p.37.
[3] “I am the state.”
[4] Robert Lane Greene, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity (Kindle Edition, 2011), Delacorte Press, p. 132.
[5] Mises, Nation, State, and Economy, p. 77.
[6] Greene, You Are What You Speak, p. 141.
[7] Mises, Nation, State, and Economy, p. 77.

“Isn’t language loss a good thing, because fewer languages mean easier communication among the world’s people? Perhaps, but it’s a bad thing in other respects. Languages differ in structure and vocabulary, in how they express causation and feelings and personal responsibility, hence in how they shape our thoughts. There’s no single purpose “best” language; instead, different languages are better suited for different purposes.

For instance, it may not have been an accident that Plato and Aristotle wrote in Greek, while Kant wrote in German. The grammatical particles of those two languages, plus their ease in forming compound words, may have helped make them the preeminent languages of western philosophy.

Another example, familiar to all of us who studied Latin, is that highly inflected languages (ones in which word endings suffice to indicate sentence structure) can use variations of word order to convey nuances impossible with English. Our English word order is severely constrained by having to serve as the main clue to sentence structure. If English becomes a world language, that won’t be because English was necessarily the best language for diplomacy.”

— Jared Diamond, American scientist and author, currently Professor of Geography and Physiology at UCLA, The Third Chimpanzee: The Evolution & Future of the Human Animal, Hutchinson Radius, 1991.

See also:

Lists of endangered languages, Wiki
☞ Salikoko S. Mufwene, How Languages Die (pdf), University of Chicago, 2006
☞ K. David Harrison, When Languages Die. The Extinction of the World’s Languages and the Erosion of Human Knowledge (pdf), Oxford University Press, 2007

"It is commonly agreed by linguists and anthropologists that the majority of languages spoken now around the globe will likely disappear within our lifetime. The phenomenon known as language death has started to accelerate as the world has grown smaller. "This extinction of languages, and the knowledge therein, has no parallel in human history. K. David Harrison’s book is the first to focus on the essential question, what is lost when a language dies? What forms of knowledge are embedded in a language’s structure and vocabulary? And how harmful is it to humanity that such knowledge is lost forever?"

Nicholas Ostler on The Last Lingua Franca. English Until the Return of Babel, Lapidarium notes
☞ Henry Hitchings, What’s the language of the future?, Salon, Nov 6, 2011.

Dec
27th
Tue
permalink

Do thoughts have a language of their own? The language of thought hypothesis

            
The language of thought, drawing by Robert Horvitz

"We dissect nature along lines laid down by our native languages. The categories and types that we isolate from the world of phenomena we do not find there because they stare the observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds-and this means largely by the linguistic systems of our minds.”

Benjamin Lee Whorf, American linguist (1897-1941), 1956, p. 213, cited in Does language determine thought? Boroditsky’s (2001) research on Chinese speakers’ conception of time (pdf)

"The mind thinks its thoughts in ‘Mentalese,’ codes them in the localnatural language, and then transmits them (say, by speaking them out loud) to the hearer. The hearer has a Cryptographer in his head too, of course, who thereupon proceeds to decode the ‘message.’ In this picture, natural language, far from being essential to thought, is merely a vehicle for the communication of thought.”

Hilary Putnam, American philosopher, mathematician and computer scientist, Representation and reality, A Bradford Book, 1991, p. 10-11.

"According to one school of philosophy, our thoughts have a language-like structure that is independent of natural language: this is what students of language call the language of thought (LOT) hypothesis. According to the LOT hypothesis, it is because human thoughts already have a linguistic structure that the emergence of common, natural languages was possible in the first place. (…)

Many - perhaps most - psychologists end up concluding that ordinary people do not use the rules of logic in everyday life.

There is an alternative way of seeing this: that there is a language of thought, and that it has a more logical form than ordinary natural language. This view has an added bonus: it tells us that, if you want to express yourself more clearly and more effectively in natural language, then you should express yourself in a form that is closer to computational logic - and therefore closer to the language of thought. Dry legalese never looked so good.”

Robert Kowalski, British logician and computer scientist, Do thoughts have a language of their own?, New Scientist, 8 Dec 2011
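
To make the suggestion concrete (a toy restatement of my own, not an example from Kowalski’s article), here is an instruction-notice sentence recast as the kind of explicit if-then rules computational logic favours, with a naive forward-chaining step that draws the obvious conclusion.

# Hypothetical notice: "Alert the driver by pressing the alarm button in an emergency."
# Recast as conditional rules of the form: IF all conditions hold THEN conclusion.
rules = [
    ({"there is an emergency"}, "press the alarm button"),
    ({"press the alarm button"}, "the driver is alerted"),
]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose conditions are already established facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"there is an emergency"}, rules))
# -> the emergency, the button press, and the alerted driver all follow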

"In philosophy of mind, the language of thought hypothesis (LOTH) put forward by American philosopher Jerry Fodor describes thoughts as represented in a “language” (sometimes known as mentalese) that allows complex thoughts to be built up by combining simpler thoughts in various ways. In its most basic form the theory states that thought follows the same rules as language: thought has syntax.

Using empirical data drawn from linguistics and cognitive science to describe mental representation from a philosophical vantage-point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only ‘remotely plausible’ when expressed as a system of representations that is “tokened” by a linguistic or semantic structure and operated upon by means of a combinatorial syntax. Linguistic tokens used in mental language describe elementary concepts which are operated upon by logical rules establishing causal connections to allow for complex thought. Syntax as well as semantics have a causal effect on the properties of this system of mental representations.

These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate. (…)

Some philosophers have argued that our public language is our mental language, that a person who speaks English thinks in English. Others contend that people who do not know a public language (e.g. babies, aphasics) can think, and that therefore some form of mentalese must be present innately. (…)

Tim Crane, in his book The Mechanical Mind, states that, while he agrees with Fodor, his reason is very different. A logical objection challenges LOTH’s explanation of how sentences in natural languages get their meaning. That is the view that “Snow is white” is TRUE if and only if P is TRUE in the LOT, where P means the same thing in LOT as “Snow is white” means in the natural language. Any symbol manipulation is in need of some way of deriving what those symbols mean. If the meaning of sentences is explained in terms of sentences in the LOT, then the meaning of sentences in LOT must get their meaning from somewhere else. There seems to be an infinite regress of sentences getting their meaning. Sentences in natural languages get their meaning from their users (speakers, writers).  Therefore sentences in mentalese must get their meaning from the way in which they are used by thinkers and so on ad infinitum. This regress is often called the homunculus regress.

Daniel Dennett accepts that homunculi may be explained by other homunculi and denies that this would yield an infinite regress of homunculi. Each explanatory homunculus is “stupider” or more basic than the homunculus it explains but this regress is not infinite but bottoms out at a basic level that is so simple that it does not need interpretation. John Searle points out that it still follows that the bottom-level homunculi are manipulating some sorts of symbols.

LOTH implies that the mind has some tacit knowledge of the logical rules of inference and the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning). If LOTH cannot show that the mind knows that it is following the particular set of rules in question then the mind is not computational because it is not governed by computational rules. Also, the apparent incompleteness of this set of rules in explaining behavior is pointed out. Many conscious beings behave in ways that are contrary to the rules of logic. Yet this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not act in accordance with this set of rules.”

Wiki

Inner Speech as a Language

"A definition of language is always, implicitly or explicitly, a definition of human beings in the world."

Raymond Williams, Welsh academic, novelist and critic (1921-1988)

"A set (finite or infinite) of sentences, each finite in length and constructed out of a finite set of elements."

Noam Chomsky, American linguist, philosopher, cognitive scientist

"People often talk silently to themselves, engaging in what is called inner speech, internal conversation, inner dialogue, self talk and so on. This seems to be an inherent characteristic of human beings, commented on as early as Plato, who regarded thought as inner speech. The American pragmatists thought the inner dialogue was the defining feature of the self. For them the self is an internal community or network, communicating within itself in a field of meaning.

The idea that ordinary language is the language of thought however is not the only linguistic theory of thought. Since Saint Augustine there has been the idea that thought is itself a language of pure abstractions. This “mental language” as it was called differs from ordinary language by consisting solely of meanings, i.e. as signifieds without signifiers to use Saussure’s language (Ashworth 2003). This hypothesis peaked in the writings of William of Occam and declined when Hobbes introduced a purely computational, hedonistic theory of thought (Normore 2005).

A second competitor to the ordinary language theory of thought is the “mentalese” hypothesis of Noam Chomsky (1968) and Jerry Fodor (1975). This approach, which sometimes uses the computer as a metaphor for the mind, resembles the Scholastic’s theory in envisioning a purely abstract language of thought. Whatever processes of ordinary language might accompany it are viewed as epiphenomenal, gloss or what might be called “fluff.” Ordinary language, according to this view, is a pale shadow of the actual language of thought. In addition mentalese is regarded as both innate and unconscious. It is a faculty that is claimed to be present at birth and one which operates below the awareness of the mind.

There are then three language of thought hypotheses, the ordinary language or inner speech version, the now marginalized Augustine-Occam mental language and the computer-based, Chomsky-Fodor theory of mentalese. There seem to be no comparisons of the Scholastic and the mentalese theories except in Panaccio (1992, pp. 267–272). However there is a vigorous debate between the ordinary language theory and that of mentalese (for two collections see Carruthers and Boucher 1998 and Preston 1997). A major weak spot of mentalese is that, being unconscious, there is no empirical way of verifying it. The weak spot of the inner speech approach is that there are several examples of non-linguistic thought, e.g. in infants, animals, brain damaged people and ordinary people under conditions of high speed thought.

Still, all three of these language of thought hypotheses are alive and under discussion in contemporary thought. (…) [p. 319]

I will argue that inner speech is even more referential than outer speech in some respects, but also even more differential in other respects. In other words its semantic system is polarized between the differential and the referential.

Considering the peculiarities of inner speech, I think its vocabulary would be more differentially defined, i.e. more “structural”, than outer speech. First let me recall the special qualities of inner speech as silent, elliptical, embedded and egocentric. These qualities make it relatively private, both in the words and their meanings. And these privacy walls push things together, creating links and dependencies among the words.

Let us take the analogy of an intimate relationship, one that has some degree of deviance, with consequent secrecy. The mini culture of the relationship tends, due to secrecy, to be cut off from society at large. This culture gets isolated. There is the relationship time, the place, the transportation, the talk, the rituals, etc. The relationship elements are cut off from the outside world, and they inevitably share in that “relationship” feeling. They also imply each other, causally, sequentially, symbolically, etc. The relationship meanings are defined more differentially than, perhaps, items in a less deviant relationship. It is the privacy that melds things together.

This internal language though is not only solitary and private, it is also much more self styled than outer language. Ordinary language has a smoothed over or idealized version, which Saussure referred to as language or “langue.” And it also has a more stylized, idiosyncratic version. This is its spoken variety, which Saussure referred to as parole or speech. Parole is more heterogeneous than langue, given that the speaking process reflects the unique mentalities of individuals and sub-cultures.

But by the same logic inner speech is even more individualized and heterogeneous than outer speech. Your spoken or outer speech is somewhat different from mine, and both are different from purified or formalized language. But your inner speech, given its elliptical, embedded and egocentric qualities, is even more different from mine, and both are quite different from the outer langue. In other words the gap between outer langue and inner speech is greater than that between outer langue and outer speech.

The peculiarities of inner speech are so stitched into the psyche, so personality-dependent, that they differ considerably from person to person. This does not seem to be primarily a reference-driven variation, for everyone’s inner speech has roughly the same, generic world of reference. The variation in the internal dialogue is largely due to the personal qualities of the speaker, to that person’s particular ego needs and short cuts.

We are little gods in the world of inner speech. We are the only ones, we run the show, we are the boss. This world is almost a little insane, for it lacks the usual social controls, and we can be as bad or as goofy as we want. On the other hand inner speech does have a job to do, it has to steer us through the world. That function sets up outer limits, even though within those limits we have a free rein to construct this language as we like.

There are similarities to the idealist world view in inner speech. The philosophical idealists, especially Berkeley, reduced the outer world to some version of an inner world. They internalized the external, each doing it somewhat differently, as though it were all a dream. For them all speech would be inner, since there is no outer. And since everything would be radiating from the self, everything would be connected via the self.

The Saussurean theory of linguistic differences [pdf], whether Saussure actually held it or not, is very much like idealistic metaphysics. In both cases everything is dangling from the same string. And some kind of self is pulling the string. The late 19th-century British idealists thought all of reality was in relationship, and given that they had only an inner world, they referred to these as “internal relations.”

Saussure used this same phrase, internal relations, to refer to the differences among signifiers and signifieds. And whether he was aligning himself with the idealists or not, there is a similarity between his self-enclosed linguistic world and that of the idealists. It is the denial of reference, of an external world, that underlies this similarity. For Saussure this denial is merely a theoretical move, an “as if ” assumption, and not an assertion about the real world. The idealists said there actually was no external world, and Saussure said he would pretend, for methodological reasons, that there was no external world. But regardless of how they get there, they end up in the same place.

If there is no reference, no external world, then the only way language can be defined is internally, by a system of differences. Saussure’s purely differential theory of meaning follows from the loss of the referential. But if there is an external world, even for inner speech, then we are back to the dualistic semantic theory, i.e. to some sort of balance between referential and differential streams.

Although inner speech is not idealism, in some ways it seems to be a more differentially defined universe than outer speech. Linguistic context is even more important than in outer speech. One reason is that meaning is so condensed on the two axes. But a second is that inner language is so pervaded with emotion. We censor our emotions in ordinary interpersonal speech, hiding our fear, our shame, our jealousy, our gloating. It takes a while for little children to learn this, but when they grow up they are all, men and women alike, pretty good at it. Inner speech is another matter, for it is brutally honest. And its emotional life is anything goes. We can scream, whoop and holler to ourselves. Or we can sob on a wailing wall. In fact we probably emote more in inner speech to compensate for the restrictions on outer speech. Emotions pervade large stretches of inner speech, and they heighten the importance of internal relations.

The determinants of meaning in inner speech seem much more stark and unarguable than in outer speech. Inner speech is enclosed within us, and this seems to make it a more dense set of internal relations, both because of the intense privacy and the more spontaneous emotions. In these respects inner speech gives a rich example of Saussure’s differential meaning system.

On the other hand inner speech is also more obviously referential than outer speech. Ordinary speech is quite conventional or arbitrary, and when we say dog or apple pie, the sign has no resemblance to its object. In inner speech, though, the signs are often images of their objects, bearing an iconic or mirroring relation to them. In other words, as mentioned before, there can be a heavy dependency on sensory imagery in forming an internal sentence. (…)

In conclusion Saussure’s theory of semantics works well for some aspects of inner speech and quite poorly for others, i.e. the more referential ones. [signs of external objects, color coordination] (…) On the other hand inner speech is quite different from outer speech, and the Saussurean issues must be handled in special ways. Inner speech is only partially fitting to Saussure’s theories. And new ideas are needed to resolve Saussure’s questions. (…)

Saussure’s binaries were meant to simplify the study of language. The paradigmatic-syntagmatic distinction showed two axes of meaning, and it prepared the way for his differential theory of meaning. The history-systematics distinction was meant to justify the exclusion of history. The speech-language distinction was meant to get rid of speech. And the differential-referential distinction was meant to exclude reference. Saussure’s approach then is largely a pruning device which chopped off many traditional parts of linguistics.

My analysis suggests that this pruning apparatus does not work for inner speech. The two axes are useful but they do not prepare the way for the differential theory of meaning. History cannot be excluded, for it is too important for inner speech. Speech should be restored, and in fact langue applies only weakly to inner speech. And that capstone of Saussure and cultural studies, the differential theory of meaning, does not seem adequate for inner speech. Referential theory is also needed to make sense of its meaning system.

Ethnomethodology

Inner speech then is a distinct variation or dialect of ordinary language, and the characteristics I have pointed out seem to be central to its structure. (…)

Inner speech is quite similar to ethnomethodology in its use of short cuts and normalizing practices. Garfinkel (1967) and Cicourel (1974) discovered ethnomethodology by examining interpersonal or intersubjective communication. A great many economies and condensations of interpersonal conversation are similar to ones we use when we talk to ourselves. If I say to myself “shop on the way home,” this is a condensation of the fairly elaborate shopping list I mentioned earlier, but if I say to my wife “I’ll shop on the way home” she may understand something much like that same, implicit shopping list. In other words we are constantly using “etcetera clauses” to speed up our internal conversations. And, being both communicator and communicatee, we may understand these references even more accurately than we do in social conversations. (…)

The self is also a sort of family gathering with similar problems of maintaining and restoring solidarity. Much inner speech is a kind of Durkheimian self soothing ritual where we try to convince ourselves that everything’s fine, even when it is not. In this way we can comfort ourselves when we are frightened, restore some pride when we are ashamed, or find a silver lining when we are disappointed. Such expressions as “you can do it,” “you’re doing great,” and “this looks harder than it is” give us confidence and energy when the going is tough.

In sum inner speech helps one see the importance of ethnomethods. The fact that we engage in these practices in our deepest privacy shows they are rooted in our psychology as well as in our social life. And the fact that they run parallel in intra- and inter-subjective communication shows them to be a feature of communication as such.

Privacy

In philosophy Wittgenstein provoked a widespread and complex discussion of private language. By this he meant a language that is not only de facto but also inherently private. No one but the private language user would be able to fully understand it, even if the meanings were publicly available. To constitute a private language such a tongue would not need to be completely private. If only a single word or sentence were inherently private, it would qualify as a private language in Wittgenstein’s sense.

It seems to me inner speech is clearly a private language, at least in some of its utterances. This language is so rooted in the unique self that an eavesdropper, could there be one, would not fully understand it. It has so much of one’s person in it, a listener would have to be another you to follow it. And if someone invented a window into consciousness, a mind-reading machine that could invade one’s privacy, would they be able to understand the now revealed inner speech? I think not. They might be able to understand most of the words, but the non-linguistic or imagistic elements would be too much a personal script to follow. If this eavesdropper watched you, including your consciousness, for your whole life, had access to your memory and knew your way of combining non-linguistic representations with words, they might have your code, but this is another way of saying they would be another you. In practical terms inner speech would be inaccessible in its meaning even if it were accessible in its signifying forms.

Of course this semantic privacy does not prevent one from describing one’s own inner speech to another, at least to a substantial extent. Something is lost all right in the translation from first to third person representations. When, in footnote 2, I talked about the inner speech cluster I called “Tom,” I obviously left out some of the affect and all of the sensory imagery. But I was still able to communicate the gist of it, in other words to transform first to third person meanings. So even though this is a private language it can to some extent be made public and used for research purposes.

The importance of private language is that it sheds light on what a human being is. We are inherently private animals, and we become more so the more self-aware and internally communicative we are. This zone of privacy may well be the foundation for the moral (and legal) need people have for privacy. In any case the hidden individuality or uniqueness of each human being is closely related to what the person says to himself or herself.

Agency

One of the thorniest problems of the humanities and social sciences is human agency. Humans are the authors of their actions to a great extent, but the way this process works is difficult to understand. I would suggest that inner speech is both the locus and platform for agency.

Charles Sanders Peirce was under the impression that we guide our lives with inner speech. We choose internally in the zone of inner speech, and then we choose externally in the zone of practical action and the outer world. The first choice leads to the second choice. Peirce even thought we could make and break habits by first modelling them in our internal theater. Here we could visualize the performance of a particular action and also choose to perform this action. The visualization and the choice could give the energy for designing and moulding one’s life. (…)

More generally the self directing process, including planning, anticipating, rehearsing, etc. seems to be largely a product of inner speech. This includes both what one will do and how one will do it. Picturing one’s preferred action as the lesser evil or greater good, even if one fudges a bit on the facts, is probably also a powerful way of producing a given action, and possibly even a new habit. (…)

I showed that inner speech does not qualify as a public language, though it has a distinct structural profile as a semi-private language or perhaps as a dialect. This structure suggests the access points or research approaches that this language is amenable to. As examples of how this research might proceed I took a quick look at three issues: ethnomethodology, privacy and agency.”

Norbert Wiley, professor emeritus of Sociology at the University of Illinois at Urbana-Champaign and Visiting Scholar at the University of California, Berkeley, is a prize-winning sociologist who has published on both the history and systematics of theory. To read the full essay click Inner Speech as a Language: A Saussurean Inquiry (pdf), Journal for the Theory of Social Behaviour 36:3, 0021–8308, 2006.

See also:

The Language of Thought Hypothesis, Stanford Encyclopedia of Philosophy
Private language argument, Wiki
Private Language, Stanford Encyclopedia of Philosophy
☞ Jerry A. Fodor, Why there still has to be a language of thought?
Robert Kowalski, British logician and computer scientist, Do thoughts have a language of their own?, New Scientist, 8 Dec 2011
☞ Jerry A. Fodor, The Language of Thought, Harvard University Press, 1975
☞ Ned Block, The Mind as the Software of the Brain, New York University 
Louise M. Antony, What are you thinking? Character and content in the language of thought (pdf)
Ansgar Beckermann, Can there be a language of thought? (pdf) In G. White, B. Smith & R. Casati (eds.), Philosophy and the Cognitive Sciences. Proceedings of the 16th International Wittgenstein Symposium. Hölder-Pichler-Tempsky.
Edouard Machery, You don’t know how you think: Introspection and language of thought, British Journal for the Philosophy of Science 56 (3): 469-485, (2005)
☞ Christopher Bartel, Musical Thought and Compositionality (pdf), King’s College London
Psycholinguistics/Language and Thought, Wikiversity
MindPapers: The Language of Thought - A Bibliography of the Philosophy of Mind and the Science of Consciousness, links Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University

Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language, Lapidarium notes
The time machine in our mind. The imagistic mental machinery that allows us to travel through time, Lapidarium notes

Dec
17th
Sat
permalink

Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers


A review of some big events

"Obviously one of the big events in our history was the origin of our planet, about 4.5 billion years ago. And what’s fascinating is that about 3.8 billion years ago, only about seven or eight hundred million years after the origin of our planet, life arose. That life was simple replicators, things that could make copies of themselves. And we think that life was a little bit like the bacteria we see on earth today. It would be the ancestors of the bacteria we see on earth today.

That life ruled the world for 2 billion years, and then about 1.5 billion years ago, a new kind of life emerged. These were the eukaryotic cells. They were a little bit different kind of cell from bacteria. And actually the kind of cells we are made of. And again, these organisms that were eukaryotes were single-celled, so even 1.5 billion years ago, we still just had single-celled organisms on earth. But it was a new kind of life.

It was another 500 million years before we had anything like a multicellular organism, and it was another 500 million years after that before we had anything really very interesting. So, about 500 million years ago, the plants and the animals started to evolve. And I think everybody would agree that this was a major event in the history of the world, because, for the first time, we had complex organisms.

After about 500 million years ago, things like the plants evolved, the fish evolved, lizards and snakes, dinosaurs, birds, and eventually mammals. And then it was really just six or seven million years ago, within the mammals, that the lineage that we now call the hominins arose. And they would be direct ancestors of us. And then, within that lineage that arose about six or seven million years ago, it was only about 200,000 years ago that humans finally evolved.

Idea of idea evolution

And so, this is really just 99.99 percent of the way through the history of this planet, humans finally arose. But in that 0.01 percent of life on earth, we’ve utterly changed the planet. And the reason is that, with the arrival of humans 200,000 years ago, a new kind of evolution was created. The old genetical evolution that had ruled for 3.8 billion years now had a competitor, and that new kind of evolution was ideas.

It was a true form of evolution, because now ideas could arise, and they could jump from mind to mind, without genes having to change. So, populations of humans could adapt at the level of ideas. Ideas could accumulate. We call this cumulative cultural adaptation. And so, cultural complexity could emerge and arise orders and orders of magnitude faster than genetic evolution.

Now, I think most of us take that utterly for granted, but it has completely rewritten the way life evolves on this planet because, with the arrival of our species, everything changed. Now, a single species, using its idea evolution, which could proceed apace independently of genes, was able to adapt to nearly every environment on earth, and spread around the world where no other species had done that. All other species are limited to places on earth that their genes adapt them to. But we were able to adapt at the level of our cultures to every place on earth. (…)

If we go back in our lineage 2 million years or so, there was a species known as Homo erectus. Homo erectus is an upright ape that lived on the African savannah. It could make tools, but they were very limited tools, and those tools, the archaeological record tells us, didn’t change for about 1.5 million years. That is, until about the time they went extinct. That is, they made the same tools over and over and over again, without any real changes to them.

If we move forward in time a little bit, it’s not even clear that our very close cousins, the Neanderthals, who we know are related to us 99.5 or 99.6 percent in the sequences of their genes, had what we call idea evolution. Sure enough, the tools that they made were more complex than Homo erectus’ tools. But over the 300,000 or so years that they spent in Europe, their toolkit barely changed. So there’s very little evolution going on.

So there’s something really very special about this new species, humans, that arose and invented this new kind of evolution, based on ideas. And so it’s useful for us to ask, what is it about humans that distinguishes them? It must have been a tiny genetic difference between us and the Neanderthals because, as I said, we’re so closely related to them genetically, a tiny genetic difference that had a vast cultural potential.

That difference is something that anthropologists and archaeologists call social learning. It’s a very difficult concept to define, but when we talk about it, all of us humans know what it means. And it seems to be the case that only humans have the capacity to learn complex new or novel behaviors, simply by watching and imitating others. And there seems to be a second component to it, which is that we seem to be able to get inside the minds of other people who are doing things in front of us, and understand why it is they’re doing those things. These two things together, we call social learning.

Many people respond that, oh, of course the other animals can do social learning, because we know that the chimpanzees can imitate each other, and we see all sorts of learning in animals like dolphins and the other monkeys, and so on. But the key point about social learning is that this minor difference between us and the other species forms an unbridgeable gap between us and them. Because, whereas all of the other animals can pick up the odd behavior by having their attention called to something, only humans seem to be able to select, among a range of alternatives, the best one, and then to build on that alternative, and to adapt it, and to improve upon it. And so, our cultures cumulatively adapt, whereas all other animals seem to do the same thing over and over and over again.

Even though other animals can learn, and they can even learn in social situations, only humans seem to be able to put these things together and do real social learning. And that has led to this idea evolution. What’s a tiny difference between us genetically has opened up an unbridgeable gap, because only humans have been able to achieve this cumulative cultural adaptation. (…)

I’m interested in this because I think this capacity for social learning, which we associate with our intelligence, has actually sculpted us in ways that we would have never anticipated. And I want to talk about two of those ways that I think it has sculpted us. One of the ways has to do with our creativity, and the other has to do with the nature of our intelligence as social animals.

One of the first things to be aware of when talking about social learning is that it plays the same role within our societies, acting on ideas, as natural selection plays within populations of genes. Natural selection is a way of sorting among a range of genetic alternatives, and finding the best one. Social learning is a way of sifting among a range of alternative options or ideas, and choosing the best one of those. And so, we see a direct comparison between social learning driving idea evolution, by selecting the best ideas —we copy people that we think are successful, we copy good ideas, and we try to improve upon them — and natural selection, driving genetic evolution within societies, or within populations.

I think this analogy needs to be taken very seriously, because just as natural selection has acted on genetic populations, and sculpted them, we’ll see how social learning has acted on human populations and sculpted them.

What do I mean by “sculpted them”? Well, I mean that it’s changed the way we are. And here’s one reason why. If we think that humans have evolved as social learners, we might be surprised to find out that being social learners has made us less intelligent than we might like to think we are. And here’s the reason why.

If I’m living in a population of people, and I can observe those people, and see what they’re doing, seeing what innovations they’re coming up with, I can choose among the best of those ideas, without having to go through the process of innovation myself. So, for example, if I’m trying to make a better spear, I really have no idea how to make that better spear. But if I notice that somebody else in my society has made a very good spear, I can simply copy him without having to understand why.

What this means is that social learning may have set up a situation in humans where, over the last 200,000 years or so, we have been selected to be very, very good at copying other people, rather than innovating on our own. We like to think we’re a highly inventive, innovative species. But social learning means that most of us can make use of what other people do, and not have to invest the time and energy in innovation ourselves.

Now, why wouldn’t we want to do that? Why wouldn’t we want to innovate on our own? Well, innovation is difficult. It takes time. It takes energy. Most of the things we try to do, we get wrong. And so, if we can survey, if we can sift among a range of alternatives of people in our population, and choose the best one that’s going at any particular moment, we don’t have to pay the costs of innovation, the time and energy ourselves. And so, we may have had strong selection in our past to be followers, to be copiers, rather than innovators.

This gives us a whole new slant on what it means to be human, and I think, in many ways, it might fit with some things that we realize are true about ourselves when we really look inside ourselves. We can all think of things that have made a difference in the history of life. The first hand axe, the first spear, the first bow and arrow, and so on. And we can ask ourselves, how many of us have had an idea that would have changed humanity? And I think most of us would say, well, that sets the bar rather high. I haven’t had an idea that would change humanity. So let’s lower the bar a little bit and say, how many of us have had an idea that maybe just influenced others around us, something that others would want to copy? And I think even then, very few of us can say there have been very many things we’ve invented that others would want to copy.

This says to us that social evolution may have sculpted us not to be innovators and creators as much as to be copiers, because this extremely efficient process that social learning allows us to do, of sifting among a range of alternatives, means that most of us can get by drawing on the inventions of others.

The formation of social groups

Now, why do I talk about this? It sounds like it could be a somewhat dry subject, that maybe most of us are copiers or followers rather than innovators. And what we want to do is imagine that our history over the last 200,000 years has been a history of slowly and slowly and slowly living in larger and larger and larger groups.

Early on in our history, it’s thought that most of us lived in bands of maybe five to 25 people, and that bands formed bands of bands that we might call tribes. And maybe tribes were 150 people or so on. And then tribes gave way to chiefdoms that might have been thousands of people. And chiefdoms eventually gave way to nation-states that might have been tens of thousands or even hundreds of thousands, or millions, of people. And so, our evolutionary history has been one of living in larger and larger and larger social groups.

What I want to suggest is that that evolutionary history will have selected for less and less and less innovation in individuals, because a little bit of innovation goes a long way. If we imagine that there’s some small probability that someone is a creator or an innovator, and the rest of us are followers, we can see that one or two people in a band is enough for the rest of us to copy, and so we can get on fine. And, because social learning is so efficient and so rapid, we don’t need all to be innovators. We can copy the best innovations, and all of us benefit from those.

But now let’s move to a slightly larger social group. Do we need more innovators in a larger social group? Well, no. The answer is, we probably don’t. We probably don’t need as many as we need in a band. Because in a small band, we need a few innovators to get by. We have to have enough new ideas coming along. But in a larger group, a small number of people will do. We don’t have to scale it up. We don’t have to have 50 innovators where we had five in the band, if we move up to a tribe. We can still get by with those three or four or five innovators, because all of us in that larger social group can take advantage of their innovations.
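
A toy simulation can make this concrete. The sketch below is my own illustration, not Pagel’s model: the group sizes, the number of innovators and the quality scale are all arbitrary assumptions. A few innovators tinker at random with the best idea currently in circulation, everyone else simply copies that best idea, and the average quality of ideas in the group ends up depending hardly at all on how large the group is.

import random

def run(group_size, n_innovators, generations=200, seed=0):
    # Toy model: innovators mutate the current best idea at random;
    # everyone else copies the best idea visible in the group.
    rng = random.Random(seed)
    ideas = [0.0] * group_size              # quality of each member's current idea
    for _ in range(generations):
        best = max(ideas)
        for i in range(group_size):
            if i < n_innovators:
                ideas[i] = best + rng.gauss(0.0, 1.0)   # costly, essentially random tweak
            else:
                ideas[i] = best                          # cheap, fast social learning
    return sum(ideas) / group_size

for size in (25, 150, 5000):                # band, tribe, chiefdom
    print(size, round(run(size, n_innovators=3), 2))

With the same three innovators, the band of 25, the tribe of 150 and the chiefdom of 5,000 all end up at roughly the same average idea quality, which is one way of seeing why a little bit of innovation goes a long way.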

Language is the way we exchange ideas

And here we can see a very prominent role for language. Language is the way we exchange ideas. And our eyes allow us to see innovations and language allows us to exchange ideas. And language can operate in a larger society, just as efficiently as it can operate in a small society. It can jump across that society in an instant.

You can see where I’m going. As our societies get larger and larger, there’s no need, in fact, there’s even less of a need for any one of us to be an innovator, whereas there is a great advantage for most of us to be copiers, or followers. And so, a real worry is that our capacity for social learning, which is responsible for all of our cumulative cultural adaptation, all of the things we see around us in our everyday lives, has actually promoted a species that isn’t so good at innovation. It allows us to reflect on ourselves a little bit and say, maybe we’re not as creative and as imaginative and as innovative as we thought we were, but extraordinarily good at copying and following.

If we apply this to our everyday lives and we ask ourselves, do we know the answers to the most important questions in our lives? Should you buy a particular house? What mortgage product should you have? Should you buy a particular car? Who should you marry? What sort of job should you take? What kind of activities should you do? What kind of holidays should you take? We don’t know the answers to most of those things. And if we really were the deeply intelligent and imaginative and innovative species that we thought we were, we might know the answers to those things.

And if we ask ourselves how it is we come across the answers, or acquire the answers to many of those questions, most of us realize that we do what everybody else is doing. This herd instinct, I think, might be an extremely fundamental part of our psychology that was perhaps an unexpected and unintended, you might say, byproduct of our capacity for social learning, that we’re very, very good at being followers rather than leaders. A small number of leaders or innovators or creative people is enough for our societies to get by.

Now, the reason this might be interesting is that, as the world becomes more and more connected, as the Internet connects us and wires us all up, we can see that the long-term consequences of this is that humanity is moving in a direction where we need fewer and fewer and fewer innovative people, because now an innovation that you have somewhere on one corner of the earth can instantly travel to another corner of the earth, in a way that it would have never been possible to do 10 years ago, 50 years ago, 500 years ago, and so on. And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we’re going to be able to survive the vast numbers of people on this earth.

That’s one consequence of social learning, that it has sculpted us to be very shrewd and intelligent at copying, but perhaps less shrewd at innovation and creativity than we’d like to think. Few of us are as creative as we’d like to think we are. I think that’s been one perhaps unexpected consequence of social learning.

Another side of social learning I’ve been thinking about - it’s a bit abstract, but I think it’s a fascinating one - goes back again to this analogy between natural selection, acting on genetic variation, and social learning, acting on variation in ideas. And any evolutionary process like that has to have both a sorting mechanism, natural selection, and what you might call a generative mechanism, a mechanism that can create variety.

We all know what that mechanism is in genes. We call it mutation, and we know that from parents to offspring, genes can change, genes can mutate. And that creates the variety that natural selection acts on. And one of the most remarkable stories of nature is that natural selection, acting on this mindlessly-generated genetic variation, is able to find the best solution among many, and successively add those solutions, one on top of the other. And through this extraordinarily simple and mindless process, create things of unimaginable complexity. Things like our cells, eyes and brains and hearts, and livers, and so on. Things of unimaginable complexity, that we don’t even understand and none of us could design. But they were designed by natural selection.

Where do ideas come from?

Now let’s take this analogy of a mindless process (the parallel between social learning driving evolution at the idea level and natural selection driving evolution at the genetic level) and ask what it means for the generative mechanism in our brains.

Well, where do ideas come from? For social learning to be a sorting process that has varieties to act on, we have to have a variety of ideas. And where do those new ideas come from?

The idea that I’ve been thinking about, and that I think is worth contemplating about our own minds, is this: what is the generative mechanism? If we do have any creativity at all and we are innovative in some ways, what’s the nature of that generative mechanism for creating new ideas?

This is a question that’s been asked for decades. What is the nature of the creative process? Where do ideas come from? And let’s go back to genetic evolution and remember that, there, the generative mechanism is random mutation.

Now, what do we think the generative mechanism is for idea evolution? Do we think it’s random mutation of some sort, of ideas? Well, all of us think that it’s better than that. All of us think that somehow we can come up with good ideas in our minds. And whereas natural selection has to act on random variation, social learning must be acting on directed variation. We know what direction we’re going.

But, we can go back to our earlier discussion of social learning, and ask the question, well, if you were designing a new hand axe, or a new spear, or a new bow and a new arrow, would you really know how to make a spear fly better? Would you really know how to make a bow a better bow? Would you really know how to shape an arrowhead so that it penetrated its prey better? And I think most of us realize that we probably don’t know the answers to those questions. And that suggests to us that maybe our own creative process rests on a generative mechanism that isn’t very much better than random itself.

And I want to go further, and suggest that our mechanism for generating ideas maybe couldn’t even be much better than random itself. And this really gives us a different view of ourselves as intelligent organisms. Rather than thinking that we know the answers to everything, could it be the case that the mechanism that our brain uses for coming up with new ideas is a little bit like the mechanism that our genes use for coming up with new genetic variants, which is to randomly mutate the ideas that we have, just as genes are randomly mutated.

Now, it sounds incredible. It sounds insane. It sounds mad. Because we think of ourselves as so intelligent. But when we really ask ourselves about the nature of any evolutionary process, we have to ask ourselves whether it could be any better than random, because in fact, random might be the best strategy.

Genes could never possibly know how to mutate themselves, because they could never anticipate the direction the world was going. No gene knows that we’re having global warming at the moment. No gene knew 200,000 years ago that humans were going to evolve culture. Well, the best strategy for any exploratory mechanism, when we don’t know the nature of the processes we’re exploring, is to throw out random attempts at understanding that field or that space we’re trying to explore.

And I want to suggest that the creative process inside our brains, which relies on social learning, that creative process itself never could have possibly anticipated where we were going as human beings. It couldn’t have anticipated 200,000 years ago that, you know, a mere 200,000 years later, we’d have space shuttles and iPods and microwave ovens.

What I want to suggest is that any process of evolution that relies on exploring an unknown space, such as genes or such as our neurons exploring the unknown space in our brains, and trying to create connections in our brains, and such as our brain’s trying to come up with new ideas that explore the space of alternatives that will lead us to what we call creativity in our social world, might be very close to random.

We know they’re random in the genetic case. We think they’re random in the case of neurons exploring connections in our brain. And I want to suggest that our own creative process might be pretty close to random itself. And that our brains might be whirring around at a subconscious level, creating ideas over and over and over again, and part of our subconscious mind is testing those ideas. And the ones that leak into our consciousness might feel like they’re well-formed, but they might have sorted through literally a random array of ideas before they got to our consciousness.
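
Here is a minimal generate-and-test sketch of that picture, again my own illustration: the quality function below is an arbitrary stand-in for whatever subconscious test an idea must pass, and the generator itself is blind. Candidates are produced by random mutation, and only the ones that pass the test ever replace the current idea.

import random

rng = random.Random(1)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]            # the generator knows nothing about this

def quality(idea):
    # Stand-in for the unknown test an idea has to pass.
    return sum(a == b for a, b in zip(idea, TARGET))

def random_variant(idea):
    # Blind generation: flip one element at random, with no sense of direction.
    new = list(idea)
    i = rng.randrange(len(new))
    new[i] = 1 - new[i]
    return new

current = [0] * len(TARGET)
for _ in range(100):
    candidate = random_variant(current)       # generated at random
    if quality(candidate) >= quality(current):
        current = candidate                   # only candidates that pass the test survive
print(current, quality(current))

All of the apparent direction comes from the filter and none from the generator, which is roughly the parallel being drawn here with natural selection acting on random mutation.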

Karl Popper famously said the way we differ from other animals is that our hypotheses die in our stead; rather than going out and actually having to try out things, and maybe dying as a result, we can test out ideas in our minds. But what I want to suggest is that the generative process itself might be pretty close to random.

Putting these two things together has lots of implications for where we’re going as societies. As I say, as our societies get bigger, and rely more and more on the Internet, fewer and fewer of us have to be very good at these creative and imaginative processes. And so, humanity might be moving towards becoming more docile, more oriented towards following, copying others, prone to fads, prone to going down blind alleys, because part of our evolutionary history that we could have never anticipated was leading us towards making use of the small number of other innovations that people come up with, rather than having to produce them ourselves.

The interesting thing with Facebook is that, with 500 to 800 million of us connected around the world, it sort of devalues information and devalues knowledge. And this isn’t the comment of some reactionary who doesn’t like Facebook, but it’s rather the comment of someone who realizes that knowledge and new ideas are extraordinarily hard to come by. And as we’re more and more connected to each other, there’s more and more to copy. We realize the value in copying, and so that’s what we do.

And we seek out that information in cheaper and cheaper ways. We go up on Google, we go up on Facebook, see who’s doing what to whom. We go up on Google and find out the answers to things. And what that’s telling us is that knowledge and new ideas are cheap. And it’s playing into a set of predispositions that we have been selected to have anyway, to be copiers and to be followers. But at no time in history has it been easier to do that than now. And Facebook is encouraging that.

And then, as corporations grow … and we can see corporations as sort of microcosms of societies … as corporations grow and acquire the ability to acquire other corporations, a similar thing is happening, is that, rather than corporations wanting to spend the time and the energy to create new ideas, they want to simply acquire other companies, so that they can have their new ideas. And that just tells us again how precious these ideas are, and the lengths to which people will go to acquire those ideas.

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, copiers are probably doing better now than at any time in history. Because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more sort of docile copiers.

But, these ideas, I think, are received with incredulity, because humans like to think of themselves as highly shrewd and intelligent and innovative people. But I think what we have to realize is that it’s even possible that, as I say, the generative mechanisms we have for coming up with new ideas are no better than random.

And a really fascinating idea itself is to consider that even the great people in history whom we associate with great ideas might be no more than we expect by chance. I’ll explain that. Einstein was once asked about his intelligence and he said, “I’m no more intelligent than the next guy. I’m just more curious.” Now, we can grant Einstein that little indulgence, because we think he was a pretty clever guy.

What does curiosity mean?

But let’s take him at his word and say, what does curiosity mean? Well, maybe curiosity means trying out all sorts of ideas in your mind. Maybe curiosity is a passion for trying out ideas. Maybe Einstein’s ideas were just as random as everybody else’s, but he kept persisting at them.

And if we say that everybody has some tiny probability of being the next Einstein, and we look at a billion people, there will be somebody who just by chance is the next Einstein. And so, we might even wonder if the people in our history and in our lives that we say are the great innovators really are more innovative, or are just lucky.
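
The arithmetic behind that thought is easy to check. The probability below is an arbitrary illustrative figure, not anything from Pagel: even a one-in-a-hundred-million chance per person makes it all but certain that a billion people contain at least one such individual.

p = 1e-8                       # assumed tiny chance that any one person is 'the next Einstein'
n = 1_000_000_000              # a billion people

expected = n * p               # roughly 10 expected such people
p_at_least_one = 1 - (1 - p) ** n
print(expected, round(p_at_least_one, 6))    # roughly 10.0 and 0.999955

So a handful of apparent geniuses per billion is exactly what chance alone would predict under this assumption, which is the force of the “just lucky” possibility.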

Now, the evolutionary argument is that our populations have always supported a small number of truly innovative people, and they’re somehow different from the rest of us. But it might even be the case that that small number of innovators just got lucky. And this is something that I think very few people will accept. They’ll receive it with incredulity. But I like to think of it as what I call social learning and, maybe, the possibility that we are infinitely stupid.”

Mark Pagel, Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute, Infinite Stupidity, Edge, Dec 16, 2011 (Illustration by John S. Dykes)

See also:

☞ Mark Pagel: How language transformed humanity



Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of “social technology” that allowed early human tribes to access a powerful new tool: cooperation. Mark Pagel: How language transformed humanity, TED.com, July 2011

The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’

Nov
25th
Fri
permalink

Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language


                                        Jamie Marie Waelchli, Thought Map No. 8

Human language, coupled with human maternal care, enables the consciousness to bifurcate very early and extensively. Without the self-reflective properties inherent in a reflexive agent-recipient language, and without the objectification of the human infant — a very different kind of humanity would arise.

Human consciousness, as constructed by human language, becomes the vehicle through which the self-reflective human mind envisions time. Language enables the viewer to reflect upon the actions of the doer (and the actions of one’s internal body), while projecting forward and backward — other possible bodily actions — into imagined space/time. Thus the projected and imagined space/time increasingly becomes the conscious world and reality of the viewer who imagines or remembers actions mapped onto that projected plan. The body thus becomes a physical entity progressing through the imaged world of the viewer. As the body progresses through this imaged world, the viewer also constructs a way to mark progress from one imagined event to another. Having once marked this imagined time into units, the conscious viewer begins to order the anticipated actions of the body into a linear progression of events.

A personal narrative then arises through the vehicle of language. Indeed a personal narrative is required, expected and placed upon every human being, by the very nature of human language. This personal narrative becomes organized around the anticipated bodily changes that it is imagined will take place from birth to old age. The power of the bifurcated mind, through linguistically encoded expectancies, shapes and molds all of human behavior. When these capacities are jointly executed by other similar minds — the substrate of human culture is manufactured.

Human culture, because it rides upon a manufactured space/time self-reflective substrate, is unique. Though it shares some properties with animal culture, it is not merely a natural Darwinian extension of animal culture. It is based on constructed time/space, constructed mental relationships, constructed moral responsibilities, and constructed personal narratives — and individuals, must, at all times, justify their actions toward another on the basis of their co-constructed expectancies.

Human Consciousness seems to burst upon the evolutionary scene in something of an explosion between 40,000 and 90,000 years ago. Trading emerges, art emerges, and symboling ability emerges with a kind of intensity not noted for any previous time in the archeological record. (…)

Humans came with a propensity to alter the world around them wherever they went. We were into object manipulation in all aspects of our existence, and wherever we went we altered the landscape. We did not accept the natural world as we found it — we set about refashioning our worlds according to our own needs and desires. From the simple act of intentionally setting fires to eliminate underbrush, to the exploration of outer space, humanity manifested the view that it was here to control its own destiny, by changing the world around it, as well as by individuals’ changing their own appearances.

We put on masks and masqueraded about the world, seeking to make the world conform to our own desires, in a way no other species emulated. In brief, the kind of language that emerged between 40,000 and 90,000 years ago, riding upon the human anatomical form, changed us forever, and we began to pass that change along to future generations.

While Kanzi and family are bonobos, the kind of language they have acquired — even if they have not manifested all major components yet — is human language as you and I speak it and know it. Therefore, although their biology remains that of apes, their consciousness has begun to change as a function of the language, the marks it leaves on their minds and the epigenetic marks it leaves on the next generation. (Epigenetic: chemical markers which become attached to segments of genes during the lifetime of an individual are passed along to future generations, affecting which genes will be expressed in succeeding generations.) They explore art, they explore music, they explore creative linguistic negotiation, they have an autobiographical past and they think about the future. They don’t do all these things with human-like proficiency at this point, but they attempt them if given opportunity. Apes not so reared do not attempt to do these things.

What kind of power exists within the kind of language we humans have perfected? Does it have the power to change biology across time, if it impacts the biological form upon conception? Science has now become aware of the power of initial conditions, through chaos theory, the work of Mandelbrot with fractal geometric forms, and the work of Wolfram and the patterns that can be produced by digital reiterations of simple and only slightly different starting conditions. Within the fertilized egg lie the initial starting conditions of every human.
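
That sensitivity to starting conditions is easy to reproduce with Wolfram-style reiteration of a simple digital rule. The sketch below is an illustrative toy using the well-known elementary cellular automaton Rule 30, and nothing in it is specific to Savage-Rumbaugh’s argument: the same rule is run from two rows that differ in a single cell, and the two histories are compared as they unfold.

def step_rule30(cells):
    # One update of elementary cellular automaton Rule 30 on a ring of cells.
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        out.append(left ^ (centre | right))   # Rule 30: left XOR (centre OR right)
    return out

width, steps = 101, 50
row_a = [0] * width
row_a[width // 2] = 1                 # a single live cell in the middle
row_b = list(row_a)
row_b[width // 2 + 1] = 1             # the same row, changed in just one place

for t in range(steps):
    row_a, row_b = step_rule30(row_a), step_rule30(row_b)
    if (t + 1) % 10 == 0:
        diff = sum(a != b for a, b in zip(row_a, row_b))
        print(t + 1, "steps:", diff, "cells differ")

Within a few dozen reiterations the two histories share little beyond the rule that produced them, which is the kind of amplification of slightly different starting conditions the passage is pointing to.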

We also now realize that epigenetic markers from parental experience can set these initial starting conditions, determining such things as the order, timing, and patterning of gene expression profiles in the developing organism. Thus while the precise experience and learning of the parents is not passed along, the effects of those experiences, in the form of genetic markers that have the power to affect the developmental plan of the next generation during the extraordinarily sensitive conditions of embryonic development, are transmitted. Since language is the most powerful experience encountered by the human being and since those individuals who fail to acquire human language are inevitably excluded from (or somehow set apart in) the human community, it is reasonable to surmise that language will, in some form, transmit itself through epigenetic mechanisms.

When a human being enters into a group of apes and begins to participate in the rearing of offspring, different epigenetic markers have the potential to become activated. We already know, for example, that in human beings, expectancies or beliefs can affect gene activity. The most potent of the epigenetic markers would most probably arise from the major difference between human and ape infants. Human infants do not cling, ape infants do. When ape infants are carried like human infants, they begin to develop eye/hand coordination from birth. This sets the developmental trajectory of the ape infant in a decidedly human direction — that of manipulating the world around it. Human mothers, unlike ape mothers, also communicate their intentions linguistically to the infant. Once an intention is communicated linguistically, it can be negotiated, so there arises an intrinsic motivation to tune into and understand such communications on the part of the ape infant. The ‘debate’ in ape language, which has centered on whether they have it or not, has missed the point. This debate has ignored the key rearing variables that differ dramatically across the studies. Apart from Kanzi and family, all other apes in these studies are left alone at night and drilled on associative pairings during the day.”

Sue Savage-Rumbaugh is a primatologist best known for her work with two bonobos, Kanzi and Panbanisha, investigating their use of “Great Ape language” using lexigrams and computer-based keyboards. Until recently she was based at Georgia State University’s Language Research Center in Atlanta.

To read full essay click Human Language—Human Consciousness, National Humanities Center, Jan 2nd, 2011

See also:

John Shotter on encounters with ‘Other’ - from inner mental representation to dialogical social practices
Do thoughts have a language of their own? The language of thought hypothesis, Lapidarium notes

Sep
14th
Wed
permalink

Steven Pinker on the mind as a system of ‘organs of computation’

                      

I present the mind as a system of “organs of computation” that allowed our ancestors to understand and outsmart objects, animals, plants, and each other. (…)

Most of the assumptions about the mind that underlie current discussions are many decades out of date. Take the hydraulic model of Freud, in which psychic pressure builds up in the mind and can burst out unless it’s channeled into appropriate pathways. That’s just false. The mind doesn’t work by fluid under pressure or by flows of energy; it works by information.

Or, look at the commentaries on human affairs by pundits and social critics. They say we’re “conditioned” to do this, or “brainwashed” to do that, or “socialized” to believe such and such. Where do these ideas come from? From the behaviorism of the 1920’s, from bad cold war movies from the 1950’s, from folklore about the effects of family upbringing that behavior genetics has shown to be false. The basic understanding that the human mind is a remarkably complex processor of information, an “organ of extreme perfection and complication,” to use Darwin’s phrase, has not made it into the mainstream of intellectual life. (…)

I see the mind as an exquisitely engineered device—not literally engineered, of course, but designed by the mimic of engineering that we see in nature, natural selection. That’s what “engineered” animals’ bodies to accomplish improbable feats, like flying and swimming and running, and it is surely what “engineered” the mind to accomplish its improbable feats. (…)

What research in psychology should be: a kind of reverse engineering. When you rummage through an antique store and come across a contraption built of many finely meshing parts, you assume that it was put together for a purpose, and that if you only understood that purpose, you’d have insight as to why it has the parts arranged the way they are. That’s true for the mind as well, though it wasn’t designed by a designer but by natural selection. With that insight you can look at the quirks of the mind and ask how they might have made sense as solutions to some problem our ancestors faced in negotiating the world. That can give you an insight into what the different parts of the mind are doing.

Even the seemingly irrational parts of the mind, like strong passions—jealousy, revenge, infatuation, pride—might very well be good solutions to problems our ancestors faced in dealing with one another. For example, why do people do crazy things like chase down an ex-lover and kill the lover? How could you win someone back by killing them? It seems like a bug in our mental software. But several economists have proposed an alternative. If our mind is put together so that under some circumstances we are compelled to carry out a threat regardless of the costs to us, the threat is made credible. When a person threatens a lover, explicitly or implicitly, by communicating “If you ever leave me I’ll chase you down,” the lover could call his bluff if she didn’t have signs that he was crazy enough to carry it out even though it was pointless. And so the problem of building a credible deterrent into creatures that interact with one another leads to irrational behavior as a rational solution. "Rational," that is, with respect to the "goal" of our genes to maximize the number of copies of themselves. It isn’t "rational," of course, with respect to the goal of whole humans and societies to maximize happiness and fairness. (…)

The paradoxes of happiness

There’s no absolute standard for well-being. A Paleolithic hunter-gatherer should not have fretted that he had no running shoes or central heating or penicillin. How can a brain know whether there is something worth striving for? Well, it can look around and see how well off other people are. If they can achieve something, maybe so can you. Other people anchor your well-being scale and tell you what you can reasonably hope to achieve. (…)

Another paradox of happiness is that losses are felt more keenly than gains. As Jimmy Connors said, “I hate to lose more than I like to win.” You are just a little happy if your salary goes up, but you’re really miserable if your salary goes down by the same amount. That too might be a feature of the mechanism designed to attain the attainable and no more. When we backslide, we keenly feel it because what we once had is a good estimate of what we can attain. But when we improve we have no grounds for knowing that we are as well off as we can hope to be. The evolutionary psychologist Donald Campbell called it “the happiness treadmill." No matter how much you gain in fame, wealth, and so on, you end up at the same level of happiness you began with—though to go down a level is awful. Perhaps it’s because natural selection has programmed our reach to exceed our grasp, but by just a little bit. (…)

The brain as a kind of computer; information processing system

I place myself among those who think that you can’t understand the mind only by looking directly at the brain. Neurons, neurotransmitters, and other hardware features are widely conserved across the animal kingdom, but species have very different cognitive and emotional lives. The difference comes from the ways in which hundreds of millions of neurons are wired together to process information. I see the brain as a kind of computer—not like any commercial computer made of silicon, obviously, but as a device that achieves intelligence for some of the same reasons that a computer achieves intelligence, namely processing of information. (…)

I also believe that the mind is not made of Spam—it has a complex, heterogeneous structure. It is composed of mental organs that are specialized to do different things, like seeing, controlling hands and feet, reasoning, language, social interaction, and social emotions. Just as the body is divided into physical organs, the mind is divided into mental organs.

That puts me in agreement with Chomsky and against many neural network modelers, who hope that a single kind of neural network, if suitably trained, can accomplish every mental feat that we do. For similar reasons I disagree with the dominant position in modern intellectual life—that our thoughts are socially constructed by how we were socialized as children, by media images, by role models, and by conditioning. (…)

Many people lump together the idea that the mind has a complex innate structure with the idea that differences between people have to be innate. But the ideas are completely different. Every normal person on the planet could be innately equipped with an enormous catalog of mental machinery, and all the differences between people—what makes John different from Bill—could come from differences in experience, of upbringing, or of random things that happened to them when they were growing up.

To believe that there’s a rich innate structure common to every member of the species is different from saying the differences between people, or differences between groups, come from differences in innate structure. Here’s an example. Look at number of legs—it’s an innate property of the human species that we have two legs as opposed to six like insects, or eight like spiders, or four like cats—so having two legs is innate. But if you now look at why some people have one leg, and some people have no legs, it’s completely due to the environment—they lost a leg in an accident, or from a disease. So the two questions have to be distinguished. And what’s true of legs is also true of the mind. (…)

Computer technology will never change the world as long as it ignores how the mind works. Why did people instantly start to use fax machines, and continue to use them even though electronic mail makes much more sense? There are millions of people who print out text from their computer onto a piece of paper, feed the paper into a fax machine, forcing the guy at the other end to take the paper out, read it, and crumple it up—or worse, scan it into his computer so that it becomes a file of bytes all over again. This is utterly ridiculous from a technological point of view, but people do it. They do it because the mind evolved to deal with physical objects, and it still likes to conceptualize entities that are owned and transferred among people as physical objects that you can lift and store in a box. Until computer systems, email, video cameras, VCR’s and so on are designed to take advantage of the way the mind conceptualizes reality, namely as physical objects existing at a location and impinged upon by forces, people are going to be baffled by their machines, and the promise of the computer revolution will not be fulfilled. (…)

Q: What is the significance of the Internet and today’s communications revolution for the evolution of the mind?

Probably not much. You’ve got to distinguish two senses of the word “evolution.” The sense used by me, Dawkins, Gould, and other evolutionary biologists refers to the changes in our biological makeup that led us to be the kind of organism we are today. The sense used by most other people refers to continuous improvement or progress. A popular idea is that our biological evolution took us to a certain stage, and our cultural evolution is going to take over—where evolution in both cases is defined as “progress.” I would like us to move away from that idea, because the processes that selected the genes that built our brains are different from the processes that propelled the rise and fall of empires and the march of technology.

In terms of strict biological evolution, it’s impossible to know where, if anywhere, our species is going. Natural selection generally takes hundreds of thousands of years to do anything interesting, and we don’t know what our situation will be like in ten thousand or even one thousand years. Also, selection adapts organisms to a niche, usually a local environment, and the human species moves all over the place and lurches from life style to life style with dizzying speed on the evolutionary timetable. Revolutions in human life like the agricultural, industrial, and information revolutions occur so quickly that no one can predict what change they will have on our makeup, or even whether there will be a change at all.

The Internet does create a kind of supra-human intelligence, in which everyone on the planet can exchange information rapidly, a bit like the way different parts of a single brain can exchange information. This is not a new process; it’s been happening since we evolved language. Even non-industrial hunter-gatherer tribes pool information by the use of language.

That has given them remarkable local technologies—ways of trapping animals, using poisons, chemically treating plant foods to remove the bitter toxins, and so on. That is also a collective intelligence that comes from accumulating discoveries over generations, and pooling them amongst a group of people living at one time. Everything that’s happened since, such as writing, the printing press, and now the Internet, are ways of magnifying something that our species already knew how to do, which is to pool expertise by communication. Language was the real innovation in our biological evolution; everything since has just made our words travel farther or last longer.”

Steven Pinker, Canadian-American experimental psychologist, cognitive scientist and linguist, Organs of Computation, Edge, January 11, 1997 (Illustration source)

See also:

☞ Steven Pinker, Harvard University Cambridge, MA, So How Does the Mind Work? (pdf), Blackwell Publishing Ltd. 2005

Jul
3rd
Sun
permalink

Nicholas Ostler on The Last Lingua Franca. English Until the Return of Babel

                        

"By and large, lingua-francas are the languages of wider communication, such as enable vast empires to have a common administration, and also allow international contacts. (…)

In the second half of the 1st millennium BC, Greek persisted around the Mediterranean mostly as the result of Greek trading, reinforced by cultural prestige of its arts and literature, but was then massively reinforced by Alexander’s conquests. Persian spread within the eastern zones conquered by the Muslims, but it flowed back westward as Persian administration became common within the empire of the Caliphs. Then it was spread wider by Turkic-speaking armies, notably into modern Turkey and India, since they could not conceive of a cultured administration without it. The use of English as a widespread lingua franca began in India (actually, as it happened, replacing Persian), but was aided elsewhere by the schools which tended to accompany religious missions in new British colonies. Later (in the 20th century) it had become unchallenged as the common language of science, of international relations and business.

Q: Latin lasted a particularly long time. Why did it survive the collapse of the Roman Empire?

It kept changing its role. First it expanded to become the language of Christianity, replacing Greek: so Latin, the mother-tongue of the Western Christian majority, began to be used to express their common faith. Then it survived because it was the language of the Roman Catholic Church, i.e. the Catholic lingua franca. (Gothic-speaking Arian Christians lost out to Catholics everywhere during the sixth century AD.) De facto, Latin became the lingua franca of Western Europe, because it was the only language taught in schools. This status continued for another 1,000 years, because it was so convenient to the elite. Only when European society began to be transformed in the 16th century, with the decline of the Church, and the rising power of France and England (and their middle classes), as well as the opening of the world as a whole to European commercial interests, did Latin’s advantages seem outweighed by the costs of maintaining its status.

Q: What makes you suspect that English will not reign as long as Latin?

All the factors that have spread English have already peaked, and there is no stability of power and influence which might simply leave the status quo in place. There is no accepted common political dispensation in the world nowadays, comparable to the Catholic Church in Europe. Individual powers for which English is an alien burden (China, Russia, Brazil, the Arab world, Indonesia, Mexico, even India) are already stirring, and attempting to enhance their global roles.

Q: How much longer do you think English has as a global language?

It will continue to be used until there is a workable alternative, and not a moment longer. It appears that language technology will soon provide that alternative, allowing speakers to go on using every mother-tongue, and yet be understood by speakers of any other language. This will be available in a decade or two, and (since all the costs will fall as soon as the technical problem is clearly solved) will very soon spread to be universal. So it is very unlikely that global learning (and use) of English will still be popular by the middle of this century.

Q: Will Chinese or another language take its place?

Probably not. All languages that might compete (except French, whose global days have probably passed) are regionally focused, hence limited as to global utility; and I do not anticipate a new round of global colonization, say from China, India or Indonesia. Technology will probably make a single replacement unnecessary anyway. (…)

Q: If English declines in use as a lingua franca, how must Anglophones adjust? Will travelers have to take more Berlitz classes before going abroad?

It is unlikely much adjustment will be needed. Everyone will increasingly use their own languages, and the world - given the necessary information technology - will understand. But it may increasingly be incumbent on English-speakers to find ways of penetrating statements that are made in foreign languages without an English translation (much as the world’s diplomatic establishments used to do routinely). Foreigners will increasingly adopt a “take it or leave it” attitude to English-speakers, leaving them to sink or (make the effort to) swim. But all this is much as English-speakers have long done to the rest of the world. (…)

Q: Do you think America’s elitist attitude toward other languages is changing? Is there evidence that more Americans are studying foreign languages?

No. No. Quite the reverse, despite the panic about US ignorance of Middle Eastern languages supposedly caused by 9/11, and the wars to which it has led. (…)

Q: When English loses its dominance outside its mother tongue regions, are Americans likely to become even more open or more hostile toward learning other languages and toward immigrants speaking other languages in the U.S.? (Is there any historical example to point one way or the other?)

As I said, I think there will be more hostility against immigrants who do not adopt English. Such symbolic disloyalty (as it will be seen) will be more offensive to many, as it becomes apparent that the USA is losing its acknowledged dominance. Americans may, if anything, be more likely to “stand on ceremony” and insist militantly that others - even in foreign parts of the world - accommodate them by adopting the means to cope with English, while (perhaps, at least in the early days) resisting the need to make equal and opposite accommodations themselves.

The best recent model might be the reluctance, not to say ‘denial’, of the French in reacting to the decline in international use of their language post-1918. But it was also notable that the nations of northern and eastern Europe (the last to acquire Latin as a lingua-franca) tried to hang on to the use of Latin longest, in the 18th and even 19th centuries, when French (and other major European vernaculars) had become established as media of international communication. It is not a direct parallel, but one recalls Valerius Maximus in the first century AD, congratulating the Roman magistrates who “persistently maintained the practice of replying only in Latin to the Greeks. And so they forced them to speak through interpreters, losing their linguistic fluency, their great strength, not just in our capital city but in Greece and Asia too, evidently to promote the honour of the Latin language throughout the world.”

Nicholas Ostler, British scholar and author. Ostler studied at Balliol College, Oxford, where he received degrees in Greek, Latin, philosophy, and economics. He later studied under Noam Chomsky at the Massachusetts Institute of Technology, where he earned his Ph.D. in linguistics and Sanskrit, The Last Lingua Franca: English Until the Return of Babel, Penguin Books, 2011 (Illustration source)

See also:

List of lingua francas
☞ Henry Hitchings, What’s the language of the future?, Salon, Nov 6, 2011.
Why Do Languages Die? Urbanization, the state and the rise of nationalism, Lapidarium notes

permalink

George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’

    

Metaphor is a fundamental mechanism of mind, one that allows us to use what we know about our physical and social experience to provide understanding of countless other subjects. Because such metaphors structure our most basic understandings of our experience, they are “metaphors we live by”—metaphors that can shape our perceptions and actions without our ever noticing them. (…)

We are neural beings, (…) our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything – only what our embodied brains permit. (…)

The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.”

George Lakoff, cited in Daniel Lende, Brainy Trees, Metaphorical Forests: On Neuroscience, Embodiment, and Architecture, Neuroanthropology, Jan 10, 2012.

"For Lakoff, language is not a neutral system of communication, because it is always based on frames, conceptual metaphors, narratives, and emotions. Political thought and language is inherently moral and emotional. (…)

The way people really reason — Real Rationality — is coming into view through new understandings of the brain, something that up-to-date marketers have already grasped. Enlightenment reason, we now know, was a false theory of rationality.

Most thought is unconscious. It doesn’t work by mathematical logic. You can’t reason directly about the world—because you can only conceptualize what your brain and body allow, and because ideas are structured using frames,” Lakoff says. “As Charles Fillmore has shown, all words are defined in terms of conceptual frames, not in terms of some putative objective, mind-free world.”

“People really reason using the logic of frames, metaphors, and narratives, and real decision making requires emotion, as Antonio Damasio showed in Descartes’ Error.” 

“A lot of reason does not serve self interest, but is rather about empathizing with and connecting to others.”

People Don’t Decide Using ‘Just the Facts’

Contemporary explanatory journalism, in particular, is prone to the false belief that if the facts are presented to people clearly enough, they will accept and act upon them, Lakoff says. “In the ‘marketplace of ideas’ theory, the best factually based logical argument will always win. But this doesn’t actually happen.”

“Journalists always wonder, ‘We’ve reported on all the arguments, why do people vote wrong?’” Lakoff says. “They’ve missed the main event.”

Many journalists think that “framing” a story or issue is “just about choices of words and manipulation,” and that one can report factually and neutrally without framing. But language itself isn’t neutral. If you study the way the brain processes language, Lakoff says, “every word is defined with respect to frames. You’re framing all the time.” Morality and emotion are already embedded in the way people think and the way people perceive certain words—and most of this processing happens unconsciously. “You can only learn things that fit in with what your brain will allow,” Lakoff says.

A recent example? The unhappy phrase “public option.”

“When you say public, it means ‘government’ to conservatives,” Lakoff explains. “When you say ‘option,’ it means two things: it’s not necessary, it’s just an ‘option,’ and secondly it’s a public policy term, a bureaucratic term. To conservatives, ‘public option’ means government bureaucracy, the very worst thing you could have named this. They could have called it the America Plan. They could have called it doctor-patient care.”

According to Lakoff, because of the conservative success in shaping public discourse through their elaborate communication system, the most commonly used words often have been given a conservative meaning. “Tax relief,” for example, suggests that taxation is an affliction to be relieved.

Don’t Repeat the Language Politicians Use: Decode It

Instead of simply adopting the language politicians use to frame an issue, Lakoff argues, journalists need to analyze the language political figures use and explain the moral content of particular words and arguments.

That means, for example, not just quoting a politician about whether a certain policy infringes or supports American “liberty,” but explaining what he or she means by “liberty,” how this conception of liberty fits into the politician’s overall moral outlook, and how it contrasts with other conceptions of liberty.

It also means spelling out the full implications of the metaphors politicians choose. In the recent coverage of health care reform, Lakoff says, one of the “hidden metaphors” that needed to be explored was whether politicians were talking about health care as a commodity or as a necessity and a right.

Back on the 2007 presidential campaign trail, Lakoff pointed out, Rudy Giuliani called Obama’s health care plans “socialist,” while he himself compared buying health care to buying a flatscreen tv set, using the metaphor of health care as a commodity, not a necessity. A few liberal bloggers were outraged, but several newspapers reported his use of the metaphor without comment or analysis, rather than exploring what it revealed about Giuliani’s worldview. (…)

A Dictionary of the Real Meanings of Words

What would a nonpartisan explanatory journalism be like? To make nonpartisan decoding easier, Lakoff thinks journalists should create an online dictionary of the different meanings of words—“not just a glossary, but a little Wikipedia-like website,” as he puts it. This site would have entries to explain the differences between the moral frameworks of conservatives and progressives, and what they each typically mean when they say words like “freedom.” Journalists across the country could link to the site whenever they sensed a contested word.

A project like this would generate plenty of resistance, Lakoff acknowledges. “What that says is most people don’t know what they think. That’s extremely scary…the public doesn’t want to be told, ‘You don’t know what you think.’ The fact is that about 98 percent of thought is unconscious.”

But, he says, people are also grateful when they’re told what’s really going on, and why political figures reason as they do. He would like to see a weekly column in the New York Times and other newspapers decoding language and framing, and analyzing what can and cannot be said politically, and he’d also like to see cognitive science and the study of framing added to journalism school curricula.

Ditch Objectivity, Balance, and ‘The Center’

Lakoff has two further sets of advice for improving explanatory journalism. The first is to ditch journalism’s emphasis on balance. Global warming and evolution are real. Unscientific views are not needed for “balance.”

“The idea that truth is balanced, that objectivity is balanced, is just wrong,” Lakoff says. Objectivity is a valuable ideal when it means unbiased reporting, Lakoff argues. But too often, the need for objectivity means that journalists hide their own judgments of an issue behind “public opinion.” The journalistic tradition of “always having to get a quote from somebody else” when the truth is obvious is foolish, Lakoff says.

So is the naïve reporting of poll data, since poll results can change drastically depending on the language and the framing of the questions. The framing of the questions should be part of reporting on polls.

Finally, Lakoff’s research suggests that many Americans, perhaps 20 per cent, are “biconceptuals” who have both conservative and liberal moral systems in their brains, but apply them to different issues. In some cases they can switch from one ideological position to another, based on the way an issue is framed. These biconceptuals occupy the territory that’s usually labeled “centrist.” “There isn’t such a thing as ‘the center.’ There are just people who are conservative on some issues and liberal on others, with lots of variations occurring. Journalists accept the idea of a “center” with its own ideology, and that’s just not the case,” he says.

Journalists tell “stories.” Those stories are often narratives framed from a particular moral or political perspective. Journalists need to be more upfront about the moral and political underpinnings of the stories they write and the angles they choose.

Journalism Isn’t Neutral–It’s Based on Empathy

“Democracy is based on empathy, with people not just caring, but acting on that care —having social as well as personal responsibility…That’s a view that many journalists have. That’s the reason they become journalists rather than stockbrokers. They have a certain view of democracy. That’s why a lot of journalists are liberals. They actually care about how politics can hurt people, about the social causes of harm. That’s a really different view than the conservative view: if you get hurt and you haven’t taken personal responsibility, then you deserve to get hurt—as when you sign on to a mortgage you can’t pay. Investigative journalism is very much an ethical enterprise, and I think journalists have to ask themselves, ‘What is the ethics behind the enterprise?’ and not be ashamed of it.” Good investigative journalism uncovers real facts, but is done, and should be done, with a moral purpose.

To make a moral story look objective, “journalists tend to pin moral reactions on other people: ‘I’m going to find someone around here who thinks it’s outrageous’…This can make outrageous moral action into a matter of public opinion rather than ethics.”

In some ways, Lakoff’s suggestions were in line with the kind of journalism that one of our partners, the non-profit investigative journalism outlet ProPublica, already does. In its mission statement, ProPublica makes its commitment to “moral force” explicit. “Our work focuses exclusively on truly important stories, stories with ‘moral force,’” the statement reads. “We do this by producing journalism that shines a light on exploitation of the weak by the strong and on the failures of those with power to vindicate the trust placed in them.”

He emphasized the importance of doing follow-ups to investigative stories, rather than letting the public become jaded by a constant succession of outrages that flare on the front page and then disappear. Most of ProPublica’s investigations are ongoing and continually updated on its site.

‘Cognitive Explanation’: A Different Take on ProPublica’s Mission

But Lakoff also had some very nontraditional suggestions about what it would mean for ProPublica to embark on a different kind of explanatory journalism project. “There are two different forms of explanatory journalism. One is material explanation — the kind of investigative reporting now done at ProPublica: who got paid what by whom, what actions resulted in harm, and so on. All crucial,” he noted. “But equally crucial, and not done, is cognitive and communicative explanation.”

“Cognitive explanation depends on what conceptual system lies behind political positions on issues and how the working of people’s brains explains their political behavior. For example, since every word of political discourse evokes a frame and the moral system behind it, the superior conservative communication system reaches most Americans 24/7/365. The more one hears conservative language and not liberal language, the more the brains of those listening get changed. Conservative communication with an absence of liberal communication exerts political pressure on Democrats whose constituents hear conservative language all day every day. Explanatory journalism should be reporting on the causal effects of conservative framing and the conservative communicative superiority.”

“ProPublica seems not to be explicit about conflicting views of what constitutes ‘moral force.’ ProPublica does not seem to be covering the biggest story in the country, the split over what constitutes morality in public policy. Nor is it clear that ProPublica studies the details of framing that permeate public discourse. Instead, ProPublica assumes a view of “moral force” in deciding what to cover and how to cover it.

“For example, ProPublica has not covered the difference in moral reasoning behind the conservative and progressive views on tax policy, health care, global warming and energy policy, and so on for major issue after major issue.

“ProPublica also is not covering a major problem in policy-making — the assumption of classical views of rationality and the ways they have been scientifically disproved in the cognitive and brain sciences.

“ProPublica has not reported on the disparity between the conservative and liberal communication systems, nor has it covered the globalization of conservatism — the international exportation of American conservative strategists, framing, training, and communication networks.

“When ProPublica uncovers facts about organ transplants and nursing qualifications, that’s fine. But where is ProPublica on the reasons for the schisms in our politics? Explanatory journalism demands another level of understanding.

“ProPublica, for all its many virtues, has room for improvement, in much the same way as journalism in general — especially in explanatory journalism. Cognitive and communicative explanation must be added to material explanation.”

What Works In the Brain: Narrative & Metaphor

As for creating Explanatory Journalism that resonates with the way people process information, Lakoff suggested two familiar tools: narrative and metaphor.

The trick to finding the right metaphors for complicated systems, he said, is to figure out what metaphors the experts themselves use in the way they think. “Complex policy is usually understood metaphorically by people in the field,” Lakoff says. What’s crucial is learning how to distinguish the useful frames from the distorting or overly-simplistic ones.

As for explaining policy, Lakoff says, “the problem with this is that policy is made in a way that is not understandable…Communication is always seen as last, as the tail on the dog, whereas if you have a policy that people don’t understand, you’re going to lose. What’s the point of trying to get support for a major health care reform if no one understands it?”

One of the central problems with policy, Lakoff says, is that policy-makers tend to take their moral positions so much for granted that the policies they develop seem to them like the “merely practical” things to do.

Journalists need to restore the real context of policy, Lakoff says, by trying “to get people in the government and policy-makers in the think tanks to understand and talk about what the moral basis of their policy is, and to do this in terms that are understandable.”

George Lakoff, American cognitive linguist and professor of linguistics at the University of California, Berkeley, interviewed by Lois Beckett in Explain yourself: George Lakoff, cognitive linguist, explainer.net, 31 January, 2011 (Illustration source)

See also:

Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks, Lapidarium notes
☞ Metaphor tag on Lapidarium notes

May
27th
Fri
permalink

Susan Blackmore on memes and “temes”     

                 
                                              (Illustration credit: Collective Memes)

“[Darwin] had no concept of the idea of an algorithm. But that’s what he described in that book, and this is what we now know as the evolutionary algorithm. The principle is you just need those three things — variation, selection and heredity. And as Dan Dennett puts it, if you have those then you must get evolution. Or design out of chaos without the aid of mind. (…)

The principle here applies to anything that is copied with variation and selection. We’re so used to thinking in terms of biology, we think about genes this way. Darwin didn’t of course, he didn’t know about genes. He talked mostly about animals and plants, but also about languages evolving and becoming extinct. But the principle of universal Darwinism is that any information that is varied and selected will produce design.
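
(A minimal sketch of that principle in code, assuming nothing from the talk itself: the target phrase below is borrowed from the sentence above, and the alphabet, population size and mutation rate are arbitrary illustrative choices. Information is copied, occasionally miscopied, and the best copy is kept; a designed-looking result emerges without any mind guiding it.)

```python
import random

# Toy illustration of the three ingredients named above: heredity (copying),
# variation (occasional miscopying), and selection (keep the best copy).
# All parameters here are arbitrary choices made for the demonstration.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "any information that is varied and selected will produce design"

def fitness(candidate):
    # Selection criterion: how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def copy_with_variation(parent, rate=0.02):
    # Heredity plus variation: each character is occasionally miscopied.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    offspring = [parent] + [copy_with_variation(parent) for _ in range(100)]
    parent = max(offspring, key=fitness)   # selection keeps the best copy
    generation += 1

print(f"reached the target after {generation} generations")
```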

And this is what Richard Dawkins was on about in his 1976 bestseller, “The Selfish Gene.” The information that is copied, he called the replicator. It selfishly copies. (…)

Look around you, here will do, in this room. All around us, still clumsily drifting about in its primeval soup of culture, is another replicator. Information that we copy from person to person by imitation, by language, by talking, by telling stories, by wearing clothes, by doing things. This is information copied with variation and selection. This is a design process going on. He wanted a name for the new replicator. So he took the Greek word mimeme, which means that which is imitated. (…)

There are two replicators now on this planet. From the moment that our ancestors, perhaps two and a half million years ago or so, began imitating, there was a new copying process. Copying with variation and selection. A new replicator was let loose, and it could never be — right from the start, it could never be that human beings who let loose this new creature, could just copy the useful, beautiful, true things, and not copy the other things. While their brains were having an advantage from being able to copy — lighting fires, keeping fires going, new techniques of hunting, these kinds of things — inevitably they were also copying putting feathers in their hair, or wearing strange clothes, or painting their faces, or whatever.

So you get an arms race between the genes which are trying to get the humans to have small economical brains and not waste their time copying all this stuff, and the memes themselves, like the sounds that people made and copied — in other words, what turned out to be language — competing to get the brains to get bigger and bigger. So the big brain on this theory is driven by the memes. (…)

Language is a parasite that we’ve adapted to, not something that was there originally for our genes, on this view. And like most parasites it can begin dangerous, but then it co-evolves and adapts and we end up with a symbiotic relationship with this new parasite.

And so from our perspective, we don’t realize that that’s how it began. So this is a view of what humans are. All other species on this planet are gene machines only, they don’t imitate at all well, hardly at all. We alone are gene machines and meme machines as well. The memes took a gene machine and turned it into a meme machine.

But that’s not all. We have new kind of memes now. I’ve been wondering for a long time, since I’ve been thinking about memes a lot, is there a difference between the memes that we copy — the words we speak to each other, the gestures we copy, the human things — and all these technological things around us? I have always, until now, called them all memes, but I do honestly think now we need a new word for technological memes.

Let’s call them technomemes or temes. Because the processes are getting different. We began, perhaps 5,000 years ago, with writing. We put the storage of memes out there on a clay tablet, but in order to get true temes and true teme machines, you need to get the variation, the selection and the copying, all done outside of humans. And we’re getting there. We’re at this extraordinary point where we’re nearly there, that there are machines like that. And indeed, in the short time I’ve already been at TED, I see we’re even closer than I thought we were before.

So actually, now the temes are forcing our brains to become more like teme machines. Our children are growing up very quickly learning to read, learning to use machinery. We’re going to have all kinds of implants, drugs that force us to stay awake all the time. We’ll think we’re choosing these things, but the temes are making us do it. So we’re at this cusp now of having a third replicator on our planet. Now, what about what else is going on out there in the universe? Is there anyone else out there? People have been asking this question for a long time. (…)

In 1961, Frank Drake made his famous equation, but I think he concentrated on the wrong things. It’s been very productive, that equation. He wanted to estimate N, the number of communicative civilizations out there in our galaxy. And he included in there the rate of star formation, the rate of planets, but crucially, intelligence.
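
(For reference, since the talk itself gives no formula, the standard textbook form of Drake’s equation is

\[ N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L \]

where R* is the galactic rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l, f_i and f_c the fractions of those on which life, intelligence and detectable communication respectively arise, and L the lifetime of a communicating civilization. Blackmore’s objection below is aimed at the intelligence term.)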

I think that’s the wrong way to think about it. Intelligence appears all over the place, in all kinds of guises. Human intelligence is only one kind of a thing. But what’s really important is the replicators you have and the levels of replicators, one feeding on the one before. So I would suggest that we don’t think intelligence, we think replicators.

Think of the big brain. How many mothers do we have here? You know all about big brains. They’re dangerous to give birth to, agonizing to give birth to. My cat gave birth to four kittens, purring all the time. Ah, mm — slightly different.

But not only is it painful, it kills lots of babies, it kills lots of mothers, and it’s very expensive to produce. The genes are forced into producing all this myelin, all the fat to myelinate the brain. Do you know, sitting here, your brain is using about 20 percent of your body’s energy output for two percent of your body weight. It’s a really expensive organ to run. Why? Because it’s producing the memes. (…)

Well, we did pull through, and we adapted. But now, we’re hitting, as I’ve just described, we’re hitting the third replicator point. And this is even more dangerous — well, it’s dangerous again. Why? Because the temes are selfish replicators and they don’t care about us, or our planet, or anything else. They’re just information — why would they? They are using us to suck up the planet’s resources to produce more computers, and more of all these amazing things we’re hearing about here at TED. Don’t think, “Oh, we created the Internet for our own benefit.” That’s how it seems to us. Think temes spreading because they must. We are the old machines.

Now, are we going to pull through? What’s going to happen? What does it mean to pull through? Well, there are kind of two ways of pulling through. One that is obviously happening all around us now, is that the temes turn us into teme machines, with these implants, with the drugs, with us merging with the technology. And why would they do that? Because we are self-replicating. We have babies. We make new ones, and so it’s convenient to piggyback on us, because we’re not yet at the stage on this planet where the other option is viable. (…) Where the teme machines themselves will replicate themselves. That way, it wouldn’t matter if the planet’s climate was utterly destabilized, and it was no longer possible for humans to live here. Because those teme machines, they wouldn’t need — they’re not squishy, wet, oxygen-breathing, warmth-requiring creatures. They could carry on without us.

So, those are the two possibilities. The second, I don’t think we’re that close. It’s coming, but we’re not there yet. The first, it’s coming too. But the damage that is already being done to the planet is showing us how dangerous the third point is, that third danger point, getting a third replicator. And will we get through this third danger point, like we got through the second and like we got through the first? Maybe we will, maybe we won’t. I have no idea.”

Susan Blackmore, PhD, an English freelance writer, lecturer, and broadcaster on psychology, Susan Blackmore on memes and “temes”, TED.com, Feb 2008 (transcript)

See also:

What Defines a Meme? James Gleick: Our world is a place where information can behave like human genes and ideas can replicate, mutate and evolve
☞ Adam McNamara, Can we measure memes?, Department of Psychology, University of Surrey, UK

Apr
23rd
Sat
permalink

What Defines a Meme? James Gleick: Our world is a place where information can behave like human genes and ideas can replicate, mutate and evolve


With the rise of information theory, ideas were seen as behaving like organisms, replicating by leaping from brain to brain, interacting to form new ideas and evolving in what the scientist Roger Sperry called “a burstwise advance.” (Illustration by Stuart Bradford)

"When I muse about memes, I often find myself picturing an ephemeral flickering pattern of sparks leaping from brain to brain, screaming "Me, me!"Douglas Hofstadter (1983)

"Now through the very universality of its structures, starting with the code, the biosphere looks like the product of a unique event. (…) The universe was not pregnant with life, nor the biosphere with man. Our number came up in the Monte Carlo game. Is it any wonder if, like a person who has just made a million at the casino, we feel a little strange and a little unreal?"Jacques Monod (1970)

"What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions.”Richard Dawkins (1986)

"The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment. “If you want to understand life,” [Richard] Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” (…)

The rise of information theory aided and abetted a new view of life. The genetic code—no longer a mere metaphor—was being deciphered. Scientists spoke grandly of the biosphere: an entity composed of all the earth’s life-forms, teeming with information, replicating and evolving. And biologists, having absorbed the methods and vocabulary of communications science, went further to make their own contributions to the understanding of information itself.

Jacques Monod, the Parisian biologist who shared a Nobel Prize in 1965 for working out the role of messenger RNA in the transfer of genetic information, proposed an analogy: just as the biosphere stands above the world of nonliving matter, so an “abstract kingdom” rises above the biosphere. The denizens of this kingdom? Ideas.

Ideas have retained some of the properties of organisms,” he wrote. “Like them, they tend to perpetuate their structure and to breed; they too can fuse, recombine, segregate their content; indeed they too can evolve, and in this evolution selection must surely play an important role.”

Ideas have “spreading power,” he noted—“infectivity, as it were”—and some more than others. An example of an infectious idea might be a religious ideology that gains sway over a large group of people. The American neurophysiologist Roger Sperry had put forward a similar notion several years earlier, arguing that ideas are “just as real” as the neurons they inhabit. Ideas have power, he said:

"Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet.

Monod added, “I shall not hazard a theory of the selection of ideas.” There was no need. Others were willing.

Richard Dawkins made his own jump from the evolution of genes to the evolution of ideas. For him the starring role belongs to the replicator, and it scarcely matters whether replicators were made of nucleic acid. His rule is “All life evolves by the differential survival of replicating entities.” Wherever there is life, there must be replicators. Perhaps on other worlds replicators could arise in a silicon-based chemistry—or in no chemistry at all.

What would it mean for a replicator to exist without chemistry? “I think that a new kind of replicator has recently emerged on this very planet,” Dawkins proclaimed near the end of his first book, The Selfish Gene, in 1976. “It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.” That “soup” is human culture; the vector of transmission is language, and the spawning ground is the brain.

For this bodiless replicator itself, Dawkins proposed a name. He called it the meme, and it became his most memorable invention, far more influential than his selfish genes or his later proselytizing against religiosity. “Memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation,” he wrote. They compete with one another for limited resources: brain time or bandwidth. They compete most of all for attention. For example:

Ideas. Whether an idea arises uniquely or reappears many times, it may thrive in the meme pool or it may dwindle and vanish. The belief in God is an example Dawkins offers—an ancient idea, replicating itself not just in words but in music and art. The belief that Earth orbits the Sun is no less a meme, competing with others for survival. (Truth may be a helpful quality for a meme, but it is only one among many.)

Catchphrases. One text snippet, “What hath God wrought?” appeared early and spread rapidly in more than one medium. Another, “Read my lips,” charted a peculiar path through late twentieth-century America. “Survival of the fittest” is a meme that, like other memes, mutates wildly (“survival of the fattest”; “survival of the sickest”; “survival of the fakest”; “survival of the twittest”; … ).

Images. In Isaac Newton’s lifetime, no more than a few thousand people had any idea what he looked like, though he was one of England’s most famous men, yet now millions of people have quite a clear idea, based on replicas of copies of rather poorly painted portraits. Even more pervasive and indelible are the smile of Mona Lisa, The Scream of Edvard Munch, and the silhouettes of various fictional extraterrestrials. These are memes, living a life of their own, independent of any physical reality. “This may not be what George Washington looked like then,” a tour guide was overheard saying of the Gilbert Stuart painting at the Metropolitan Museum of Art, “but this is what he looks like now.” Exactly.

Memes emerge in brains and travel outward, establishing beachheads on paper and celluloid and silicon and anywhere else information can go. They are not to be thought of as elementary particles but as organisms. The number three is not a meme; nor is the color blue, nor any simple thought, any more than a single nucleotide can be a gene. Memes are complex units, distinct and memorable—units with staying power.

Also, an object is not a meme. The hula hoop is not a meme; it is made of plastic, not of bits. When this species of toy spread worldwide in a mad epidemic in 1958, it was the product, the physical manifestation, of a meme, or memes: the craving for hula hoops; the swaying, swinging, twirling skill set of hula-hooping. The hula hoop itself is a meme vehicle. So, for that matter, is each human hula hooper—a strikingly effective meme vehicle, in the sense neatly explained by the philosopher Daniel Dennett: “A wagon with spoked wheels carries not only grain or freight from place to place; it carries the brilliant idea of a wagon with spoked wheels from mind to mind.” Hula hoopers did that for the hula hoop’s memes—and in 1958 they found a new transmission vector, broadcast television, sending its messages immeasurably faster and farther than any wagon. The moving image of the hula hooper seduced new minds by hundreds, and then by thousands, and then by millions. The meme is not the dancer but the dance.

For most of our biological history memes existed fleetingly; their main mode of transmission was the one called “word of mouth.” Lately, however, they have managed to adhere in solid substance: clay tablets, cave walls, paper sheets. They achieve longevity through our pens and printing presses, magnetic tapes and optical disks. They spread via broadcast towers and digital networks. Memes may be stories, recipes, skills, legends or fashions. We copy them, one person at a time. Alternatively, in Dawkins’ meme-centered perspective, they copy themselves.

“I believe that, given the right conditions, replicators automatically band together to create systems, or machines, that carry them around and work to favor their continued replication,” he wrote. This was not to suggest that memes are conscious actors; only that they are entities with interests that can be furthered by natural selection. Their interests are not our interests. “A meme,” Dennett says, “is an information-packet with attitude.” When we speak of fighting for a principle or dying for an idea, we may be more literal than we know.

Tinker, tailor, soldier, sailor….Rhyme and rhythm help people remember bits of text. Or: rhyme and rhythm help bits of text get remembered. Rhyme and rhythm are qualities that aid a meme’s survival, just as strength and speed aid an animal’s. Patterned language has an evolutionary advantage. Rhyme, rhythm and reason—for reason, too, is a form of pattern. I was promised on a time to have reason for my rhyme; from that time unto this season, I received nor rhyme nor reason.

Like genes, memes have effects on the wide world beyond themselves. In some cases (the meme for making fire; for wearing clothes; for the resurrection of Jesus) the effects can be powerful indeed. As they broadcast their influence on the world, memes thus influence the conditions affecting their own chances of survival. The meme or memes comprising Morse code had strong positive feedback effects. Some memes have evident benefits for their human hosts (“Look before you leap,” knowledge of CPR, belief in hand washing before cooking), but memetic success and genetic success are not the same. Memes can replicate with impressive virulence while leaving swaths of collateral damage—patent medicines and psychic surgery, astrology and satanism, racist myths, superstitions and (a special case) computer viruses. In a way, these are the most interesting—the memes that thrive to their hosts’ detriment, such as the idea that suicide bombers will find their reward in heaven.

When Dawkins first floated the meme meme, Nicholas Humphrey, an evolutionary psychologist, said immediately that these entities should be considered “living structures, not just metaphorically but technically”:

When you plant a fertile meme in my mind you literally parasitize my brain, turning it into a vehicle for the meme’s propagation in just the way that a virus may parasitize the genetic mechanism of a host cell. And this isn’t just a way of talking—the meme for, say, “belief in life after death” is actually realized physically, millions of times over, as a structure in the nervous systems of individual men the world over.”

Most early readers of The Selfish Gene passed over memes as a fanciful afterthought, but the pioneering ethologist W. D. Hamilton, reviewing the book for Science, ventured this prediction:

"Hard as this term may be to delimit-it surely must be harder than gene, which is bad enough-I suspect that it will soon be in common use by biologists and, one hopes, by philosophers, linguists, and others as well and that it may become absorbed as far as the word “gene” has been into everyday speech."

Memes could travel wordlessly even before language was born. Plain mimicry is enough to replicate knowledge—how to chip an arrowhead or start a fire. Among animals, chimpanzees and gorillas are known to acquire behaviors by imitation. Some species of songbirds learn their songs, or at least song variants, after hearing them from neighboring birds (or, more recently, from ornithologists with audio players). Birds develop song repertoires and song dialects—in short, they exhibit a birdsong culture that predates human culture by eons. These special cases notwithstanding, for most of human history memes and language have gone hand in glove. (Clichés are memes.) Language serves as culture’s first catalyst. It supersedes mere imitation, spreading knowledge by abstraction and encoding.

Perhaps the analogy with disease was inevitable. Before anyone understood anything of epidemiology, its language was applied to species of information. An emotion can be infectious, a tune catchy, a habit contagious. “From look to look, contagious through the crowd / The panic runs,” wrote the poet James Thomson in 1730. Lust, likewise, according to Milton: “Eve, whose eye darted contagious fire.” But only in the new millennium, in the time of global electronic transmission, has the identification become second nature. Ours is the age of virality: viral education, viral marketing, viral e-mail and video and networking. Researchers studying the Internet itself as a medium—crowdsourcing, collective attention, social networking and resource allocation—employ not only the language but also the mathematical principles of epidemiology.

One of the first to use the terms “viral text” and “viral sentences” seems to have been a reader of Dawkins named Stephen Walton of New York City, corresponding in 1981 with the cognitive scientist Douglas Hofstadter. Thinking logically—perhaps in the mode of a computer—Walton proposed simple self-replicating sentences along the lines of “Say me!” “Copy me!” and “If you copy me, I’ll grant you three wishes!” Hofstadter, then a columnist for Scientific American, found the term “viral text” itself to be even catchier.

"Well, now, Walton’s own viral text, as you can see here before your eyes, has managed to commandeer the facilities of a very powerful host—an entire magazine and printing press and distribution service. It has leapt aboard and is now—even as you read this viral sentence—propagating itself madly throughout the ideosphere!”

Hofstadter gaily declared himself infected by the meme meme.

One source of resistance—or at least unease—was the shoving of us humans toward the wings. It was bad enough to say that a person is merely a gene’s way of making more genes. Now humans are to be considered as vehicles for the propagation of memes, too. No one likes to be called a puppet. Dennett summed up the problem this way: “I don’t know about you, but I am not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people’s ideas renew themselves, before sending out copies of themselves in an informational diaspora…. Who’s in charge, according to this vision—we or our memes?”

He answered his own question by reminding us that, like it or not, we are seldom “in charge” of our own minds. He might have quoted Freud; instead he quoted Mozart (or so he thought):

“In the night when I cannot sleep, thoughts crowd into my mind…. Whence and how do they come? I do not know and I have nothing to do with it. Those which please me I keep in my head and hum them” (…)

Later Dennett was informed that this well-known quotation was not Mozart’s after all. It had taken on a life of its own; it was a fairly successful meme.

For anyone taken with the idea of memes, the landscape was changing faster than Dawkins had imagined possible in 1976, when he wrote, “The computers in which memes live are human brains.” By 1989, the time of the second edition of The Selfish Gene, having become an adept programmer himself, he had to amend that: “It was obviously predictable that manufactured electronic computers, too, would eventually play host to self-replicating patterns of information.” Information was passing from one computer to another “when their owners pass floppy discs around,” and he could see another phenomenon on the near horizon: computers connected in networks. “Many of them,” he wrote, “are literally wired up together in electronic mail exchange…. It is a perfect milieu for self-replicating programs to flourish.” Indeed, the Internet was in its birth throes. Not only did it provide memes with a nutrient-rich culture medium, it also gave wings to the idea of memes. Meme itself quickly became an Internet buzzword. Awareness of memes fostered their spread. (…)

Is this science? In his 1983 column, Hofstadter proposed the obvious memetic label for such a discipline: memetics. The study of memes has attracted researchers from fields as far apart as computer science and microbiology. In bioinformatics, chain letters are an object of study. They are memes; they have evolutionary histories. The very purpose of a chain letter is replication; whatever else a chain letter may say, it embodies one message: Copy me. One student of chain-letter evolution, Daniel W. VanArsdale, listed many variants, in chain letters and even earlier texts: “Make seven copies of it exactly as it is written” (1902); “Copy this in full and send to nine friends” (1923); “And if any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life” (Revelation 22:19). Chain letters flourished with the help of a new 19th-century technology: “carbonic paper,” sandwiched between sheets of writing paper in stacks. Then carbon paper made a symbiotic partnership with another technology, the typewriter. Viral outbreaks of chain letters occurred all through the early 20th century. (…) Two subsequent technologies, when their use became widespread, provided orders-of-magnitude boosts in chain-letter fecundity: photocopying (c. 1950) and e-mail (c. 1995). (…)

Inspired by a chance conversation on a hike in the Hong Kong mountains, information scientists Charles H. Bennett from IBM in New York and Ming Li and Bin Ma from Ontario, Canada, began an analysis of a set of chain letters collected during the photocopier era. They had 33, all variants of a single letter, with mutations in the form of misspellings, omissions and transposed words and phrases. “These letters have passed from host to host, mutating and evolving,” they reported in 2003.

Like a gene, their average length is about 2,000 characters. Like a potent virus, the letter threatens to kill you and induces you to pass it on to your “friends and associates”—some variation of this letter has probably reached millions of people. Like an inheritable trait, it promises benefits for you and the people you pass it on to. Like genomes, chain letters undergo natural selection and sometimes parts even get transferred between coexisting “species.”

Reaching beyond these appealing metaphors, the three researchers set out to use the letters as a “test bed” for algorithms used in evolutionary biology. The algorithms were designed to take the genomes of various modern creatures and work backward, by inference and deduction, to reconstruct their phylogeny—their evolutionary trees. If these mathematical methods worked with genes, the scientists suggested, they should work with chain letters, too. In both cases the researchers were able to verify mutation rates and relatedness measures.
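
(The published analysis used more elaborate machinery, but the basic move can be sketched: score how related each pair of text variants is, then cluster the variants into a tree, much as one would with genomes. Everything in the sketch below is illustrative and assumed, not taken from the Bennett, Li and Ma study: the sample letters, the compression-based distance, and the clustering choices are all stand-ins.)

```python
import zlib
from itertools import combinations
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical stand-ins for photocopied chain-letter variants; the real study
# worked with 33 collected letters, not these toy strings.
letters = {
    "A": "make seven copies of this letter and send them to your friends",
    "B": "make seven copies of this leter and send them to your freinds",
    "C": "copy this letter nine times and mail it to your associates",
    "D": "copy this letter nine times and send it to your associates",
}

def ncd(x, y):
    # Normalized compression distance: an off-the-shelf way of scoring how much
    # two texts share, standing in here for the study's own distance measures.
    cx = len(zlib.compress(x.encode()))
    cy = len(zlib.compress(y.encode()))
    cxy = len(zlib.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

names = sorted(letters)
# Condensed pairwise-distance vector, in the order SciPy expects (i < j).
distances = [ncd(letters[a], letters[b]) for a, b in combinations(names, 2)]

# Agglomerative clustering builds a tree from the distances, a rough analogue
# of reconstructing a phylogeny from related genomes.
tree = linkage(distances, method="average")
leaf_order = dendrogram(tree, labels=names, no_plot=True)["ivl"]
print("inferred grouping (leaf order):", leaf_order)
```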

Still, most of the elements of culture change and blur too easily to qualify as stable replicators. They are rarely as neatly fixed as a sequence of DNA. Dawkins himself emphasized that he had never imagined founding anything like a new science of memetics. A peer-reviewed Journal of Memetics came to life in 1997—published online, naturally—and then faded away after eight years partly spent in self-conscious debate over status, mission and terminology. Even compared with genes, memes are hard to mathematize or even to define rigorously. So the gene-meme analogy causes uneasiness and the genetics-memetics analogy even more.

Genes at least have a grounding in physical substance. Memes are abstract, intangible and unmeasurable. Genes replicate with near-perfect fidelity, and evolution depends on that: some variation is essential, but mutations need to be rare. Memes are seldom copied exactly; their boundaries are always fuzzy, and they mutate with a wild flexibility that would be fatal in biology. The term “meme” could be applied to a suspicious cornucopia of entities, from small to large. For Dennett, the first four notes of Beethoven’s Fifth Symphony (quoted above) were “clearly” a meme, along with Homer’s Odyssey (or at least the idea of the Odyssey), the wheel, anti-Semitism and writing. “Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”

Yet here they are. As the arc of information flow bends toward ever greater connectivity, memes evolve faster and spread farther. Their presence is felt if not seen in herd behavior, bank runs, informational cascades and financial bubbles. Diets rise and fall in popularity, their very names becoming catchphrases—the South Beach Diet and the Atkins Diet, the Scarsdale Diet, the Cookie Diet and the Drinking Man’s Diet all replicating according to a dynamic about which the science of nutrition has nothing to say. Medical practice, too, experiences “surgical fads” and “iatro-epidemics”—epidemics caused by fashions in treatment—like the iatro-epidemic of children’s tonsillectomies that swept the United States and parts of Europe in the mid-20th century. Some false memes spread with disingenuous assistance, like the apparently unkillable notion that Barack Obama was not born in Hawaii. And in cyberspace every new social network becomes a new incubator of memes. Making the rounds of Facebook in the summer and fall of 2010 was a classic in new garb:

Sometimes I Just Want to Copy Someone Else’s Status, Word for Word, and See If They Notice.

Then it mutated again, and in January 2011 Twitter saw an outbreak of:

One day I want to copy someone’s Tweet word for word and see if they notice.

By then one of the most popular of all Twitter hashtags (the “hashtag” being a genetic—or, rather, memetic—marker) was simply the word “#Viral.”

In the competition for space in our brains and in the culture, the effective combatants are the messages. The new, oblique, looping views of genes and memes have enriched us. They give us paradoxes to write on Möbius strips. “The human world is made of stories, not people,” writes the novelist David Mitchell. “The people the stories use to tell themselves are not to be blamed.” Margaret Atwood writes: “As with all knowledge, once you knew it, you couldn’t imagine how it was that you hadn’t known it before. Like stage magic, knowledge before you knew it took place before your very eyes, but you were looking elsewhere.” Nearing death, John Updike reflected on

A life poured into words—apparent waste intended to preserve the thing consumed.

Fred Dretske, a philosopher of mind and knowledge, wrote in 1981: “In the beginning there was information. The word came later.” He added this explanation: “The transition was achieved by the development of organisms with the capacity for selectively exploiting this information in order to survive and perpetuate their kind.” Now we might add, thanks to Dawkins, that the transition was achieved by the information itself, surviving and perpetuating its kind and selectively exploiting organisms.

Most of the biosphere cannot see the infosphere; it is invisible, a parallel universe humming with ghostly inhabitants. But they are not ghosts to us—not anymore. We humans, alone among the earth’s organic creatures, live in both worlds at once. It is as though, having long coexisted with the unseen, we have begun to develop the needed extrasensory perception. We are aware of the many species of information. We name their types sardonically, as though to reassure ourselves that we understand: urban myths and zombie lies. We keep them alive in air-conditioned server farms. But we cannot own them. When a jingle lingers in our ears, or a fad turns fashion upside down, or a hoax dominates the global chatter for months and vanishes as swiftly as it came, who is master and who is slave?”

James Gleick, American author, journalist, biographer, Pulitzer Prize laureate, What Defines a Meme?, Smithsonian Magazine, May 2011. (Adapted from The Information: A History, A Theory, A Flood, by James Gleick)

See also:

Susan Blackmore on memes and “temes”
☞ Adam McNamara, Can we measure memes?, Department of Psychology, University of Surrey, UK
James Gleick: Bits and Bytes - How the language, information transformed humanity, (video) Fora.tv, May, 19, 2011
James Gleick on information: The basis of the universe isn’t matter or energy — it’s data

Apr
20th
Wed
permalink

The Chomsky School of Language (infographic)



Noam Chomsky, an American linguist, philosopher, cognitive scientist, and political activist. He is an Institute Professor and professor emeritus of linguistics at the Massachusetts Institute of Technology.

Infographic: Voxy, Apr 19, 2011