Lapidarium notes RSS

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Homepage
Twitter
Facebook

A Box Of Stories
Reading Space

Contact

Archive

Sep
29th
Sun
permalink

Kevin Kelly: The Improbable is the New Normal

"The improbable consists of more than just accidents. The internets are also brimming with improbable feats of performance — someone who can run up a side of a building, or slide down suburban roof tops, or stack up cups faster than you can blink. Not just humans, but pets open doors, ride scooters, and paint pictures. The improbable also includes extraordinary levels of super human achievements: people doing astonishing memory tasks, or imitating all the accents of the world. In these extreme feats we see the super in humans.

Every minute a new impossible thing is uploaded to the internet and that improbable event becomes just one of hundreds of extraordinary events that we’ll see or hear about today. The internet is like a lens which focuses the extraordinary into a beam, and that beam has become our illumination. It compresses the unlikely into a small viewable band of everyday-ness. As long as we are online - which is almost all day many days — we are illuminated by this compressed extraordinariness. It is the new normal.

That light of super-ness changes us. We no longer want mere presentations, we want the best, greatest, the most extraordinary presenters alive, as in TED. We don’t want to watch people playing games, we want to watch the highlights of the highlights, the most amazing moves, catches, runs, shots, and kicks, each one more remarkable and improbable than the other.

We are also exposed to the greatest range of human experience, the heaviest person, shortest midgets, longest mustache — the entire universe of superlatives! Superlatives were once rare — by definition — but now we see multiple videos of superlatives all day long, and they seem normal. Humans have always treasured drawings and photos of the weird extremes of humanity (early National Geographics), but there is an intimacy about watching these extremities on video on our phones while we wait at the dentist. They are now much realer, and they fill our heads.

I see no end to this dynamic. Cameras are becoming ubiquitous, so as our collective recorded life expands, we’ll accumulate thousands of videos showing people being struck by lightning. When we all wear tiny cameras all the time, then the most improbable accident, the most superlative achievement, the most extreme actions of anyone alive will be recorded and shared around the world in real time. Soon only the most extraordinary moments of our 6 billion citizens will fill our streams. So henceforth rather than be surrounded by ordinariness we’ll float in extraordinariness. (…)

When the improbable dominates the archive to the point that it seems as if the library contains ONLY the impossible, then these improbabilities don’t feel as improbable. (…)

To the uninformed, the increased prevalence of improbable events will make it easier to believe in impossible things. A steady diet of coincidences makes it easy to believe they are more than just coincidences, right? But to the informed, a slew of improbable events makes it clear that the unlikely sequence, the outlier, the black swan event, must be part of the story. After all, in 100 flips of the penny you are just as likely to get 100 heads in a row as any other sequence. But in both cases, when improbable events dominate our view — when we see an internet river streaming nothing but 100 heads in a row — it makes the improbable more intimate, nearer.

I am unsure of what this intimacy with the improbable does to us. What happens if we spend all day exposed to the extremes of life, to a steady stream of the most improbable events, and try to run ordinary lives in a background hum of superlatives? What happens when the extraordinary becomes ordinary?

The good news may be that it cultivates in us an expanded sense of what is possible for humans, and for human life, and so expand us. The bad news may be that this insatiable appetite for super-superlatives leads to dissatisfaction with anything ordinary.”

Kevin Kelly, founding executive editor of Wired magazine and former editor/publisher of the Whole Earth Catalog, The Improbable is the New Normal, The Technium, 7 Jan, 2013. (Photo source)
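Kelly’s penny-flip aside can be checked directly: any one exact sequence of 100 fair flips — all heads included — has the same probability, 2⁻¹⁰⁰. A minimal sketch (the comparison sequence is arbitrary):

```python
from fractions import Fraction

def sequence_probability(seq):
    """Exact probability of observing one particular sequence of fair coin flips."""
    return Fraction(1, 2) ** len(seq)

all_heads = "H" * 100
arbitrary = "HTHHTTHTHT" * 10  # any other fixed 100-flip sequence

# Both specific sequences are equally (im)probable.
assert sequence_probability(all_heads) == sequence_probability(arbitrary)
print(sequence_probability(all_heads))  # 1/1267650600228229401496703205376
```

What makes a run of 100 heads feel special is not its probability but the fact that we single it out as a pattern — which is exactly why a stream curated for the extraordinary distorts our intuitions.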

Apr
27th
Sat
permalink

The Rise of Big Data. How It’s Changing the Way We Think About the World


"In the third century BC, the Library of Alexandria was believed to house the sum of human knowledge. Today, there is enough information in the world to give every person alive 320 times as much of it as historians think was stored in Alexandria’s entire collection — an estimated 1,200 exabytes’ worth. If all this information were placed on CDs and they were stacked up, the CDs would form five separate piles that would all reach to the moon. (…)

Using big data will sometimes mean forgoing the quest for why in return for knowing what. (…)

There will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity. (…)

Datafication is not the same as digitization, which takes analog content — books, films, photographs — and converts it into digital information, a sequence of ones and zeros that computers can read. Datafication is a far broader activity: taking all aspects of life and turning them into data. Google’s augmented-reality glasses datafy the gaze. Twitter datafies stray thoughts. LinkedIn datafies professional networks.

Once we datafy things, we can transform their purpose and turn the information into new forms of value. For example, IBM was granted a U.S. patent in 2012 for “securing premises using surface-based computing technology” — a technical way of describing a touch-sensitive floor covering, somewhat like a giant smartphone screen. Datafying the floor can open up all kinds of possibilities. The floor would be able to identify the objects on it, so that it might know to turn on lights in a room or open doors when a person entered. Moreover, it might identify individuals by their weight or by the way they stand and walk. (…)

This misplaced trust in data can come back to bite. Organizations can be beguiled by data’s false charms and endow more meaning to the numbers than they deserve. That is one of the lessons of the Vietnam War. U.S. Secretary of Defense Robert McNamara became obsessed with using statistics as a way to measure the war’s progress. He and his colleagues fixated on the number of enemy fighters killed. Relied on by commanders and published daily in newspapers, the body count became the data point that defined an era. To the war’s supporters, it was proof of progress; to critics, it was evidence of the war’s immorality. Yet the statistics revealed very little about the complex reality of the conflict. The figures were frequently inaccurate and were of little value as a way to measure success. Although it is important to learn from data to improve lives, common sense must be permitted to override the spreadsheets. (…)

Ultimately, big data marks the moment when the “information society” finally fulfills the promise implied by its name. The data take center stage. All those digital bits that have been gathered can now be harnessed in novel ways to serve new purposes and unlock new forms of value. But this requires a new way of thinking and will challenge institutions and identities. In a world where data shape decisions more and more, what purpose will remain for people, or for intuition, or for going against the facts? If everyone appeals to the data and harnesses big-data tools, perhaps what will become the central point of differentiation is unpredictability: the human element of instinct, risk taking, accidents, and even error. If so, then there will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity to ensure that they are not crowded out by data and machine-made answers.

This has important implications for the notion of progress in society. Big data enables us to experiment faster and explore more leads. These advantages should produce more innovation. But at times, the spark of invention becomes what the data do not say. That is something that no amount of data can ever confirm or corroborate, since it has yet to exist. If Henry Ford had queried big-data algorithms to discover what his customers wanted, they would have come back with “a faster horse,” to recast his famous line. In a world of big data, it is the most human traits that will need to be fostered — creativity, intuition, and intellectual ambition — since human ingenuity is the source of progress.

Big data is a resource and a tool. It is meant to inform, rather than explain; it points toward understanding, but it can still lead to misunderstanding, depending on how well it is wielded. And however dazzling the power of big data appears, its seductive glimmer must never blind us to its inherent imperfections. Rather, we must adopt this technology with an appreciation not just of its power but also of its limitations.”

Kenneth Neil Cukier and Viktor Mayer-Schoenberger, The Rise of Big Data, Foreign Affairs, May/June 2013. (Photo: John Elk)
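The CD-stack image above can be sanity-checked with rough arithmetic. Assuming ~700 MB capacity and ~1.2 mm thickness per disc (typical figures, not from the article) and the mean Earth–Moon distance, 1,200 exabytes does come out at roughly five moon-high piles:

```python
EXABYTE = 10**18
data_bytes = 1200 * EXABYTE      # the article's estimate of the world's information
cd_capacity = 700 * 10**6        # ~700 MB per CD (assumed)
cd_thickness_m = 1.2e-3          # ~1.2 mm per disc (assumed)
moon_distance_m = 3.844e8        # mean Earth-Moon distance in metres

num_cds = data_bytes / cd_capacity
stack_height_m = num_cds * cd_thickness_m
piles_to_moon = stack_height_m / moon_distance_m

print(round(piles_to_moon, 1))  # ~5.4: about five piles reaching the moon
```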

See also:

Dirk Helbing on A New Kind Of Socio-inspired Technology
Information tag on Lapidarium notes

Mar
3rd
Sun
permalink

Rolf Dobelli: News is to the mind what sugar is to the body


"We humans seem to be natural-born signal hunters; we’re terrible at regulating our intake of information. We’ll consume a ton of noise if we sense we may discover an added ounce of signal. So our instinct is at war with our capacity for making sense.”

Nicholas Carr, A little more signal, a lot more noise, Rough Type, May 30, 2012.

"When people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix. The world becomes Keats’s “waking dream,” as the writer Kevin Kelly puts it.”

Adam Gopnik on The Information and How the Internet gets inside us, 2011

"Our brains are wired to pay attention to visible, large, scandalous, sensational, shocking, people-related, story-formatted, fast-changing, loud, graphic onslaughts of stimuli. Our brains have limited attention to spend on more subtle pieces of intelligence that are small, abstract, ambivalent, complex, slow to develop and quiet, much less silent. News organizations systematically exploit this bias. News media outlets, by and large, focus on the highly visible. They display whatever information they can convey with gripping stories and lurid pictures, and they systematically ignore the subtle and insidious, even if that material is more important. News grabs our attention; that’s how its business model works. Even if the advertising model didn’t exist, we would still soak up news pieces because they are easy to digest and superficially quite tasty. The highly visible misleads us. (…)

  • Terrorism is overrated. Chronic stress is underrated.
  • The collapse of Lehman Brothers is overrated. Fiscal irresponsibility is underrated.
  • Astronauts are overrated. Nurses are underrated.
  • Britney Spears is overrated. IPCC reports are underrated.
  • Airplane crashes are overrated. Resistance to antibiotics is underrated.

(…)

Afraid you will miss “something important”? From my experience, if something really important happens, you will hear about it, even if you live in a cocoon that protects you from the news. Friends and colleagues will tell you about relevant events far more reliably than any news organization. They will fill you in with the added benefit of meta-information, since they know your priorities and you know how they think. You will learn far more about really important events and societal shifts by reading about them in specialized journals, in-depth magazines or good books and by talking to the people who know. (…)

The more “news factoids” you digest, the less of the big picture you will understand. (…)

Thinking requires concentration. Concentration requires uninterrupted time. News items are like free-floating radicals that interfere with clear thinking. News pieces are specifically engineered to interrupt you. They are like viruses that steal attention for their own purposes. (…)

This is about the inability to think clearly because you have opened yourself up to the disruptive factoid stream. News makes us shallow thinkers. But it’s worse than that. News severely affects memory. (…)

News is an interruption system. It seizes your attention only to scramble it. Besides a lack of glucose in your blood stream, news distraction is the biggest barricade to clear thinking. (…)

In the words of Professor Michael Merzenich (University of California, San Francisco), a pioneer in the field of neuroplasticity: “We are training our brains to pay attention to the crap.” (…)

Good professional journalists take time with their stories, authenticate their facts and try to think things through. But like any profession, journalism has some incompetent, unfair practitioners who don’t have the time – or the capacity – for deep analysis. You might not be able to tell the difference between a polished professional report and a rushed, glib, paid-by-the-piece article by a writer with an ax to grind. It all looks like news.

My estimate: fewer than 10% of the news stories are original. Less than 1% are truly investigative. And only once every 50 years do journalists uncover a Watergate.

Many reporters cobble together the rest of the news from other people’s reports, common knowledge, shallow thinking and whatever the journalist can find on the internet. Some reporters copy from each other or refer to old pieces, without necessarily catching up with any interim corrections. The copying and the copying of the copies multiply the flaws in the stories and their irrelevance. (…)

Overwhelming evidence indicates that forecasts by journalists and by experts in finance, social development, global conflicts and technology are almost always completely wrong. So, why consume that junk?

Did the newspapers predict World War I, the Great Depression, the sexual revolution, the fall of the Soviet empire, the rise of the Internet, resistance to antibiotics, the fall of Europe’s birth rate or the explosion in depression cases? Maybe you’d find one or two correct predictions in a sea of millions of mistaken ones. Incorrect forecasts are not only useless, they are harmful.

To increase the accuracy of your predictions, cut out the news and roll the dice or, if you are ready for depth, read books and knowledgeable journals to understand the invisible generators that affect our world. (…)

I have now gone without news for a year, so I can see, feel and report the effects of this freedom first hand: less disruption, more time, less anxiety, deeper thinking, more insights. It’s not easy, but it’s worth it.”

Table of Contents:

No 1 – News misleads us systematically
No 2 – News is irrelevant
No 3 – News limits understanding
No 4 – News is toxic to your body
No 5 – News massively increases cognitive errors
No 6 – News inhibits thinking
No 7 – News changes the structure of your brain
No 8 – News is costly
No 9 – News sunders the relationship between reputation and achievement
No 10 – News is produced by journalists
No 11 – Reported facts are sometimes wrong, forecasts always
No 12 – News is manipulative
No 13 – News makes us passive
No 14 – News gives us the illusion of caring
No 15 – News kills creativity

Rolf Dobelli, Swiss novelist, writer, entrepreneur and curator of zurich.minds, to read full essay click Avoid News. Towards a Healthy News Diet (pdf), 2010. (Illustration: Information Overload by taylorboren)

See also:

The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Nicholas Carr on the evolution of communication technology and our compulsive consumption of information
Does Google Make Us Stupid?
Nicholas Carr on what the internet is doing to our brains?
How the Internet Affects Our Memories: Cognitive Consequences of Having Information at Our Fingertips
☞ Dr Paul Howard-Jones, The impact of digital technologies on human wellbeing (pdf), University of Bristol
William Deresiewicz on multitasking and the value of solitude
Information tag on Lapidarium

Jan
22nd
Tue
permalink

Kevin Slavin: How algorithms shape our world

“But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?

People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous.

The same ambiguity that motivated dubious academic AI projects in the past has been repackaged as mass culture today. Did that search engine really know what you want, or are you playing along, lowering your standards to make it seem clever? While it’s to be expected that the human perspective will be changed by encounters with profound new technologies, the exercise of treating machine intelligence as real requires people to reduce their mooring to reality.”

Jaron Lanier, You are Not a Gadget (2010)

Kevin Slavin argues that we’re living in a world designed for — and increasingly controlled by — algorithms. In this riveting talk from TEDGlobal, he shows how these complex computer programs determine espionage tactics, stock prices, movie scripts, and architecture.

"We’re writing things (…) that we can no longer read. And we’ve rendered something illegible, and we’ve lost the sense of what’s actually happening in this world that we’ve made. (…)

“We’re running through the United States with dynamite and rock saws so that an algorithm can close the deal three microseconds faster, all for a communications framework that no human will ever know; that’s a kind of manifest destiny.”

Kevin Slavin, entrepreneur, raconteur, and Assistant Professor of Media Arts and Sciences at the MIT Media Lab, Kevin Slavin: How algorithms shape our world, TED, July 2011.

See also:

☞ Jane Wakefield, When algorithms control the world, BBC, Aug 23, 2011.

permalink

Nicholas Carr on the meaning of ‘searching’ these days


"All collected data had come to a final end. Nothing was left to be collected. But all collected data had yet to be completely correlated and put together in all possible relationships. A timeless interval was spent doing that."

— Isaac Asimov, “The Last Question”, cited in John Battelle's The Search

"When we talk about “searching” these days, we’re almost always talking about using Google to find something online. That’s quite a twist for a word that has long carried existential connotations, that has been bound up in our sense of what it means to be conscious and alive. We don’t just search for car keys or missing socks. We search for truth and meaning, for love, for transcendence, for peace, for ourselves. To be human is to be a searcher.

In its highest form, a search has no well-defined object. It’s open-ended, an act of exploration that takes us out into the world, beyond the self, in order to know the world, and the self, more fully. T. S. Eliot expressed this sense of searching in his famously eloquent lines from “Little Gidding”:

We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.

Google searches have always been more cut and dried, keyed as they are to particular words or phrases. But in its original conception, the Google search engine did transport us into a messy and confusing world—the world of the web—with the intent of helping us make some sense of it. It pushed us outward, away from ourselves. It was a means of exploration. That’s much less the case now. Google’s conception of searching has changed markedly since those early days, and that means our own idea of what it means to search is changing as well.

Google’s goal is no longer to read the web. It’s to read us. Ray Kurzweil, the inventor and AI speculator, recently joined the company as its director of research. His general focus will be on machine learning and natural language processing. But his particular concern, as he said in a recent interview, will entail reconfiguring the company’s search engine to focus not outwardly on the world but inwardly on the user:

“I envision some years from now that the majority of search queries will be answered without you actually asking. It’ll just know this is something that you’re going to want to see.” While it may take some years to develop this technology, Kurzweil added that he personally thinks it will be embedded into what Google offers currently, rather than as a stand-alone product necessarily.

(…) Back in 2006, Eric Schmidt, then the company’s CEO, said that Google’s “ultimate product” would be a service that would “tell me what I should be typing.” It would give you an answer before you asked a question, obviating the need for searching entirely. (…)

In its new design, Google’s search engine doesn’t push us outward; it turns us inward. It gives us information that fits the behavior and needs and biases we have displayed in the past, as meticulously interpreted by Google’s algorithms. Because it reinforces the existing state of the self rather than challenging it, it subverts the act of searching. We find out little about anything, least of all ourselves, through self-absorption. (…)

To be turned inward, to listen to speech that is only a copy, or reflection, of our own speech, is to keep the universe alone. To free ourselves from that prison — the prison we now call personalization — we need to voyage outward to discover “counter-love,” to hear “original response.” As Frost understood, a true search is as dangerous as it is essential. It’s about breaking the shackles of the self, not tightening them.

There was a time, back when Larry Page and Sergey Brin were young and naive and idealistic, that Google spoke to us with the voice of original response. Now, what Google seeks to give us is copy speech, our own voice returned to us.”

Nicholas Carr, American writer who has published books and articles on technology, business, and culture, The searchers, Rough Type, Jan 13, 2013.

See also:

The Filter Bubble: Eli Pariser on What the Internet Is Hiding From You
☞ Tim Adams, Google and the future of search: Amit Singhal and the Knowledge Graph, The Observer, 19 January 2013.

Oct
29th
Mon
permalink

A Book is Technology: An Interview with Tan Lin — “Reading is a kind of integrated software”


"People forget that a book or codex is a technology. My interest with HEATH and 7CV was to treat the book as a distinct medial platform through which a lot of ancillary information passes, much like a broadcast medium like TV or a narrow-cast medium like Twitter or Tumblr. Reading is information control, just as a metadata tag is a bibliographic control. So I wanted to highlight the book’s medial and time-based underpinnings. (…)

A book in Google Books, like someone’s search history, isn’t really a book; it’s data connected to other data, and it’s searchable. Reading, like autobiography, is a subset of a search function. (…)

"Reading is a kind of integrated software."

Integrated software is a genre of software that combines word processing, database management, and spreadsheet applications, and communications platforms. This genre has been superseded by various full-function office suites, but I was interested in reading modelled in that way, i.e., different kinds of reading, each with specific functions. I mean, you read Harlequin romances differently than recipes, and you read Lotus 1-2-3 spreadsheets differently than you read Excel, and you read experimental Japanese novels differently than you read text messages, and in terms of documents processed by software, you have distinctions between, say, end-user manuals, bills of sales, Unified Modeling Language models, and legal contracts. These are genres of reading, and they’re housed or processed in the same generic platform that I call “reading.” So reading is an application that processes or assembles varied kinds of material. I was interested in creating works of literature that could be read like recipes or spreadsheets or PowerPoint presentations.  (…)

I think it’s a way to talk about new modalities of reading. In software engineering, the authoring is sometimes implemented with what are called frames, where kinds of (reading or processing) functionality are packed into frames, and where a frame is “a generic component in a hierarchy of nested subassemblies” (Wikipedia). You’ll have word processing frames and graphics frames, etc., and these individual frames can be linked in a unified programming system. This enables you to embed graphics and spreadsheet functions into a text document, or you can have shared graphical contexts, where material pops up in multiple frames at the same time—this, I think, is what is happening in 7CV with its graphical elements, text elements, processing text instructions in the form of prefaces (so-called “source” material) and meta data tags. I also inserted other languages: Chinese and machine codes. 7CV has various things in it that look like captions or interfaces or even bits of source code, and I was interested in the difference between a caption and a bit of machine code in a book. If you look at the handwritten Chinese text in 7CV (it was written by my mother) you’ll notice that it was put in upside down by the typesetter! This is not true of the machine-generated Chinese, provided by Google Translate. But at any rate you have a complex ecosystem of different languages in a single publishing/reading platform.

I assembled both PowerPoint works similarly. Bibliographic Sound Track was compiled from SMS, IM chats, video game walk-throughs, Tweets, Tumblr entries, PowerPoint bullet points, photographic slides, the overhead transparency, the text box, the couplet, the book page, the fading film titling sequence, etc. PowerPoint is a multimedia ecosystem that encompasses a wide variety of reading practices, and where each slide or page is a frame: modular, linked to other frames, and encompassing various platform specific reading or communications functions. So here was a generic poem, where a poem is the most varied collection of different material that could be read continuously in a time-based manner with a definite run time. Reading can be looped. That, I think, is the definition of a poem today!

Q: What are the differences between your PowerPoint works and your print books?  

The most obvious difference is that when you read a book or codex, the only thing moving is your eye; with the PowerPoint works, both text and eye are moving. In this sense, PowerPoint makes reading autonomous and it sets it in motion, literally: Individual slides are animated, slide transitions are animated, and the piece overall is software that is processing information. That’s why we turned out the lights during the screening and projected large: No one expects to go to the cinema and read a book on the screen, one word at a time, but that’s kind of what I wanted to do. The most beautiful thing is a book that could read itself! So reading is a kind of integrated software or the frame technology that manufactures software, and a book is the software application that is manufactured.

But I think there are a lot of similarities between digital and print-based reading experiences. The PowerPoint pieces, like my books, all bracket reading in a larger perceptual (and social) field that includes smells and sounds, i.e., they situate reading in a larger geography or reading environment. People tend to forget that reading is a kind of all-over experience, and it takes place in a particular room or in a particular moment of childhood. So the idea was to not confine reading to a particular object (book) or platform (PowerPoint) but allow it to expand outwards into the social space around it. I was more interested in what might be called the general mood of reading: the overall atmosphere or medium in which we experience our daily thoughts and perform actions—what Heidegger termed Stimmung and the psychologist Daniel Stern calls affective or amodal attunements. Bibliographic Sound Track is a mood-based system, but so is HEATH. And these mood-based systems, which are common to Zen meditative states, are bottom-up, non-directed, allotropic modes of general receptiveness rather than top-down, attention-based focus on specific objects or things. A book, at bottom, is a very general and very generic thing (that we happen to be reading). (…)

I’m not so interested in knowledge in that teleological sense; I’m more interested in the dissipation of knowledge, unfocused attention, and generic receptiveness. It would be nice if a book could reduce the amount of knowledge in the air. I’m equally interested in the public and communal architecture of reading practices as they intersect with individuals and park benches, the subway and the seminar room. Why can’t a book be more like a perfume? Or a door? Or the year after we graduated from college? A perfume is a communications medium just as literature is. Moods, furniture, restaurants, and books are communications mediums. What is it that Warhol said? “I think the right hormones can make Chanel No. 5 smell very butch.” (…)

For me, I think of reading as data management rather than passive absorption on a couch, though these dichotomies are ultimately false. Reading is and probably always will be a bit of both. At any rate, ideas about information processing are altering the contours of printed and digital works. Suddenly the book is just one element in a larger system of textual controls, distribution models, and controlled vocabulary systems. This is certainly true of the two PowerPoint works. I mean what are they? Are they poems or are they more like Twitter feeds? They don’t seem like PowerPoint presentations because they’re weak didactically and they don’t make a point. They are inflected by communications devices, but they do have a rhythm, which poems tend to have! And likewise with Twitter. Is it a broadcast medium using a pull system much like an RSS feed? Or is it more of a storage device, like a scroll or a poem? The idea of a network as a platform for collaborative work (rather than software housed on an individual’s desktop) might be applied to a book, no longer regarded as discrete, stand-alone object but as something that gets updated on a periodic basis in a social network. But this may not be that new an idea. After all, David Hume praised the printing press because it made it possible to issue countless emendations, revisions, and new editions.

Q: Can you state briefly what you see as the future of the book?

Let’s return for a moment to the bootleg by Westphalie Verlag in Vienna. Did the publisher David Jourdan in this case create what, under U.S. copyright law, would be termed “strong” copyleft, where the derivative work is “based on the program” and has a “clear will” to extend it to “dynamic linkage”? At this point, we are talking about software development licensing, shared libraries, primary access to source code, site linkages, share and share alike provisions, and software pools. My question is: Can a book be made to look like the authoring of such software, caught in a complicated licensing and development system? I think so! Maybe that’s the future of the book: to look like a licensing agreement regarding the future dissemination of its own information.”

Tan Lin, Associate Professor of English and Creative Writing, New Jersey City University, A Book is Technology: An Interview with Tan Lin, Rhizome, Oct 24th, 2012. (Illustration source)

Jul
24th
Tue
permalink

Dirk Helbing on A New Kind Of Socio-inspired Technology

“The big unexplored continent in science is actually social science, so we really need to understand much better the principles that make our society work well, and socially interactive systems. Our future information society will be characterized by computers that behave like humans in many respects. In ten years from now, we will have computers as powerful as our brain, and that will really fundamentally change society. Many professional jobs will be done much better by computers. How will that change society? How will that change business? What impacts does that have for science, actually?

There are two big global trends. One is big data. That means in the next ten years we’ll produce as much data as, or even more than, in the past 1,000 years. The other trend is hyperconnectivity. That means the networking of our world is going on at a rapid pace; we’re creating an Internet of things. So everyone is talking to everyone else, and everything becomes interdependent. What are the implications of that? (…)

But on the other hand, it turns out that we are, at the same time, creating highways for disaster spreading. We see many extreme events, we see problems such as the flash crash, or also the financial crisis. That is related to the fact that we have interconnected everything. In some sense, we have created unstable systems. We can show that many of the global trends that we are seeing at the moment, like increasing connectivity, speed, and complexity, are very good in the beginning, but (and this is kind of surprising) there is a turning point, and that turning point can turn into a tipping point that makes the systems shift in an unknown way.

Understanding our systems requires two things: social science and complexity science. Social science, because the computers of tomorrow are basically creating artificial social systems. Just take financial trading today; it’s done by the most powerful computers. These computers are creating a view of the environment, in this case the financial world. They’re making projections into the future. They’re communicating with each other. They really have many features of humans. And that basically establishes an artificial society, which means we may also have all the problems that we are facing in society if we don’t design these systems well. The flash crash is just one of those examples that shows that, if many of those components — the computers in this case — interact with each other, then some surprising effects can happen. And in that case, $600 billion actually evaporated within 20 minutes.

Of course, the markets recovered, but in some sense, as many solid stocks turned into penny stocks within minutes, it also changed the ownership structure of companies within just a few minutes. That is really a completely new dimension happening when we are building on these fully automated systems, and those social systems can show a breakdown of coordination, tragedies of the commons, crime or cyber war, all these kinds of things will happen if we don’t design them right.

We really need to understand those systems, not just their components. It’s not good enough to have wonderful gadgets like smartphones and computers, each of them working fine in isolation. Their interaction is creating a completely new world, and it is very important to recognize that it’s not just a gradual change of our world; there is a sudden transition in the behavior of those systems, as the coupling strength exceeds a certain threshold.

Traffic flow in a circle

I’d like to demonstrate that for a system that you can easily imagine: traffic flow in a circle. Now, if the density is high enough, then the following will happen: after some time, although every driver is trying hard to go at a reasonable speed, cars will be stopped by a so-called ‘phantom traffic jam.’ That means smooth traffic flow will break down, no matter how hard the drivers try to maintain speed. The question is, why is this happening? If you asked drivers, they would say, “hey, there was a stupid driver in front of me who didn’t know how to drive!” Everybody would say that. But it turns out it’s a systemic instability that is creating this problem.

That means a small variation in the speed is amplified over time, and the next driver has to brake a little bit harder in order to compensate for a delayed reaction. That creates a chain reaction among drivers, which finally stops traffic flow. These kinds of cascading effects are all over the place in the network systems that we have created, like power grids, for example, or our financial markets. It’s not always as harmless as in traffic jams. We’re just losing time in traffic jams, so people could say, okay, it’s not a very serious problem. But think about crowds: when the density of the crowd passes a certain threshold, what will happen is a crowd disaster. That means people will die, although nobody wants to harm anybody else. Things will just go out of control, even though there might be hundreds or thousands of policemen or security forces trying to prevent these things from happening.
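The instability described here can be reproduced with a toy car-following simulation on a ring road, in the spirit of the “optimal velocity” class of models. The model and all parameter values below are illustrative assumptions, not taken from Helbing’s own work:

```python
# Toy car-following model on a ring road. Every driver follows the same
# rule: relax toward a desired speed that grows with the gap ahead.
# Above a critical density, a tiny perturbation grows into a
# stop-and-go wave -- the "phantom traffic jam".

N = 50             # number of cars
L = 200.0          # length of the ring road
dt = 0.1           # time step
sensitivity = 1.0  # how quickly drivers adapt toward their desired speed

def desired_speed(gap):
    # Drivers want to go faster when the gap ahead is larger, up to a cap.
    return max(0.0, min(5.0, gap - 2.0))

# Start almost uniformly spaced, with one car slightly displaced.
pos = [i * L / N for i in range(N)]
pos[0] += 0.5
vel = [desired_speed(L / N)] * N

for _ in range(5000):
    gaps = [(pos[(i + 1) % N] - pos[i]) % L for i in range(N)]
    acc = [sensitivity * (desired_speed(g) - v) for g, v in zip(gaps, vel)]
    vel = [max(0.0, v + a * dt) for v, a in zip(vel, acc)]
    pos = [(p + v * dt) % L for p, v in zip(pos, vel)]

# Although every driver obeys the same rule, the small perturbation is
# amplified: some cars end up much slower than others, and nobody
# individually "caused" the jam.
```

With these parameters the uniform flow is linearly unstable, so the final speeds spread out between near-stopped and free-flowing cars; raising `sensitivity` enough restores stable, uniform flow.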

This is really a surprising behavior of these kinds of strongly-networked systems. The question is, what implications does that have for other network systems that we have created, such as the financial system? There is evidence that interconnecting every bank with every other bank has destabilized the system. That means there is a systemic instability in place that makes it very hard, or even impossible, to control. We see that the big players, and also regulators, have great difficulty getting control of these systems.

That tells us that we need to change our perspective regarding these systems. Those complex systems are no longer characterized by the properties of their components, but by the outcome of the interactions between those components. As a result of those interactions, self-organization is going on in these systems. New emergent properties come up. They can be very surprising, actually, and that means we cannot understand those systems anymore based on what we see, which is the components.

We need to have new instruments and tools to understand these kinds of systems. Our intuition will not work here. And that is what we want to create: we want to come up with a new information platform for everybody that is bringing together big data with exa-scale computing, with people, and with crowd sourcing, basically connecting the intelligence of the brains of the world.

One component that is going to measure the state of the world is called the Planetary Nervous System. That will measure not just the physical state of the world and the environmental situation, but it is also very important actually that we learn how to measure social capital, such as trust and solidarity and punctuality and these kinds of things, because this is actually very important for economic value generation, but also for social well-being.

Properties like social capital, like trust, result from social network interactions. We’ve seen that one of the biggest problems of the financial crisis was this evaporation of trust. It has burned tens of trillions of dollars. If we learned how to stabilize trust, or build trust, that would be worth a lot of money, really. Today, however, we’re not considering the value of social capital. It can happen that we destroy it or exploit it, just as we’ve exploited and destroyed our environment. If we learn how much social capital is worth, we will start to protect it. We’ll also take it into account in our insurance policies, because today, no insurance takes into account the value of social capital. It’s the material damage that we take into account, but not the social capital. That means, in some sense, we’re underinsured. We’re taking bigger risks than we should.

This is something that we want to learn: how to quantify the foundations of society, to quantify the social footprint. That means quantifying the implications of our decisions and actions.

The second component, the Living Earth Simulator will be very important here, because that will look at what-if scenarios. It will take those big data generated by the Planetary Nervous System and allow us to look at different scenarios, to explore the various options that we have, and the potential side effects or cascading effects, and unexpected behaviors, because those interdependencies make our global systems really hard to understand. In many cases, we just overlook what would happen if we fix a problem over here: It might have unwanted side effects; in many cases, that is happening in other parts of our world.

We are using supercomputers today in all areas of our development. If we are developing a car, a plane or medical tracks or so, supercomputers are being used, also in the financial world. But we don’t have a kind of political or business flight simulator that helps us to explore different opportunities. I think this is what we can create as our understanding of society progresses. We now have much better ideas of how social coordination comes about, and of the preconditions for cooperation. What are the conditions that create conflict, or crime, or war, or epidemic spreading, in the good and the bad sense?

We’re using, of course, viral marketing today in order to increase the success of our products. But at the same time, also we are suffering from a quick spreading of emerging diseases, or of computer viruses, and Trojan horses, and so on. We need to understand these kinds of phenomena, and with the data and the computer power that is coming up, it becomes within reach to actually get a much better picture of these things.

The third component will be the Global Participatory Platform [pdf]. That basically makes those other tools available for everybody: for business leaders, for political decision-makers, and for citizens. We want to create an open data and modeling platform that creates a new information ecosystem that allows you to create new businesses, to come up with large-scale cooperation much more easily, and to lower the barriers for social, political and economic participation.

So these are the three big elements. We’ll furthermore  build exploratories of society, of the economy and environment and technology, in order to be able to anticipate possible crises, but also to see opportunities that are coming up. Those exploratories will bring these three elements together. That means the measurement component, the computer simulation component, and the participation, the interactiveness.

In some sense, we’re going to create virtual worlds that may look like our real world, copies of our world that allow us to explore policies in advance, or certain kinds of planning in advance. Just to make it a little bit more concrete, we could, for example, check out a new airport or a new city quarter before it’s being built. Today we have these architectural plans, and competitions, and then the most beautiful design will win. But then, in practice, it can happen that it doesn’t work so well. People have to stand in queues, or obstruct each other. Many things may not work out as the architect imagined.

What if we basically populated these architectural plans with real people? They could check it out, live there for some months and see how much they like it. Maybe even change the design. That means the people that would use these facilities and would live in these new quarters of the city could actually participate in the design of the city. In the same sense, you can scale that up. Just imagine Google Earth or Google Street View filled with people, something like a serious kind of Second Life. Then we would have not just one history; we could check out many possible futures by actually trying out different financial architectures, or different decision rules, or different intellectual property rights, and see what happens.

We could have even different virtual planets, with different laws and different cultures and different kinds of societies. And you could choose the planet that you like most. So in some sense, now a new age is opening up with almost unlimited resources. We’re, of course, still living in a material world, in which we have a lot of restrictions, because resources are limited. They’re scarce and there’s a lot of competition for these scarce resources. But information can be multiplied as much as you like. Of course, there is some cost, and also some energy needed for that, but it’s relatively low cost, actually. So we can create really almost infinite new possibilities for creativity, for productivity, for interaction. And it is extremely interesting that we have a completely new world coming up here, absolutely new opportunities that need to be checked out.

But now the question is: how will it all work? Or how would you make it work? Because the information systems that we have created are even more complex than our financial system. We know the financial system is extremely difficult to regulate and to control. How would you want to control an information system of this complexity? I think that cannot be done top-down. We are seeing now a trend that complex systems are run in a more and more decentralized way. We’re learning somehow to use self-organization principles in order to run these kinds of systems. We have seen that in the Internet, and we are seeing it for smart grids, but also for traffic control.

I have been working myself on these new ways of self-control. It’s very interesting. Classically, one has tried to optimize traffic flow. It’s so demanding that even our fastest supercomputers can’t do that in a strict sense, in real time. That means one needs to make simplifications. But in principle, what one is trying to do is to impose an optimal traffic light control top-down on the city. The supercomputer is supposed to know what is best for all the cars, and that is imposed on the system.

We have developed a different approach where we said: given that there is a large degree of variability in the system, the most important aspect is to have a flexible adaptation to the actual traffic conditions. We came up with a system where traffic flows control the traffic lights. It turns out this makes much better use of scarce resources, such as space and time. It works better for cars, it works better for public transport and for pedestrians and bikers, and it’s good for the environment as well.                 
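A minimal sketch of the idea that traffic flows control the traffic lights: each intersection simply serves whichever approach currently exerts the most “pressure”. The pressure rule and its weighting here are hypothetical illustrations, not Helbing’s actual controller:

```python
# Hypothetical self-organized traffic light: instead of a fixed top-down
# schedule, the intersection gives green to the approach whose waiting
# vehicles exert the largest pressure (queue length weighted by how long
# the approach has waited). The formula is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Approach:
    queue: int = 0       # vehicles currently waiting on this approach
    waited: float = 0.0  # seconds since this approach last had green

def choose_green(approaches):
    # Pressure grows with queue length and with accumulated waiting time,
    # so even a short queue eventually gets served (no starvation).
    pressures = [a.queue * (1.0 + a.waited) for a in approaches]
    return pressures.index(max(pressures))

north = Approach(queue=3, waited=10.0)
east = Approach(queue=8, waited=2.0)
# Pressures: 3 * (1 + 10) = 33 vs 8 * (1 + 2) = 24,
# so the long-waiting smaller queue wins this round.
green = choose_green([north, east])
```

The point of such a rule is exactly the flexible, local adaptation described above: the schedule emerges from the traffic itself rather than being computed centrally for the whole city.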

The age of social innovation

There’s a new kind of socio-inspired technology coming up, now. Society has many wonderful self-organization mechanisms that we can learn from, such as trust, reputation, culture. If we can learn how to implement that in our technological system, that is worth a lot of money; billions of dollars, actually. We think this is the next step after bio-inspired technology.

The next big step is to focus on society. We’ve had an age of physics; we’re now in an age of biology. I think we are entering the age of social innovation as we learn to make sense of this even bigger complexity of society. It’s like a new continent to discover. It’s really fascinating what now becomes understandable with the availability of Big Data about human activity patterns, and it will open a door to a new future.

What will be very important in order to make sense of the complexity of our information society is to overcome the disciplinary silos of science; to think out of the box. Classically we had social sciences, we had economics, we had physics and biology and ecology, and computer science and so on. Now, our project is trying to bring those different fields together, because we’re deeply convinced that without this integration of different scientific perspectives, we cannot anymore make sense of these hyper-connected systems that we have created.                 

For example, computer science requires complexity science and social science to understand those systems that have been created and will be created. Why is this? Because the dense networking and the complex interaction between the components create self-organization and emergent phenomena in those systems. The flash crash is just one example that shows that unexpected things can happen. We know that from many systems.

Complexity theory is very important here, but also social science. And why is that? Because the components of these information communication systems are becoming more and more human-like. They’re communicating with each other. They’re making a picture of the outside world. They’re projecting expectations into the future, and they are taking autonomous decisions. That means if those computers interact with each other, it’s creating an artificial social system in some sense.                 

In the same way, social science will need complexity science and computer science. Social science needs the data that computer science and information communication technology can provide. Now, and even more in the future, those data traces about human activities allow us eventually to detect patterns and kind of laws of human behavior. It will be only possible through the collaboration with computer science to get those data, and to make sense of what is happening actually in society. I don’t need to mention that obviously there are complex dynamics going on in society; that means complexity science is needed for social science as well.

In the same sense, we could say complexity science needs social science and computer science to become practical. To go a step beyond talking about butterfly effects and chaos and turbulence. And to make sure that the thinking of complexity science will pervade our thinking in the natural, engineering and social sciences and allow us to understand the real problems of our world. That is kind of the essence: that we need to bring these different scientific fields together. We have actually succeeded in building up these integrated communities in many countries all over the world, ready to go, as soon as money becomes available for that.

Big Data is not a solution per se. Even the most powerful machine learning algorithm will not be sufficient to make sense of our world, to understand the principles according to which our world is working. This is important to recognize. The great challenge is to marry data with theories, with models. Only then will we be able to make sense of the useful bits of data. It’s like finding a needle in a haystack: the more data you have, the more difficult it may be, to a certain extent, to find this needle. And there is this danger of over-fitting, of being distracted from important details. We are certainly already in an age where we’re flooded with information, and our attention cannot actually process all that information. That means there is a danger that this undermines our wisdom, if our attention is attracted by the wrong details of information. So we are confronted with the problem of finding the right institutions and tools and instruments for decision-making.

The Living Earth Simulator will basically take the data that is gathered by the Internet, by search requests, and created by sensor networks, and feed it into big computer simulations that are based on models of social and economic and technological behavior. In this way, we’ll be able to look at what-if scenarios. We hope to get a better understanding, for example, of financial systems, and some answers to controversial questions such as: how much leverage effect is good? Under what conditions is ‘naked short-selling’ beneficial? When does it destabilize markets? To what extent is high frequency trading good, or can it also have side effects? All these kinds of questions, which are difficult to answer. Or how to deal best with the situation in Europe, where we have trouble, obviously, in Greece, but also kind of contagious effects on other countries and on the rest of the financial system. It would be very good to have the models and the data that allow us actually to simulate these kinds of scenarios and to take better-informed decisions. (…)

The idea is to have an open platform to create a data and model commons that everybody can contribute to, so people could upload data and models, and others could use that. People would also judge the quality of the data and models and rate them according to their criteria. And we also point out the criteria according to which they’re doing the rating. But in principle, everybody can contribute and everybody can use it. (…)                            

We also have much better theories that allow us to make sense of those data. We’re entering into an age where we can understand society and the economy much better, namely as complex self-organizing systems.

It will be important to guide us into the future, because we are creating very powerful systems. Information society will transform our society fundamentally, and we shouldn’t just let it happen. We want to understand how that will change our society, and what the different paths are that our society may take, and decide on the one that we want it to take. For that, we need to have a much better understanding.

Now a lot of social activity data are becoming available through Facebook and Twitter and Google search requests and so on. This is, of course, a huge opportunity for business. Businesses are talking about the new oil, about personal data as a new asset class. There’s something like a gold rush going on. That, of course, also has huge opportunities for science; eventually we can make sense of complex systems such as our society. There are different perspectives on this. They range from some people who think that information communication technologies will eventually create a God’s-eye view: systems that make sense of all human activities, and the interactions of people, while others are afraid of a Big Brother emerging.

The question is how to handle that situation. Some people say we don’t need privacy in society; society is undergoing a transformation, and privacy is not needed anymore. I don’t share this point of view, as a social scientist, because public and private are two sides of the same coin; one cannot exist without the other. It is very important, for a society to work, to have social diversity. Today, we have learned to appreciate biodiversity, and in the same way we need to think about social diversity, because it’s a motor of innovation. It’s also an important factor for societal resilience. The question now is how all those data that we are creating, and also recommender systems and personalized services, are going to impact people’s decision-making behavior, and society overall.

This is what we need to look at now. How is people’s behavior changing through these kinds of data? How do people change their behavior when they feel they’re being observed? Europe is quite sensitive about privacy. The project we are working on is actually trying to find a balance between the interests in Big Data of companies, governments and individuals. Basically we want to develop technologies that allow us to find this balance, to make sure that all three perspectives are actually taken into account. That you can do big business, but at the same time the individual’s privacy is respected. That individuals have more control over their own data, know what is happening with them, and have influence on what is happening with them. (…)

In some sense, we want to create a new data and model commons, a new kind of language, a new public good that allows people to do new things. (…)

My feeling is that actually business will be made on top of this sea of data that’s being created. At the moment data is kind of the valuable resource, right? But in the future, it will probably be a cheap resource, or even a free resource to a certain extent, if we learn how to deal with openness of data. The expensive thing will be what we do with the data. That means the algorithms, the models, and theories that allow us to make sense of the data.”

Dirk Helbing, physicist, Professor of Sociology, in particular of Modeling and Simulation, at ETH Zurich – Swiss Federal Institute of Technology, A New Kind Of Socio-inspired Technology, Edge Conversation, June 19, 2012. (Illustration: WSF)

See also:

☞ Dirk Helbing, New science and technology to understand and manage our complex world in a more sustainable and resilient way (pdf) (presentation), ETH Zurich
Why does nature so consistently organize itself into hierarchies?
Living Cells Show How to Fix the Financial System
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Networks tag on Lapidarium notes

Jun
3rd
Sun
permalink

Self as Symbol. The loopy nature of consciousness trips up scientists studying themselves

              
M.C. Escher’s “Drawing Hands”

"The consciousness problem remains popular on lists of problems that might never be solved.

Perhaps that’s because the consciousness problem is inherently similar to another famous problem that actually has been proved unsolvable: finding a self-consistent set of axioms for deducing all of mathematics. As the Austrian logician Kurt Gödel proved eight decades ago, no such axiomatic system is possible; any system as complicated as arithmetic contains true statements that cannot be proved within the system.

Gödel’s proof emerged from deep insights into the self-referential nature of mathematical statements. He showed how a system referring to itself creates paradoxes that cannot be logically resolved — and so certain questions cannot in principle be answered. Consciousness, in a way, is in the same logical boat. At its core, consciousness is self-referential awareness, the self’s sense of its own existence. It is consciousness itself that is trying to explain consciousness.

Self-reference, feedback loops, paradoxes and Gödel’s proof all play central roles in the view of consciousness articulated by Douglas Hofstadter in his 2007 book I Am a Strange Loop. Hofstadter is (among other things) a computer scientist, and he views consciousness through lenses unfamiliar to most neuroscientists. In his eyes, it’s not so bizarre to compare math and numbers to the mind and consciousness. Math is, after all, deeply concerned with logic and reason — the stuff of thought. Mathematical paradoxes, Hofstadter points out, open up “profound questions concerning the nature of reasoning — and thus concerning the elusive nature of thinking — and thus concerning the mysterious nature of the human mind itself.”

Enter the loop

In particular, Hofstadter seizes on Gödel’s insight that a mathematical formula — a statement about a number — can itself be represented by a number. So you can take the number describing a formula and insert that number into the formula, which then becomes a statement about itself. Such a self-referential capability introduces a certain “loopiness” into mathematics, Hofstadter notes, something like the famous Escher print of a right hand drawing a left hand, which in turn is drawing the right hand. This “strange loopiness” in math suggested to Hofstadter that something similar is going on in human thought.
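Gödel’s trick of encoding a formula as a number, and then feeding that number back into the formula, can be mimicked in a few lines. The byte-based encoding below is a simplified stand-in for Gödel’s actual prime-power scheme:

```python
# Toy Goedel numbering: a formula (here just a string) is encoded as an
# integer, and that integer can then be substituted back into the
# formula's free variable -- making the formula talk about itself.

def godel_number(formula):
    # Encode the formula's bytes as one big integer (a stand-in for
    # Goedel's prime-power encoding of symbol sequences).
    return int.from_bytes(formula.encode(), "big")

def decode(n):
    # Recover the original formula from its number.
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode()

# A "formula" with a free variable x: a statement about a number.
formula = "x is the Goedel number of a formula"

n = godel_number(formula)
assert decode(n) == formula  # the encoding is reversible

# Substitute the formula's own number for its free variable:
# the resulting sentence is a statement about itself.
self_referential = formula.replace("x", str(n), 1)
```

The loop Hofstadter points to lives in that last line: the statement’s subject is the number that encodes the statement.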

So when he titled his book “I Am a Strange Loop,” Hofstadter didn’t mean that he was personally loopy, but that the concept of an individual — a persistent identity, an “I,” that accompanies what people refer to as consciousness — is a loop of a certain sort. It’s a feedback loop, like the circuit that turns a whisper into an ear-piercing screech when the microphone whispered into is too close to the loudspeaker emitting the sound.
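The microphone-and-loudspeaker analogy is, at its simplest, repeated multiplication by a gain. This tiny sketch (with made-up numbers) shows how a whisper either explodes into a screech or dies out, depending on whether the loop gain exceeds 1:

```python
# Minimal model of an acoustic feedback loop: each pass through the
# microphone -> amplifier -> loudspeaker circuit multiplies the signal
# amplitude by a fixed gain. The specific values are illustrative.

def feedback(signal, gain, passes):
    for _ in range(passes):
        signal *= gain  # one trip around the loop
    return signal

whisper = 0.01
screech = feedback(whisper, gain=1.5, passes=30)  # gain > 1: amplitude grows every pass
faded = feedback(whisper, gain=0.5, passes=30)    # gain < 1: amplitude decays toward silence
```

A strange loop, in Hofstadter’s sense, differs from this ordinary one precisely because it doesn’t just amplify a signal; it assigns symbols along the way.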

But consciousness is more than just an ordinary feedback loop. It’s a strange loop, which Hofstadter describes as a loop capable of perceiving patterns in its environment and assigning common symbolic meanings to sufficiently similar patterns. An acoustic feedback loop generates no symbols, just noise. A human brain, though, can assign symbols to patterns. While patterns of dots on a TV screen are just dots to a mosquito, to a person, the same dots evoke symbols, such as football players, talk show hosts or NCIS agents. Floods of raw sensory data trigger perceptions that fall into categories designated by “symbols that stand for abstract regularities in the world,” Hofstadter asserts. Human brains create vast repertoires of these symbols, conferring the “power to represent phenomena of unlimited complexity and thus to twist back and to engulf themselves via a strange loop.”

Consciousness itself occurs when a system with such ability creates a higher-level symbol, a symbol for the ability to create symbols. That symbol is the self. The I. Consciousness. “You and I are mirages that perceive themselves,” Hofstadter writes.

This self-generated symbol of the self operates only on the level of symbols. It has no access to the workings of nerve cells and neurotransmitters, the microscopic electrochemical machinery of neurobiological life. The symbols that consciousness contemplates don’t look much like the real thing, the way a map of Texas conveys nothing of the grass and dirt and asphalt and bricks that cover the physical territory.

And just like a map of Texas remains remarkably stable over many decades — it doesn’t change with each new pothole in a Dallas street — human self-identity remains stable over a lifetime, despite constant changes on the micro level of proteins and cells. As an individual grows, matures, changes in many minute ways, the conscious self’s identity remains intact, just as Texas remains Texas even as new skyscrapers rise in the cities, farms grow different crops and the Red River sometimes shifts the boundary with Oklahoma a bit.

If consciousness were merely a map, a convenient shortcut symbol for a complex mess of neurobiological signaling, perhaps it wouldn’t be so hard to figure out. But its mysteries multiply because the symbol is generated by the thing doing the symbolizing. It’s like Gödel’s numbers that refer to formulas that represent truths about numbers; this self-referentialism creates unanswerable questions, unsolvable problems.

A typical example of such a Gödelian paradox is the following sentence: This sentence cannot be true.

Is that sentence true? Obviously not, because it says it isn’t true. But wait — then it is true. Except that it can’t be. Self-referential sentences seem to have it both ways — or neither way.
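The paradox can be mimicked, loosely, in executable form (an illustrative sketch of my own, not something from Siegfried’s article): define a claim whose truth value is the negation of itself, and evaluation never settles on an answer.

```python
def liar():
    """'This sentence cannot be true' as code: the value of the claim
    is defined as the negation of its own value, so evaluation chases
    itself forever instead of settling on True or False."""
    return not liar()

try:
    liar()
except RecursionError:
    # Python gives up after ~1000 self-references; the logic itself never would.
    print("no stable truth value")
```

The interpreter’s stack limit is just a practical stand-in for the genuine logical regress: there is no base case, because the sentence is its own subject.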

And so perceptual systems able to symbolize themselves — self-referential minds — can’t be explained just by understanding the parts that compose them. Simply describing how electric charges travel along nerve cells, how small molecules jump from one cell to another, how such signaling sends messages from one part of the brain to another — none of that explains consciousness any more than knowing the English alphabet letter by letter (and even the rules of grammar) will tell you the meaning of Shakespeare’s poetry.

Hofstadter does not contend, of course, that all the biochemistry and cellular communication is irrelevant. It provides the machinery for perceiving and symbolizing that makes the strange loop of consciousness possible. It’s just that consciousness does not itself deal with molecules and cells; it copes with thoughts and emotions, hopes and fears, ideas and desires. Just as numbers can represent the complexities of all of mathematics (including numbers), a brain can represent the complexities of experience (including the brain itself). Gödel’s proof showed that math is “incomplete”; it contains truths that can’t be proven. And consciousness is a truth of a sort that can’t be comprehended within a system of molecules and cells alone.

That doesn’t mean that consciousness can never be understood — Gödel’s work did not undermine human understanding of mathematics, it enriched it. And so the realization that consciousness is self-referential could also usher in a deeper understanding of what the word means — what it symbolizes.

Information handler

Viewed as a symbol, consciousness is very much like many of the other grand ideas of science. An atom is not so much a thing as an idea, a symbol for matter’s ultimate constituents, and the modern physical understanding of atoms bears virtually no resemblance to the original conception in the minds of the ancient Greeks who named them. Even Francis Crick’s gene made from DNA turned out to be much more elusive than the “unit of heredity” imagined by Gregor Mendel in the 19th century. The later coinage of the word gene to describe such units long remained a symbol; early 20th century experiments allowed geneticists to deduce a lot about genes, but nobody really had a clue what a gene was.

“In a sense people were just as vague about what genes were in the 1920s as they are now about consciousness,” Crick said in 1998. “It was exactly the same. The more professional people in the field, which was biochemistry at that time, thought that it was a problem that was too early to tackle.”

It turned out that with genes, their physical implementation didn’t really matter as much as the information storage and processing that genes engaged in. DNA is in essence a map, containing codes allowing one set of molecules to be transcribed into others necessary for life. It’s a lot easier to make a million copies of a map of Texas than to make a million Texases; DNA’s genetic mapping power is the secret that made the proliferation of life on Earth possible. Similarly, consciousness is deeply involved in representing information (with symbols) and putting that information together to make sense of the world. It’s the brain’s information processing powers that allow the mind to symbolize itself.

Koch believes that focusing on information could sharpen science’s understanding of consciousness. A brain’s ability to find patterns in influxes of sensory data, to send signals back and forth to integrate all that data into a coherent picture of reality and to trigger appropriate responses all seem to be processes that could be quantified and perhaps even explained with the math that describes how information works.

“Ultimately I think the key thing that matters is information,” Koch says. “You have these causal interactions and they can be quantified using information theory. Somehow out of that consciousness has to arrive.” An inevitable consequence of this point of view is that consciousness doesn’t care what kind of information processors are doing all its jobs — whether nerve cells or transistors.
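As a minimal taste of the kind of quantification Koch invokes (my illustration, not his actual measure, which is far richer), Shannon entropy assigns a number of bits to any stream of states:

```python
import math
from collections import Counter

def shannon_entropy(states):
    """H = -sum(p * log2(p)): bits of information per observation in a
    stream of states -- the basic currency of information theory."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("abab"))  # 1.0 -- two equally likely states carry one bit each
```

A perfectly predictable stream (all one state) carries zero bits; richer, integrated signaling carries more, which is the direction theories like Koch’s push in.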

“It’s not the stuff out of which your brain is made,” Koch says. “It’s what that stuff represents that’s conscious, and that tells us that lots of other systems could be conscious too.”

Perhaps, in the end, it will be the ability to create unmistakable features of consciousness in some stuff other than a biological brain that will signal success in the quest for an explanation. But it’s doubtful that experimentally exposing consciousness as not exclusively human will displace humankind’s belief in its own primacy. People will probably always believe that it can only be the strange loop of human consciousness that makes the world go ’round.

“We … draw conceptual boundaries around entities that we easily perceive, and in so doing we carve out what seems to us to be reality,” Hofstadter wrote. “The ‘I’ we create for each of us is a quintessential example of such a perceived or invented reality, and it does such a good job of explaining our behavior that it becomes the hub around which the rest of the world seems to rotate.”

Tom Siegfried, American journalist, author, Self as Symbol, Science News, Feb 11, 2012.

See also:

☞ Laura Sanders, Ph.D. in Molecular Biology from the University of Southern California in Los Angeles, Emblems of Awareness, Science News, Feb 11, 2012.

Degrees of thought

(Credit: Stanford University)

"Awareness typically tracks with wakefulness — especially in normal states of consciousness (bold). People in coma or under general anesthesia score low on both measures, appearing asleep with no signs of awareness. Sometimes, wakefulness and awareness become uncoupled, such as among people in a persistent vegetative state. In this case, a person seems awake and is sometimes able to move but is unaware of the surroundings."  (…)

“Messages constantly zing around the brain in complex patterns, as if trillions of tiny balls were simultaneously dropped into a pinball machine, each with a prescribed, mission-critical path. This constant flow of information might be what creates consciousness — and interruptions might destroy it. (…)

“If you knock on a wooden table or a bucket full of nothing, you get different noises,” Massimini says. “If you knock on the brain that is healthy and conscious, you get a very complex noise.” (…)

In the same way that “life” evades a single, clear definition (growth, reproduction or a healthy metabolism could all apply), consciousness might turn out to be a collection of remarkable phenomena, Seth says. “If we can explain different aspects of consciousness, then my hope is that it will start to seem slightly less mysterious that there is consciousness at all in the universe.” (…)

Recipe for consciousness

Somehow a sense of self emerges from the many interactions of nerve cells and neurotransmitters in the brain — but a single source behind the phenomenon remains elusive.

            

Illustration: Nicolle Rager Fuller

1. Parietal cortex: Brain activity in the parietal cortex is diminished by anesthetics, when people fall into a deep sleep and in people in a vegetative state or coma. There is some evidence suggesting that the parietal cortex is where first-person perspective is generated.

2. Frontal cortex: Some researchers argue that parts of the frontal cortex (along with connections to the parietal cortex) are required for consciousness. But other scientists point to a few studies in which people with damaged frontal areas retain consciousness.

3. Claustrum: An enigmatic, thin sheet of neural tissue called the claustrum has connections with many other regions. Though the structure has been largely ignored by modern scientists, Francis Crick became keenly interested in the claustrum’s role in consciousness just before his death in 2004.

4. Thalamus: As one of the brain’s busiest hubs of activity, the thalamus is believed by many to have an important role in consciousness. Damage to even a small spot in the thalamus can lead to consciousness disorders.

5. Reticular activating system: Damage to a particular group of nerve cell clusters, called the reticular activating system and found in the brain stem, can render a person comatose.”

☞ Bruce Hood, The Self Illusion: How the Brain Creates Identity
Theories of consciousness. Make Up Your Own Mind (visualization)
Malcolm MacIver on why did consciousness evolve, and how can we modify it?
Consciousness tag on Lapidarium

May
20th
Sun
permalink

The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks

    
Image: Library of Congress

“Knowledge is not a result merely of filtering or algorithms. It results from a far more complex process that is social, goal-driven, contextual, and culturally-bound. We get to knowledge — especially “actionable” knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles. Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.”

David Weinberger, The Problem with the Data-Information-Knowledge-Wisdom Hierarchy, Harvard Business Review, Feb 2, 2010.

"The digitization of 21st-century media, Weinberger argues, leads not to the creation of a “global village” but rather to a new understanding of what knowledge is, to a change in the basic epistemology governing the universe. And this McLuhanesque transformation, in turn, reveals the general truth of the Heideggerian vision. Knowledge qua knowledge, Weinberger claims, is increasingly enmeshed in webs of discourse: culture-dependent and theory-free.

The causal force lying behind this massive sea change is, of course, the internet. Google search results — “9,560,000 results for ‘Heidegger’ in .71 seconds” — taunt you with the realization that there are still another 950,000-odd pages of results to get through before you reach the end. The existence of hyperlinks is enough to convince even the most stubborn positivist that there is always another side to the story. And on the web, fringe believers can always find each other and marinate in their own illusions. The “web world” is too big to ever know. There is always another link. In the era of the Internet, Weinberger argues, facts are not bricks. They are networks. (…)

The most important aspect of Heidegger’s thought for our purposes is his understanding that human beings (or rather “Dasein,” “being-in-the-world”) are always thrown into a particular context, existing within already existing language structures and pre-determined meanings. In other words, the world is like the web, and we, Dasein, live inside the links. (…)

If our starting point is that all knowledge is networked, and always has been, then we are in a far better position to start talking about what makes today’s epistemological infrastructure different from the infrastructure in 1983. But we are also in a position to ask: if all knowledge was networked knowledge, even in 1983, then how did we not behave as if it was so? How did humanity carry on? Why did civilization not collapse into a morass of post-modern chaos? Weinberger’s answer is, once again, McLuhanesque. It was the medium in which knowledge was contained that created the difference. Stable borders around knowledge were built by books.

I would posit a different answer: if knowledge has always been networked knowledge, then facts have never had stable containers. Most of the time, though, we more or less act as if they do. Within the philosophical subfield known as Actor-Network Theory (ANT) this “acting-as-if-stability-existed” is referred to as “black boxing.” One of the black boxes around knowledge might very well be the book. But black boxes can also include algorithms, census bureaus, libraries, laboratories, and news rooms. Black boxes emerge out of actually-existing knowledge networks, stabilize for a time, and unravel, and our goal as thinkers and scholars ought to be understanding how these nodes emerge and disappear. In other words, understanding changes to knowledge in this way leaves us far more sensitive to the operations of power than does the notoriously power-free perspective of Marshall McLuhan. (…)

Why don’t I care that the Google results page goes on towards infinity? If we avoid Marshall McLuhan’s easy answers to these complex questions, and retain the core of Heidegger’s brilliant insights while also adding a hefty dose of ontology to his largely immaterial philosophy, we might begin to understand the real operations of digital knowledge/power in a networked age.

Weinberger, however, does not care about power, and more or less admits this himself in a brilliant 2008 essay on the distinction between digital realists, utopians, and dystopians. Digital utopians, a group in which he includes himself, “point to the ways in which the Web has changed some of the basic assumptions about how we live together, removing old obstacles and enabling shiny new possibilities.” The realists, on the other hand, are rather dull: They argue that “the Web hasn’t had nearly as much effect as the utopians and dystopians proclaim. The Web carries with it certain possibilities and limitations, but (the realists say) not many more than any other major communications medium.” Politically speaking, digital utopianism tantalizes us with the promise of what might be, and pushes us to do better. The political problem with the realist position, Weinberger argues, is that it “is … [a] decision that leans toward supporting the status quo because what-is is more knowable than what might be.”

The realist position, however, is not necessarily a position of quietude. Done well, digital realism can sensitize us to the fact that all networked knowledge systems eventually become brick walls, and that these brick walls are maintained through technological, political, cultural, economic, and organizational forms of power. Our job, as thinkers and teachers, is not to stand back and claim that all the bricks have crumbled. Rather, our job is to understand how the wall gets built, and how we might try to build it differently.”

C.W. Anderson, Ph.D, an assistant professor in the Department of Media Culture at the College of Staten Island (CUNY), researcher at the Columbia University Graduate School of Journalism, The Difference Between Online Knowledge and Truly Open Knowledge, The Atlantic, Feb 3, 2012.

David Weinberger: ‘I think the Net generation is beginning to see knowledge in a way that is closer to the truth about knowledge’

"I think the Net generation is beginning to see knowledge in a way that is closer to the truth about knowledge — a truth we’ve long known but couldn’t instantiate. My generation, and the many generations before mine, have thought about knowledge as being the collected set of trusted content, typically expressed in libraries full of books. Our tradition has taken the trans-generational project of building this Library of Knowledge book by book as our God-given task as humans. Yet, for the coming generation, knowing looks less like capturing truths in books than engaging in never-settled networks of discussion and argument. That social activity — collaborative and contentious, often at the same time — is a more accurate reflection of our condition as imperfect social creatures trying to understand a world that is too big and too complex for even the biggest-headed expert.

This new topology of knowledge reflects the topology of the Net. The Net (and especially the Web) is constructed quite literally out of links, each of which expresses some human interest. If I link to a site, it’s because I think it matters in some way, and I want it to matter that way to you. The result is a World Wide Web with billions of pages and probably trillions of links that is a direct reflection of what matters to us humans, for better or worse. The knowledge networks that live in this new ecosystem share in that property; they are built out of, and reflect, human interest. Like our collective interests, the Web and the knowledge that resides there is at odds and linked in conversation. That’s why the Internet, for all its weirdness, feels so familiar and comfortable to so many of us. And that’s the sense in which I think networked knowledge is more “natural.” (…)

To make a smart room — a knowledge network — you have to have just enough diversity. And it has to be the right type of diversity. Scott Page in The Difference says that a group needs a diversity of perspectives and skill sets if it is going to be smarter than the smartest person in it. It also clearly needs a set of coping skills, norms, and procedures that enable it to deal with diversity productively. (…)

We humans can only see things from a point of view, and we can only understand things by appropriating them into our already-existing context. (…)

In fact, the idea of objectivity arose in response to the limitations of paper, as did so much of our traditional Western idea of knowledge. Paper is a disconnected medium. So, when you write a news story, you have to encapsulate something quite complex in just a relatively small rectangle of print. You know that the reader has no easy way to check what you’re saying, or to explore further on her own; to do so, she’ll have to put down the paper, go to a local library, and start combing through texts that are less current than the newspaper in which your article appears. The reporter was the one mediator of the world the reader would encounter, so the report had to avoid the mediator’s point of view and try to reflect all sides of contentious issues. Objectivity arose to address the disconnected nature of paper.

Our new medium is, of course, wildly connective. Now we can explore beyond the news rectangle just by clicking. There is no longer an imperative to squeeze the world into small, self-contained boxes. Hyperlinks remove the limitations that objectivity was invented to address.

Hyperlinks also enable readers to understand — and thus perhaps discount — the writer’s point of view, which is often a better way of getting past the writer’s prejudices than asking the writer to write as if she or he had none. This, of course, inverts the old model that assumed that if we knew about the journalist’s personal opinions, her or his work would be less credible. Now we often think that the work becomes more credible if the author is straightforward about his or her standpoint. That’s the sense in which transparency is the new objectivity.

There is still value in trying to recognize how one’s own standpoint and assumptions distort one’s vision of the world; emotional and conceptual empathy are of continuing importance because they are how we embody the truth that we share a world with others to whom that world matters differently. But we are coming to accept that we can’t really get a view from nowhere, and if we could, we would have no idea what we’re looking at. (…)

Our new ability to know the world at a scale never before imaginable may not bring us our old type of understanding, but understanding and knowledge are not motivated only by the desire to feel that sudden gasp of insight. The opposite and ancient motive is to feel the breath of awe in the face of the almighty unknowability of our universe. A knowing that recognizes its object is so vast that it outstrips understanding makes us more capable of awe. (…)

Technodeterminism is the claim that technology by itself has predictable, determinant effects on people or culture. (…) We still need to be able to discuss how a technology is affecting a culture in general. Generalizations can be a vehicle of truth, so long as they are understood to be only generally true. (…) The new knowledge continues to find generalities that connect individual instances, but because the new ecosystem is hyperlinked, we can go from the generalities back to the individual cases. And those generalizations are themselves linked into a system of difference and disagreement.”

David Weinberger, Ph.D. from the University of Toronto, American technologist, professional speaker, and commentator, interviewed by Rebecca J. Rosen, What the Internet Means for How We Think About the World, The Atlantic, Jan 5 2012.

See also:

To Know, but Not Understand: David Weinberger on Science and Big Data, The Atlantic, Jan 3, 2012 
When science becomes civic: Connecting Engaged Universities and Learning Communities, University of California, Davis, September 11 - 12, 2001
The Filter Bubble: Eli Pariser on What the Internet Is Hiding From You
A story about the Semantic Web (Web 3.0) (video)
Vannevar Bush on the new relationship between thinking man and the sum of our knowledge (1945)
George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’
The Relativity of Truth - a brief résumé, Lapidarium notes

Apr
25th
Wed
permalink

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism



Waking Life is an American animated film (rotoscoped from live-action footage), directed by Richard Linklater and released in 2001. The entire film was shot on digital video; a team of artists then used computers to draw stylized lines and colors over each frame.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
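Kurzweil’s arithmetic can be checked with a toy model (my sketch of his simplest assumption only, ignoring his second-order claim that the doubling rate itself accelerates): if the rate of progress doubles every decade and equals one year-of-progress per year today, integrating 2^(t/10) over a century lands on the same order of magnitude as his figures.

```python
import math

def progress_years(t0, t1, doubling_period=10.0):
    """Years of progress 'at today's rate' accumulated between calendar
    years t0 and t1 (with t = 0 as today), when the rate of progress
    doubles every `doubling_period` years: the integral of 2**(t/D) dt."""
    D = doubling_period
    return D / math.log(2) * (2 ** (t1 / D) - 2 ** (t0 / D))

print(round(progress_years(0, 100)))   # ~15,000 years for the 21st century
print(round(progress_years(-100, 0)))  # ~14 years for the 20th: progress came late
```

The simple model gives roughly 15,000 years rather than 20,000 because it omits the acceleration of the doubling rate, but it makes the "telescoping" shape of the curve concrete.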

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to agricultural, when you get to scientific revolution and industrial revolution, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo in a way that could be imagined as an enormous instantaneous fulfillment of human, human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution.” Transcript via Brandon Sergent.

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Apr
11th
Wed
permalink

The Cognitive Limit of Organizations. The structure of a society is connected to its total amount of information

The vertical axis of this slide represents the total stock of information in the world. The horizontal axis represents time.

In the early days, life was simple. We did important things like make spears and arrowheads. The amount of knowledge needed to make these items, however, was small enough that a single person could master their production. There was no need for a large division of labor and new knowledge was extremely precious. If you got new knowledge, you did not want to share it. After all, in a world where most knowledge can fit in someone’s head, stealing ideas is easy, and appropriating the value of the ideas you generate is hard.

At some point, however, the amount of knowledge required to make things began to exceed the cognitive limit of a single human being. Things could only be done in teams, and sharing information among team members was required to build these complex items. Organizations were born as our social skills began to compensate for our limited cognitive skills. Society, however, kept on accruing more and more knowledge, and the cognitive limit of organizations, just like that of the spearmaker, was ultimately reached. (…)

Today, however, most products are combinations of knowledge and intellectual property that resides in different organizations. Our world is less and less about the single pieces of intellectual property and more and more about the networks that help connect these pieces. The total stock of information used in these ecosystems exceeds the capacity of single organizations because doubling the size of huge organizations does not double the capacity of that organization to hold knowledge and put it into productive use.
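The sublinearity claim can be made concrete with a toy scaling law (the exponent and numbers here are entirely hypothetical, chosen only to illustrate the point): if an organization’s usable knowledge grows as headcount raised to a power below one, doubling headcount yields well under double the capacity.

```python
def usable_knowledge(headcount, exponent=0.8):
    """Toy sublinear scaling law (hypothetical exponent 0.8): capacity
    grows slower than headcount because coordination costs absorb the
    difference as organizations grow."""
    return headcount ** exponent

gain = usable_knowledge(20000) / usable_knowledge(10000)
print(round(gain, 2))  # doubling headcount multiplies capacity by only ~1.74
```

Any exponent below 1 produces the same qualitative conclusion: past some scale, connecting organizations into networks beats growing any single one.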

In a world in which implementing the next generation of ideas will increasingly require pulling resources from different organizations, barriers to collaboration will be a crucial constraint limiting the development of firms. Agility, context, and a strong network are becoming the survival traits where assets, control, and power used to rule. John Seely Brown refers to this as the “Power of Pull.”

The Cognitive Limit of Organizations, MIT Media Lab, Oct 7, 2011.

Mar
26th
Mon
permalink

Science historian George Dyson: Unravelling the digital code
      
George Dyson (Photo: Wired)

"It was not made for those who sell oil or sardines."

— G. W. Leibniz, ca. 1674, on his calculating machine

A universe of self-replicating code

Digital organisms, while not necessarily any more alive than a phone book, are strings of code that replicate and evolve over time. Digital codes are strings of binary digits — bits. Google is a fantastically large number, so large it is almost beyond comprehension, distributed and replicated across all kinds of hosts. When you click on a link, you are replicating the string of code that it links to. Replication of code sequences isn’t life, any more than replication of nucleotide sequences is, but we know that it sometimes leads to life.
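The simplest concrete instance of a self-replicating string of code is a quine, a program whose output is its own source (a standard illustration, not an example from the interview):

```python
# A minimal Python quine: running it prints exactly its own source code,
# the way a replicating code sequence copies itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Substituting the string into itself (`%r` reproduces the string with its quotes, `%%` becomes a literal `%`) regenerates the whole two-line program, replication with no external copier.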

Q [Kevin Kelly]: Are we in that digital universe right now, as we talk on the phone?

George Dyson: Sure. You’re recording this conversation using a digital recorder — into an empty matrix of addresses on a microchip that is being filled up at 44 kilobytes per second. That address space full of numbers is the digital universe.

Q: How fast is this universe expanding?

G.D.: Like our own universe at the beginning, it’s more exploding than expanding. We’re all so immersed in it that it’s hard to perceive. Last time I checked, the digital universe was expanding at the rate of five trillion bits per second in storage and two trillion transistors per second on the processing side. (…)
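For scale, Dyson’s storage figure converts to more familiar units (straightforward arithmetic on the number he quotes):

```python
# Dyson's figure: the digital universe grows by five trillion bits per second.
bits_per_second = 5e12
seconds_per_year = 365.25 * 24 * 3600            # ~3.156e7 seconds in a year
exabytes_per_year = bits_per_second * seconds_per_year / 8 / 1e18
print(round(exabytes_per_year, 1))  # ~19.7 exabytes of new storage every year
```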

Q: Where is this digital universe heading?

G.D.: This universe is open to the evolution of all kinds of things. It’s cycling faster and faster. Even with Google and YouTube and Facebook, we can’t consume it all. And we aren’t aware what this space is filling up with. From a human perspective, computers are idle 99 per cent of the time. While they’re waiting for us to come up with instructions, computation is happening without us, as computers write instructions for each other. As Turing showed, this space can’t be supervised. As the digital universe expands, so does this wild, undomesticated side.”

— George Dyson interviewed by Kevin Kelly in Science historian George Dyson: Unravelling the digital code, Wired, Mar 5, 2012.

"Just as we later worried about recombinant DNA, what if these things escaped? What would they do to the world? Could this be the end of the world as we know it if these self-replicating numerical creatures got loose?

But, we now live in a world where they did get loose—a world increasingly run by self-replicating strings of code. Everything we love and use today is, in a lot of ways, self-reproducing exactly as Turing, von Neumann, and Barricelli prescribed. It’s a very symbiotic relationship: the same way life found a way to use the self-replicating qualities of these polynucleotide molecules to the great benefit of life as a whole, there’s no reason life won’t use the self-replicating abilities of digital code, and that’s what’s happening. If you look at what people like Craig Venter and the thousand less-known companies are doing, we’re doing exactly that, from the bottom up. (…)

What’s, in a way, missing in today’s world is more biology of the Internet. More people like Nils Barricelli to go out and look at what’s going on, not from a business or legal point of view, but just to observe what’s going on.

Many of these things we read about in the front page of the newspaper every day, about what’s proper or improper, or ethical or unethical, really concern this issue of autonomous self-replicating codes. What happens if you subscribe to a service and then as part of that service, unbeknownst to you, a piece of self-replicating code inhabits your machine, and it goes out and does something else? Who is responsible for that? And we’re in an increasingly gray zone as to where that’s going. (…)

Why is Apple one of the world’s most valuable companies? It’s not only because their machines are so beautifully designed, which is great and wonderful, but because those machines represent a closed numerical system. And they’re making great strides in expanding that system. It’s no longer at all odd to have a Mac laptop. It’s almost the normal thing.

But I’d like to take this to a different level, if I can change the subject… Ten or 20 years ago I was preaching that we should look at digital code as biologists: the Darwin Among the Machines stuff. People thought that was crazy, and now it’s firmly the accepted metaphor for what’s going on. And Kevin Kelly quoted me in Wired, he asked me for my last word on what companies should do about this. And I said, “Well, they should hire more biologists.”

But what we’re missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products. The universe of Apple, the universe of Google, the universe of Facebook: these collectively constitute the digital universe, and we can only see it in human terms, asking what it does for us.

We’re missing a tremendous opportunity. We’re asleep at the switch because it’s not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that’s not just a metaphor for something else. It actually is. It’s a physical reality.

We’re still here at the big bang of this thing, and we’re not studying it enough. Who’s the cosmologist really looking at this in terms of what it might become in 10,000 years? What’s it going to be in 100 years? Here we are at the very beginning and we just may simply not be asking the right questions about what’s going on. Try looking at it from the other side, not from our side as human beings. Scientists are the people who can do that kind of thing. You can look at viruses from the point of view of a virus, not from the point of view of someone getting sick.

Very few people are looking at this digital universe in an objective way. Danny Hillis is one of the few people who is. His comment, made exactly 30 years ago in 1982, was that "memory locations are just wires turned sideways in time". That’s just so profound. That should be engraved on the wall. Because we don’t realize that there is this very different universe that does not have the same physics as our universe. It’s completely different physics. Yet, from the perspective of that universe, there is physics, and we have almost no physicists looking at it, as to what it’s like. And if we want to understand the sort of organisms that would evolve in that totally different universe, you have to understand the physics of the world in which they are in.  It’s like looking for life on another planet. Danny has that perspective. Most people say just, “well, a wire is a wire. It’s not a memory location turned sideways in time.” You have to have that sort of relativistic view of things.

We are still so close to the beginning of this explosion that we are still immersed in the initial fireball. Yet, in that short period of time, much has changed: it was not long ago that to transfer money electronically you had to fill out paper forms on both ends and then wait a day for your money to be transferred. And now, a dozen years or so later, most of the money in the world is moving electronically all the time.

The best example of this is what we call the flash crash of May 6th, two years ago, when suddenly, the whole system started behaving unpredictably. Large amounts of money were lost in milliseconds, and then the money came back, and we quietly (although the SEC held an investigation) swept it under the rug and just said, “well, it recovered. Things are okay.” But nobody knows what happened, or most of us don’t know.

There was a great Dutch documentary—Money and Speed: Inside the Black Box—where they spoke to someone named Eric Scott Hunsader who actually had captured the data on a much finer time scale, and there was all sorts of very interesting stuff going on. But it’s happening so quickly that it’s below what our normal trading programs are able to observe, they just aren’t accounting for those very fast things. And this could be happening all around us—not just in the world of finance. We would not necessarily even perceive it, that there’s a whole world of communication that’s not human communication. It’s machines communicating with machines. And they may be communicating money, or information that has other meaning—but if it is money, we eventually notice it. It’s just the small warm pond sitting there waiting for the spark.

It’s an unbelievably interesting time to be a digital biologist or a digital physicist, or a digital chemist. A good metaphor is chemistry. We’re starting to address code by template, rather than by numerical location—the way biological molecules do.
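The chemistry metaphor can be made concrete. A minimal sketch, with invented data, of the difference between addressing by numerical location and addressing by template:

```python
# Addressing by numerical location: you must know exactly where the data lives.
memory = ["GATTACA", "payroll.db", "cat.jpg", "GGCATCG"]
by_location = memory[3]  # works only if you already know the address

# Addressing by template: you describe the shape of what you want, and
# whatever matches responds, wherever it is stored -- roughly how a
# biological molecule binds anything that fits its template.
def by_template(store, matches):
    return [item for item in store if matches(item)]

dna_like = by_template(memory, lambda s: set(s) <= set("GATC"))
print(by_location)  # GGCATCG
print(dna_like)     # ['GATTACA', 'GGCATCG']
```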

We’re living in a completely different world. The flash crash was an example: you could have gone out for a cup of coffee and missed the whole thing, and come back and your company lost a billion dollars and got back 999 million, while you were taking your lunch break. It just happened so fast, and it spread so quickly.

So, yes, the fear scenario is there, that some malevolent digital virus could bring down the financial system. But on the other hand, the miracle of this flash crash was not that it happened, but that it recovered so quickly. Yet, in those milliseconds, somebody made off with a lot of money. We still don’t know who that was, and maybe we don’t want to know.

The reason we’re here today (surrounded by this expanding digital universe) is because in 1936, or 1935, this oddball 23-year-old undergraduate student, Alan Turing, developed this theoretical framework to understand a problem in mathematical logic, and the way he solved that problem turned out to establish the model for all this computation. And I believe we would have arrived here, sooner or later, without Alan Turing or John von Neumann, but it was Turing who developed the one-dimensional model, and von Neumann who developed the two-dimensional implementation, for this increasingly three-dimensional digital universe in which everything we do is immersed. And so, the next breakthrough in understanding will also, I think, come from some oddball. It won’t be one of our great, known scientists. It’ll be some 22-year-old kid somewhere who makes more sense of this.

But, we’re going back to biology, and of course, it’s impossible not to talk about money, and all these other ways that this impacts our life as human beings. What I was trying to say is that this digital universe really is so different that the physics itself is different. If you want to understand what types of life-like or self-reproducing forms would develop in a universe like that, you actually want to look at the sort of physics and chemistry of how that universe is completely different from ours. An example is how not only its time scale but how time operates is completely different, so that things can be going on in that world in microseconds that suddenly have a real effect on ours.

Again, money is a very good example, because money really is a sort of a gentlemen’s agreement to agree on where the money is at a given time. Banks decide, well, this money is here today and it’s there tomorrow. And when it’s being moved around in microseconds, you can have a collapse, where suddenly you hit the bell and you don’t know where the money is. And then everybody’s saying, “Where’s the money? What happened to it?” And I think that’s what happened. And there are other recent cases where it looks like a huge amount of money just suddenly disappeared, because we lost the common agreement on where it is at an exact point in time. We can’t account for those time periods as accurately as the computers can.

One number that’s interesting, and easy to remember: in the year 1953, there were 53 kilobytes of high-speed memory on planet earth. This is random access high-speed memory. Now you can buy those 53 kilobytes for an immeasurably small amount, a thousandth of one cent or something. If you draw the graph, it’s a very nice, clean graph. That’s sort of Moore’s Law; that it’s doubling. It has a doubling time that’s surprisingly short, and no end in sight, no matter what the technology does. We’re doubling the number of bits in an extraordinarily short time.
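Dyson’s 53-kilobyte graph is easy to redraw. The 1.5-year doubling time below is an assumed round figure for illustration; the interview gives only the 1953 starting point:

```python
# Exponential growth of global high-speed memory from Dyson's 1953 baseline.
# The 1.5-year doubling time is an assumption, not a figure from the interview.
def memory_estimate_kb(year, base_year=1953, base_kb=53, doubling_years=1.5):
    doublings = (year - base_year) / doubling_years
    return base_kb * 2 ** doublings

# 59 years on, the same curve predicts roughly 2**39 times the 1953 figure.
growth_factor = memory_estimate_kb(2012) / memory_estimate_kb(1953)
```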

And we have never seen that. Or I mean, we have seen numbers like that, in epidemics or chain reactions, and there’s no question it’s a very interesting phenomenon. But still, it’s very hard not to just look at it from our point of view. What does it mean to us? What does it mean to my investments? What does it mean to my ability to have all the music I want on my iPhone? That kind of thing. But there’s something else going on. We’re seeing a fraction of one percent of it, and there’s this other 99.99 percent that people just aren’t looking at.

The beginning of this was driven by two problems. The problem of nuclear weapons design, and the problem of code breaking were the two drivers of the dawn of this computational universe. There were others, but those were the main ones.

What’s the driver today? You want one word? It’s advertising. And, you may think advertising is very trivial, and of no real importance, but I think it’s the driver. If you look at what most of these codes are doing, they’re trying to get the audience, trying to deliver the audience. The money is flowing as advertising.

And it is interesting that Samuel Butler imagined all this in 1863, and then in his book Erewhon. And then in 1901, before he died, he wrote a draft for “Erewhon Revisited.” In there, he called out advertising, saying that advertising would be the driving force of these machines evolving and taking over the world. Even then, at the close of the 19th century in England, he saw advertising as the way we would grant power to the machines.

If you had to say what’s the most powerful algorithm set loose on planet earth right now? Originally, yes, it was the Monte Carlo code for doing neutron calculations. Now it’s probably the AdWords algorithm. And the two are related: if you look at the way AdWords works, it is a Monte Carlo process. It’s a sort of statistical sampling of the entire search space, and a monetizing of it, which as we know, is a brilliant piece of work. And that’s not to diminish all the other great codes out there.
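The connection Dyson draws can be illustrated with a toy Monte Carlo sampler: estimate which “ad” monetizes best by statistically sampling simulated impressions. The click-through rates and names below are invented, and this is nothing like the real AdWords system; it only demonstrates the Monte Carlo idea itself:

```python
import random

def estimate_best_ad(true_ctr, trials=10_000, rng=None):
    """Sample the space: show each ad `trials` times and count simulated clicks."""
    rng = rng or random.Random(42)
    clicks = dict.fromkeys(true_ctr, 0)
    for _ in range(trials):
        for ad, ctr in true_ctr.items():
            if rng.random() < ctr:  # one simulated impression
                clicks[ad] += 1
    return max(clicks, key=clicks.get)

best = estimate_best_ad({"ad_a": 0.02, "ad_b": 0.05, "ad_c": 0.01})
print(best)  # with enough samples the estimate converges on "ad_b"
```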

We live in a world where we measure numbers of computers in billions, and numbers of what we call servers, the equivalent of what in the old days would have been called mainframes. Those are in the millions, even hundreds of millions.

Two of the pioneers of this—to single out only two pioneers—were John von Neumann and Alan Turing. If they were here today, Turing would be 100. Von Neumann would be 109. I think they would understand what’s going on immediately—it would take them a few minutes, if not a day, to figure out what was going on. And they both died working on biology, and I think they would be immediately fascinated by the way biological code and digital code are now intertwined. Von Neumann’s consuming passion at the end was self-reproducing automata. And Alan Turing was interested in the question of how molecules could self-organize to produce organisms.

They would be, on the other hand, astonished that we’re still running their machines, that we don’t have different computers. We’re still just running the straight von Neumann/Turing machine with no real modification. So they might not find our computers all that interesting, but they would be diving into the architecture of the Internet, and looking at it.

In both cases, they would be amazed by the direct connection between the code running on computers and the code running in biology—that all these biotech companies are directly reading and writing nucleotide sequences in and out of electronic memory, with almost no human intervention. That’s more or less completely mechanized now, so there’s direct translation, and once you translate to nucleotides, it’s a small step, a difficult step, but, an inevitable step to translate directly to proteins. And that’s Craig Venter’s world, and it’s a very, very different world when we get there.

The question of how and when humans are going to expand into the universe, the space travel question, is, in my view, almost rendered obsolete by this growth of a digitally-coded biology, because those digital organisms—maybe they don’t exist now, but as long as the system keeps going, they’re inevitable—can travel at the speed of light. They can propagate. They’re going to be so immeasurably far ahead that maybe humans will be dragged along with it.

But while our digital footprint is propagating at the speed of light, we’re having very big trouble even getting to the eleven kilometers per second it takes to get into lower earth orbit. The digital world is clearly winning on that front. And that’s for the distant future. But it changes the game of launching things, if you no longer have to launch physical objects, in order to transmit life.”

George Dyson, author and historian of technology whose publications broadly cover the evolution of technology in relation to the physical environment and the direction of society, A universe of self-replicating code, Edge, Mar 26, 2012.

See also:

Jameson Dungan on information and synthetic biology
Vlatko Vedral: Decoding Reality: the universe as quantum information
Rethinking “Out of Africa”: A Conversation with Christopher Stringer (2011)
A Short Course In Synthetic Genomics, The Edge Master Class with George Church & Craig Venter (2009)
Eat Me Before I Eat You! A New Foe For Bad Bugs: A Conversation with Kary Mullis (2010)
Mapping The Neanderthal Genome. A Conversation with Svante Pääbo (2009)
“Engineering Biology”: A Conversation with Drew Endy (2008)
☞ “Life: A Gene-Centric View”: A Conversation in Munich with Craig Venter & Richard Dawkins (2008)
Ants Have Algorithms: A Talk with Ian Couzin (2008)
Life: What A Concept, The Edge Seminar, Freeman Dyson, J. Craig Venter, George Church, Dimitar Sasselov, Seth Lloyd, Robert Shapiro (2007)
Code II J. Doyne Farmer v. Charles Simonyi (1998)
Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Mar
21st
Wed
permalink

Richard Doyle on Creativity, evolution of mind and the rhetorical membrane between humans and an informational universe

              

Q [Jason Silva]: The Jesuit priest and scientist Pierre Teilhard de Chardin spoke of the Noosphere very early on. A profile in WIRED magazine said,

"Teilhard imagined a stage of evolution characterized by a complex membrane of information enveloping the globe and fueled by human consciousness… Teilhard saw the Net coming more than half a century before it arrived. He believed this vast thinking membrane would ultimately coalesce into “the living unity of a single tissue” containing our collective thoughts and experiences." Teilhard wrote, "The living world is constituted by consciousness clothed in flesh and bone."

He argued that the primary vehicle for increasing complexity consciousness among living organisms was the nervous system. The informational wiring of a being, he argued - whether of neurons or electronics - gives birth to consciousness. As the diversification of nervous connections increases, evolution is led toward greater consciousness… thoughts?

Richard Doyle: Yes, he also called this process of the evolution of consciousness the “Omega Point”. The noosphere imagined here relied on a change in our relationship to consciousness as much as on any technological change, and was part of evolution’s epic quest for self-awareness. Here Teilhard is in accord with Julian Huxley (Aldous’ brother, a biologist) and Carl Sagan when they observed that “we are a way for the cosmos to know itself.” Sri Aurobindo’s The Life Divine traces out this evolution of consciousness as well, through the Greek and Sanskrit traditions as well as Darwinism and (relatively) modern philosophy. All are describing evolution’s slow and dynamic quest towards understanding itself.

         

I honestly think we are still grappling with the fact that our minds are distributed across a network by technology, and have been in a feedback loop between our brains and technologies at least since the invention of writing. As each new “mutation” occurs in the history of the evolution of information technology, the very character of our minds shifts. McLuhan’s Understanding Media is instructive here as well (he parsed it as the Global Village), and of course McLuhan was the bard who advised Leary on “Turn on, tune in, drop out” and was very influential on Terence McKenna.

One difference between now and Plato’s time is the infoquake through which we are all living. This radical increase in quantity no doubt has qualitative effects - it changes what it feels like to think and remember. Plato was working through the effect of one new information technology – writing – whereas today we “upgrade” every six months or so… Teilhard observes the correlative of this evolutionary increase in information - and the sudden thresholds it crosses - in the evolution of complexity and nervous systems. The noosphere is a way of helping us deal with this “phase transition” of consciousness that may well be akin to the phase transition between liquid water and water vapor - a change in degree that effects a change in kind.

Darwin’s Pharmacy suggests that ecodelics were precisely such a mutation in information technology that increased sexually selective fitness through the capacity to process greater amounts of information, and that they are “extraordinarily sensitive to initial rhetorical traditions.” What this means is that because ecodelic experiences are so sensitive to the context in which we experience them, they can help make us aware of the effect of language and music etc on our consciousness, and thereby offer an awareness of our ability to effect our own consciousness through our linguistic and creative choices. This can be helpful when trying to browse the infoquake. Many other practices do so as well - meditation is the most well established practice for noticing the effects we can have on our own consciousness, and Sufi dervishes demonstrate this same outcome for dancing. I do the same on my bicycle, riding up a hill and chanting.

One problem I have with much of the discourse of “memes” is that it is often highly reductionistic - it often forgets that ideas have an ecology too, they must be “cultured.” Here I would argue, drawing on Lawrence Lessig’s work on the commons, that the “brain” is a necessary but insufficient “spawning” ground for ideas that become actual. The commons is the spawning ground of ideas; brains are pretty obviously social as well as individual. Harvard biologist Richard Lewontin notes that there is no such thing as “self replicating” molecules, since they always require a context to be replicated. This problem goes back at least to computer scientist John von Neumann’s 1947 paper on self-reproducing automata.

I think Terence McKenna described the condition as "language is loose on planet three", and its modern version probably occurs first in the work of writer William S. Burroughs, whose notion of the "word virus" predates the "meme" by at least a decade. Then again this notion of "ideas are real" goes back to cosmologies that begin with the priority of consciousness over matter, as in "In the beginning was the word, and the word was god, and the word was with god." So even Burroughs could get a late pass for his idea. (…)

Q: Richard Dawkins’s definition of a meme is quite powerful:

“I think that a new kind of replicator has recently emerged on this very planet, […] already achieving evolutionary change at a rate that leaves the old gene panting far behind.” [The replicator is] human culture; the vector of transmission is language, and the spawning ground is the brain.

This notion that “the vector of transmission is language” is very compelling… It seems to suggest that just as in biological evolution the vector of transmission has been the DNA molecule, in the noosphere, the next stage up, it is LANGUAGE that has become a major player in the transfer of information towards achieving evolutionary change… Kind of affects how you think about the phrase “words have power”. This insight reminds me of a quote that describes, in words, the subjective ecstasy that a mind feels upon having a transcendent realization that feels as if it advances evolution:

"A universe of possibilities,

Grey infused by color,

The invisible revealed,

The mundane blown away

by awe” 

Is this what you mean by ‘the ecstasy of language’?

Richard Doyle: Above, I noted that ecodelics can make us aware of the feedback loops between our creative choices – should I eat mushrooms in a box? - Should I eat them with a fox? - and our consciousness. In other words, they can make us aware of the tremendous freedom we have in creating our own experience. Leary called this “internal freedom.” Becoming aware of the practically infinite choices we have to compose our lives, including the words we use to map them, can be overwhelming – we feel in these instances the “vertigo of freedom.” What to do? In ecodelic experience we can perceive the power of our maps. That moment in which we can learn to abide the tremendous creative choice we have, and take responsibility for it, is what I mean by the “ecstasy of language.” 

I would point out, though, that for those words you quote to do their work, they have to be read. The language does not do it "on its own" but as a result of the highly focused attention of readers. This may seem trivial but it is often left out, with some serious consequences. And “reading” can mean “follow up with interpretation”. I cracked up when I googled those lines above and found them in a corporate blog about TED, for example. Who knew that neo-romantic poetry was the emerging interface of the global corporate noosphere? (…)

Q: Buckminster Fuller described humans as “pattern integrities”, Ray Kurzweil says we are “patterns of information”. James Gleick’s new book, The Information, says that “information may be more primary than matter”… What do you make of this? And if we indeed are complex patterns, how can we hack the limitations of biology and entropy to preserve our pattern integrity indefinitely?

Richard Doyle: First: It is important to remember that the history of the concept and tools of “information” is full of blindspots – we seem to be constantly tempted to underestimate the complexity of any given system needed to make any bit of information meaningful or useful. Chaitin, Kolmogorov, Stephen Wolfram and John von Neumann each came independently to the conclusion that information is only meaningful when it is “run” - you can’t predict the outcome of even many trivial programs without running the program. So to say that “information may be more primary than matter” we have to remember that “information” does not mean “free from constraints.” Thermodynamics – including entropy – remains.
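The “only meaningful when it is run” point has a classic tiny demonstration. The Collatz rule below is standard; nothing in its three lines of logic reveals, without execution, how long a given input will wander:

```python
# A trivial program whose behaviour is only knowable by running it.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps before reaching 1
```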

Molecular and informatic reductionism – the view that you can best understand the nature of a biological system by cutting it up into the most significant bits, e.g. DNA – is a powerful model that enables us to do things with biological systems that we never could before. Artist Eduardo Kac collaborated with a French scientist to make a bioluminescent bunny. That’s new! But sometimes it is so powerful that we forget its limitations. The history of the human genome project illustrates this well. And the human genome is incredibly interesting. It’s just not the immortality hack many thought it would be.

In this sense biology is not a limitation to be “transcended” (Kurzweil), but a medium of exploration whose constraints are interesting and sublime. On this scale of ecosystems, “death” is not a “limitation” but an attribute of a highly dynamic interactive system. Death is an attribute of life. Viewing biology as a “limitation” may not be the best way to become healthy and thriving beings.

Now, that said, looking at our characteristics as “patterns of information” can be immensely powerful, and I work with it at the level of consciousness as well as life. Thinking of ourselves as “dynamic patterns of multiply layered and interconnected self transforming information” is just as accurate of a description of human beings as “meaningless noisy monkeys who think they see god”, and is likely to have much better effects. A nice emphasis on this “pattern” rather than the bits that make it up can be found in Carl Sagan’s “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”

Q: Richard Dawkins declared in 1986 that “What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions. […] If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” How would you explain the relationship between information technology and the reality of the physical world?

Richard Doyle: Again, information is indeed physical. We can treat a sequence of information as abstraction and take it out of its context – like a quotation or a jellyfish gene spliced into a rabbit to enable it to glow. We can compress information, dwindling the resources it takes to store or process it. But “Information, words, instructions” all require physical instantiation to even be “information, words, instructions.” Researcher Rolf Landauer showed back in the 1960s that even erasure is physical. So I actually think throbbing gels and oozes and slime mold and bacteria eating away at the garbage gyre are very important when we wish to “understand” life. I actually think Dawkins gets it wrong here – he is talking about “modeling” life, not “understanding” it. Erwin Schrödinger, the originator of the idea of the genetic code and therefore the beginning of the “informatic” tradition of biology that Dawkins speaks in here, knew this very well and insisted on the importance of first person experience for understanding.

So while I find these metaphors useful, that is exactly what they are: metaphors. There is a very long history to the attempt to model words and action together: Again, John 1:1 is closer to Dawkins’s position here than he may be comfortable with: “In the Beginning was the word, and the word was god, and the word was with god” is a way of working with this capacity of language to bring phenomena into being. It is really only because we habitually think of language as “mere words” that we continually forget that they are a manifestation of a physical system and that they have very actual effects not limited to the physics of their utterance – the words “I love you” can have an effect much greater than the amount of energy necessary to utter them. Our experiences are highly tuneable by the language we use to describe them.

Q: Talk about the mycelial archetype. Author Paul Stamets compares the pattern of the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure… What is the connection? What is the pattern that connects here?

Richard Doyle: First things first: Paul Stamets is a genius and we should listen to his world view carefully and learn from it. Along with Lynn Margulis and Dorion Sagan, whose work I borrow from extensively in Darwin’s Pharmacy (as well as many others), Stamets is asking us to contemplate and act on the massive interconnection between all forms of life. This is a shift in worldview that is comparable to the Copernican shift from a geocentric cosmos – it is a shift toward interconnection and consciousness of interconnection. And I like how you weave in Gregory Bateson's phrase “the pattern that connects” here, because Bateson (whose father, William Bateson, was one of the founders of modern genetics) continuously pointed toward the need to develop ways of perceiving the whole. The “mycelial archetype”, as you call it, is a reliable and rather exciting way to recall the whole: What we call “mushrooms” are really the fruiting bodies of an extensive network of cross connection.

That fuzz growing in an open can of tomato paste in your fridge – mycelium. So even opening our refrigerator – should we be lucky enough to have one, with food in it - can remind us that what we take to be reality is in actuality only appearance – a sliver, albeit a significant one for our world, of the whole. That fuzz can remind us that (1) appearance and reality are not the same thing at all and (2) beyond appearance there is a massive interconnection in unity. This can help remind us who and what we really are.

With the word “archetype”, you of course invoke the psychologist Carl Jung, who saw archetypes as templates for understanding, ways of organizing our story of the world. There are many archetypes – the Hero, the Mother, the Trickster, the Sage. They are very powerful because they help stitch together what can seem to be a chaotic world – that is both their strength and their weakness. It is a weakness because most of the time we are operating within an archetype and we don’t even know it, and we don’t know therefore that we can change our archetype.

Experimenting with a different archetype – imagining, for example, the world through the lens of a 2,400-year-old organism that is mostly invisible to a very short-lived and recent species becoming aware of its creative responsibility in altering the planet – is incredibly powerful, and in Darwin’s Pharmacy I am trying to offer a way to experiment with the idea of a plant planet as well as a “mycelium” archetype. One powerful aspect of treating the mycelium as humanity’s archetype is that it is “distributed” - it does not operate via a center of control but through cross connection “distributed” over a space.

Anything we can do to remember both our individuation and our interconnection is timely – we experience the world as individuals, and our task is to discover our nature within the larger scale reality of our dense ecological interconnection. In the book I point to the Upanishads’ “Tat Tvam Asi” as a way of comprehending how we can both be totally individual and an aspect of the whole.

Q: You’ve talked about the ecstasy of language and the role of rhetoric in shaping reality… These notions echo some of Terence McKenna's ideas about language… He calls language an “ecstatic activity of signification”… and says that for the “inspired one, it is almost as if existence is uttering itself through him”… Can you expand on this? How does language create reality? 

Richard Doyle: It’s incredibly fun and insightful to echo Terence McKenna. He’s really in this shamanic bard tradition that goes all the way back to Empedocles at least, and is distributed widely across the planet. He’s got a bit of Whitman in him with his affirmation of the erotic aspects of enlightenment. He was Emerson speaking to a Lyceum crowd remixed through rave culture. Leary and McKenna were resonating with the Irish bard archetype. And Terence was echoing Henry Munn, who was echoing Maria Sabina, whose chants and poetics can make her seem like Echo herself – a mythological storyteller and poet (literally “sound”) who so transfixes Hera (Zeus’s wife) that Zeus can consort with nymphs. Everywhere we look there are allegories of sexual selection’s role in the evolution of poetic & shamanic language! 

And Terence embodies the spirit of eloquence, helping translate our new technological realities (e.g. virtual reality, a fractal view of nature, radical ecology) and the states of mind that were likely to accompany them. Merlin Donald writes of the effects of “external symbolic storage” on human culture – as a onetime student of McLuhan’s, Donald was following up on Plato’s insights I mentioned above that writing changes how we think, and therefore, who we are.

Human culture is going through a fantastic “reality crisis” wherein we discover the creative role we play in nature. Our role in global climate change – not to mention our role in dwindling biodiversity – is the “shadow” side of our increasing awareness that humans have a radical creative responsibility for their individual and collective lives. And our lives are inseparable from the ecosystems with which we are enmeshed. THAT is reality. To the extent that we can gather and focus our attention on retuning our relation towards ecosystems in crisis, language can indeed shape reality. We’ll get the future we imagine, not necessarily the one we deserve.

Q: Robert Anton Wilson spoke about “reality tunnels”…. These ‘constructs’ can limit our perspectives and perception of reality, they can trap us, belittle us, enslave us, make us miserable or set us free… How can we hack our reality tunnel?  Is it possible to use rhetoric and/or psychedelics to “reprogram” our reality tunnel? 

Richard Doyle: We do nothing but program and reprogram our reality tunnels. Seriously, the Japanese reactor crisis follows on the BP oil spill as a reminder that we are deeply interconnected on the level of infrastructure – technology is now planetary in scale, so what happens here affects somebody, sometimes Everybody, there. These infrastructures – our food sheds, our energy grid, our global media - run on networks, protocols, global standards, agreements: language, software, images, databases and their mycelial networks.

The historian Michel Foucault called these “discourses”, but we need to connect these discourses to the nonhuman networks with which they are enmeshed, and globalization has been in part about connecting discourses to each other across the planet. Ebola ends up in Virginia, Starbucks in Hong Kong. This has been true for a long time, of course – Mutual Assured Destruction was planetary in scale and required a communication and control structure linking, for example, a Trident submarine under the arctic ice sheet – remember that? - to a putatively civilian political structure Eisenhower rightly warned us about: the military industrial complex. The moon missions illustrate this principle as well – we remember what was said as much as what else was done, and what was said, for a while, seemed to induce a sense of truly radical and planetary possibility.

So if we think of words as a description of reality rather than part of the infrastructure of reality, we miss out on the way different linguistic patterns act as catalysts for different realities. I call these “rhetorical softwares” – a term I used in my first two books, before I really knew about Wilson’s work or had worked through Korzybski with any intensity.

Now the first layer of our reality tunnel is our implicit sense of self – this is the only empirical reality any of us experiences – what we subjectively experience. RAW was a brilliant analyst of the ways experience is shaped by the language we use to describe it. One of my favorite examples from his work is his observation that in English, “reality” is a noun, so we start to treat it as a “thing”, when in fact reality, this cosmos, is also quite well mapped as an action – a dynamic unfolding for 13.7 billion years. That is a pretty big mismatch between language and reality, and can give us a sense that reality is inert, dead, lifeless, “concrete”, and thus not subject to change. By experimenting with what Wilson, following scientist John Lilly, called “metaprograms”, we can change the maps that shape the reality we inhabit. (…)

Q: The film Inception explored the notion that our inner world can be a vivid, experiential dimension, and that we can hack it, and change our reality… what do you make of this? 

Richard Doyle: The whole contemplative tradition insists on this dynamic nature of consciousness. “Inner” and “outer” are models for aspects of reality – words that map the world only imperfectly. Our “inner world” - subjective experience – is all we ever experience, so if we change it, obviously we will see a change in what we label “external” reality, which it is of course part of and not separable from. One of the maps we should experiment with, in my view, is this “inner” and “outer” one – this is why one of my aliases is “mobius.” A mobius strip helps make clear that “inside” and “outside” are… labels. As you run your finger along a mobius strip, the “inside” becomes “outside” and the “outside” becomes “inside.”

Q: Can we put inceptions out into the world?

Richard Doyle: We do nothing but! And, it is crucial to add, so too does the rest of our ecosystem. Bacteria engage in quorum sensing, begin to glow, and induce other bacteria to glow – this puts their inceptions into the world. Thanks to the work of scientists like Anthony Trewavas, we know that plants engage in signaling behavior between and across species and even kingdoms: orchids “throw” images of female wasps into the world, attracting male wasps, root cells map the best path through the soil. The whole blooming confusion of life is signaling, mapping and informing itself into the world. The etymology of “inception” is “to begin, take in hand” - our models and maps are like imagined handholds on a dynamic reality.

Q: What is the relationship between psychedelics and information technology? How are iPods, computers and the internet related to LSD? 

Richard Doyle: This book is part of a trilogy on the history of information in the life sciences. So, first: psychedelics and biology. It turns out that molecular biology and psychedelics were important contexts for each other. I first started noticing this when I found that many people who had taken LSD were talking about their experiences in the language of molecular biology – accessing their DNA and so forth. When I learned that psychedelic experience was very sensitive to “set and setting” - the mindset and context of their use - I wanted to find out how this language of molecular biology was affecting people’s experiences of the compounds. In other words, how did the language affect something supposedly caused by chemistry? 

Tracking the language through thousands of pages, I found that both the discourse of psychedelics and molecular biology were part of the “informatic vision” that was restructuring the life sciences as well as the world, and found common patterns of language in the work of Timothy Leary (the Harvard psychologist) and Francis Crick (who won the Nobel prize with James Watson and Maurice Wilkins for determining the structure of DNA in 1953), so in 2002 I published an article describing the common “language of information” spoken by Leary and Crick. I had no idea that Crick had apparently been using LSD when he was figuring out the structure of DNA. Yes, that blew my mind when it came out in 2004. I feel like I read that between the lines of Crick’s papers, which gave me confidence to write the rest of the book about the feedback between psychedelics and the world we inhabit.

The paper did home in on the role that LSD played in the invention of PCR (polymerase chain reaction) – Kary Mullis, who won the Nobel prize for the invention of this method of making copies of a sequence of DNA, talked openly of the role that LSD played in the process of invention. Chapter 4 of the book looks at the use of LSD in the “creative problem solving” studies of the 1960s. These studies – hard to imagine now, 39 years into the War on Drugs, but we can Change the Archetype - suggest that used with care, psychedelics can be part of effective training in remembering how to discern the difference between words and things, maps and territories.

In short, this research suggested that psychedelics were useful for seeing the limitations of words as well as their power, perhaps occasioned by the experience of the linguistic feedback loops between language and psychedelic experiences that themselves could never be satisfactorily described in language. I argue that Mullis had a different conception of information than mainstream molecular biology – a pragmatic concept steeped in what you can do with words rather than in what they mean. Mullis seems to have thought of information as “algorithms” - recipes of code - while the mainstream view was thinking of it implicitly as semantic, as “words with meaning.”

iPods, Internet, etc: Well, in some cases there are direct connections. Perhaps Bill Joy said it best when he said that there was a reason that LSD and Unix were both from Berkeley. What the Dormouse Said by John Markoff came out after I wrote my first paper on Mullis, while I was working on the book, and it was really confirmation of a lot of what I was seeing indicated by my conceptual model of what is going on, which is as follows: Sexual selection is a good way to model the evolution of information technology. It yields bioluminescence – the most common communication strategy on the planet – chirping insects, singing birds, peacocks fanning their feathers, singing whales, speaking humans, and humans with internet access. These are all techniques of information production, transformation or evaluation. I am persuaded by Geoffrey Miller’s update of Charles Darwin’s argument that language and mind are sexually selected traits, selected not simply for survival or even the representation of fitness, but for their sexiness. Leary: “Intelligence is the greatest aphrodisiac.”

I offer the hypothesis that psychedelics enter the human toolkit as “eloquence adjuncts” - tools and techniques for increasing the efficacy of language to seemingly create reality – different patterns of language (and other attributes of set and setting) literally cause different experiences. The informatic revolution is about applying this ability to create reality with different “codes” to the machine interface. Perhaps this is one of the reasons people like Mitch Kapor (a pioneer of computer spreadsheets), Stewart Brand (founder of a pre-internet computer commons known as the Well), Bob Wallace (one of the original Microsoft seven and an early proponent of shareware) and Mark Pesce were or are all psychonauts.

Q: Cyborg Anthropologist Amber Case has written about techno-social wormholes… the instant compression of time and space created every time we make a telephone call… What do you make of this compression of time and space made possible by the engineering “magic” of technology? 

Richard Doyle: It’s funny the role that the telephone call plays as an example in the history of our attempts to model the effects of information technologies. William Gibson famously defined cyberspace as the place where a telephone call takes place. (Gibson’s coinage of the term “cyberspace” is a good example of an “inception.”) Avital Ronell wrote about Nietzsche’s telephone call to the beyond and interprets the history of philosophy according to a “telephonic logic”. When I was a child my father once threw our telephone into the Atlantic Ocean – that was what he made of the magic of that technology, at least in one moment of anger. This was back in the day when Bell owned your phone and there was some explaining to do. This magic of compression has other effects – my dad got phone calls all day at work, so when he was at home he wanted to turn it off. The only way he knew to turn it off was to rip it out of the wall – there was no modular plug, just a wire into the wall - and throw it into the ocean.

So there is more than compression going on here: Deleuze and Guattari, along with the computer scientist Pierre Levy after them, call it “deterritorialization”. The differences between “here” and “there” are being constantly renegotiated as our technologies of interaction develop. Globalization is the collective effect of these deterritorializations and reterritorializations at any given moment.

And the wormhole example is instructive: the forces that enable such a collapse of space and time as the possibility of time travel would likely tear us to smithereens. The tensions and torsions of this deterritorialization are part of what is at play in the Wikileaks revolutions; this compression of time and space offers promise for distributed governance as well as turbulence. Time travel through wormholes, by the way, is another example of an inception – Carl Sagan was looking for a reasonable way to transport his fictional aliens in Contact, called Caltech physicist Kip Thorne for help, and Thorne came up with the idea.

Q: The film Vanilla Sky explored the notion of a scientifically-induced lucid dream where we can live forever and our world is built out of our memories and ”sculpted moment to moment and lived with the romantic abandon of a summer day or the feeling of a great movie or a pop song you always loved”. Can we sculpt ‘real’ reality as if it were a “lucid dream”?

Richard Doyle: Some traditions model reality as a lucid dream. The Diamond Sutra tells us that to be enlightened we must view reality as “a phantom, a dew drop, a bubble.”  This does not mean, of course, that reality does not exist, only that appearance has no more persistence than a dream and that what we call “reality” is our map of reality. When we wake up, the dream that had been so compelling is seen to be what it was: a dream, nothing more or less. Dreams do not lack reality – they are real patterns of information. They just aren’t what we usually think they are. Ditto for “ordinary” reality. Lucid dreaming has been practiced by multiple traditions for a long time – we can no doubt learn new ways of doing so. In the meantime, by recognizing and acting according to the practice of looking beyond appearances, we can find perhaps a smidgeon more creative freedom to manifest our intentions in reality.

Q: Paola Antonelli, design curator of MoMA, has written about Existenz Maximum, the ability of portable music devices like the iPod to create “customized realities”, imposing a soundtrack on the movie of our own life. This sounds empowering and godlike – can you expand on this notion? How is technology helping us design every aspect of both our external reality as well as our internal, psychological reality?

Richard Doyle: Well, the Upanishads and the Book of Luke both suggest that we “get our inner Creator on”, the former by suggesting “Tat Tvam Asi” - there is an aspect of you that is connected to Everything - and the latter by recommending that we look not here or there for the Kingdom of God, but “within.” So if this sounds “godlike”, it is part of a long and persistent tradition. I personally find the phrase “customized realities” redundant given the role of our always unique programs and metaprograms. So what we need to focus on is: to which aspect of ourselves do we wish to give this creative power? These customized realities could be empowering and godlike for corporations that own the material, or they could empower our planetary aspect that unites all of us, and everything in between. It is, as always, the challenge of the magus and the artist to decide how we want to customize reality once we know that we can.

Q: The Imaginary Foundation says that "to understand is to perceive patterns"… Some advocates of psychedelic therapy have said that certain chemicals heighten our perception of patterns… They help us “see more.” What exactly are they helping us understand? 

Richard Doyle: Understanding! One of the interesting bits of knowledge that I found in my research was some evidence that psychonauts scored better on the Witkin Embedded Figure test, a putative measure of a human subject’s ability to “distinguish a simple geometrical figure embedded in a complex colored figure.” When we perceive the part within the whole, we can suddenly get context, understanding.

Q: An article pointing to the use of psychedelics as catalysts for breakthrough innovation in Silicon Valley says that users …

"employ these cognitive catalysts, de-condition their thinking periodically and come up with the really big connectivity ideas arrived at wholly outside the linear steps of argument. These are the gestalt-perceiving, asterism-forming “aha’s!” that connect the dots and light up the sky with a new archetypal pattern."

This seems to echo what other intellectuals have been saying for ages.  You referred to Cannabis as “an assassin of referentiality, inducing a butterfly effect in thought. Cannabis induces a parataxis wherein sentences resonate together and summon coherence in the bardos between one statement and another.”

Baudelaire also wrote about cannabis as inducing an artificial paradise of thought:  

“…It sometimes happens that people completely unsuited for word-play will improvise an endless string of puns and wholly improbable idea relationships fit to outdo the ablest masters of this preposterous craft. […and eventually]… Every philosophical problem is resolved. Every contradiction is reconciled. Man has surpassed the gods.”

Anthropologist Henry Munn wrote that:

"Intoxicated by the mushrooms, the fluency, the ease, the aptness of expression one becomes capable of are such that one is astounded by the words that issue forth… At times… the words leap to mind, one after another, of themselves without having to be searched for: a phenomenon similar to the automatic dictation of the surrealists except that here the flow of consciousness, rather than being disconnected, tends to be coherent: a rational enunciation of meanings.  The spontaneity they liberate is not only perceptual, but linguistic, the spontaneity of speech, of fervent, lucid discourse, astonishing. […] For the inspired one, it is as if existence were uttering itself through him […]

Can you expand a bit on how certain ecodelics (as well as marijuana) can help us de-condition our thinking, have creative breakthroughs as well as intellectual catharsis? How is it that “intoxication” could, under certain conditions, actually improve our cognition and creativity and contribute to the collective intelligence of the species?

Richard Doyle: I would point, again, to Pahnke's description of ego death. This is by definition an experience when our maps of the world are humbled. In the breakdown of our ordinary worldview - such as when a (now formerly)  secular being such as myself finds himself  feeling unmistakably sacred - we get a glimpse of reality without our usual filters. It is just not possible to use the old maps, so we get even an involuntary glimpse of reality. This is very close to the Buddhist practice of exhausting linguistic reference through chanting or Koans - suddenly we see the world through something besides our verbal mind.

Ramana Maharshi says that in the silence of the ego we perceive reality - reality IS the breakdown of the ego. Aldous Huxley, who was an extraordinarily adroit and eloquent writer with knowledge of increasingly rare breadth and depth, pointed to a quote by William Blake when trying to sum up his experience: the doors of perception were cleansed. This is a humble act, if you think about it: Huxley, faced with the beauty and grandeur of his mescaline experience, offers the equivalent of “What he said!” Huxley also said that psychedelics offered a respite from “the throttling embrace of the self”, suggesting that we see the world without the usual filters of our egoic self. (…)

And if you look carefully at the studies by pioneers such as Myron Stolaroff and Willis Harman that you reference, as I do in the book, you will see that great care was taken to compose the best contexts for their studies. Subjects, for example, were told not to think about personal problems but to focus on their work at hand, and, astonishingly enough, it seems to have worked. These are very sensitive technologies and we really need much more research to explore their best use. This means more than studying their chemical function - it means studying the complex experiences human beings have with them. Step one has to be accepting that ecodelics are and always have been an integral part of human culture for some subset of the population. (…)

Q: Kevin Kelly refers to technological evolution as following the momentum begun at the big bang - he has stated:

"…there is a continuum, a connection back all the way to the Big Bang with these self-organizing systems that make the galaxies, stars, and life, and now is producing technology in the same way. The energies flowing through these things are, interestingly, becoming more and more dense. If you take the amount of energy that flows through one gram per second in a galaxy, it is increased when it goes through a star, and it is actually increased in life…We don’t realize this. We think of the sun as being a hugely immense amount of energy. Yet the amount of energy running through a sunflower per gram per second of the livelihood, is actually greater than in the sun. Actually, it’s so dense that when it’s multiplied out, the sunflower actually has a higher amount of energy flowing through it. "..

Animals have even higher energy usage than the plant, and a jet engine has even higher than an animal. The most energy-dense thing that we know about in the entire universe is the computer chip in your computer. It is sending more energy per gram per second through that than anything we know. In fact, if it was to send it through any faster, it would melt or explode. It is so energy-dense that it is actually at the edge of explosion.”…  

Can you comment on the implications of what he’s saying here?

Richard Doyle: I think maps of “continuity” are crucial and urgently needed. We can model the world as either “discrete” - made up of parts - or “continuous” - composing a whole - to powerful effect. Both are in this sense true. This is not “relativism” but a corollary of that creative freedom to choose our models that seems to be an attribute of consciousness. The mechanistic worldview extracts, separates and reconnects raw materials, labor and energy in ways that produce astonishing order as well as disorder (entropy).

By mapping the world as discrete – such as the difference between one second and another – and uniform – to a clock, there is no difference between one second and another – we have transformed the planet. Consciousness informed by discrete maps of reality has been an actual geological force in a tiny sliver of time. In so doing, we have transformed the biosphere. So you can see just how actual this relation between consciousness, its maps, and earthly reality is. This is why Vernadsky, a geophysicist, thought we needed a new term for the way consciousness functions as a geological force: noosphere.

These discrete maps of reality are so powerful that we forget that they are maps. Now if the world can be cut up into parts, it is only because it forms a unity. A Sufi author commented that the unity of the world was both the most obvious and obscure fact. It is obvious because our own lives and the world we inhabit can be seen to continue without any experienced interruption – neither the world nor our lives truly stops and starts. This unity can be obscure because in a literal sense we can’t perceive it with our senses – this unity can only be “perceived” by our minds. We are so effective as separate beings that we forget the whole for the part.

The world is more than a collection of parts, and we can quote Carl Sagan: “The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.” Equally beautiful is what Sagan follows up with: “The cosmos is also within us. We are made of star stuff.” Perhaps this is why models such as Kelly’s feel so powerful: reminding ourselves that there is a continuity between the Big Bang and ourselves means we are an aspect of something unfathomably grand, beautiful, complex and unbroken. This is perhaps the “grandeur” Darwin was discussing. And when we experience that grandeur it can help us think and act in ways appropriate to a geological force.

I am not sure about the claims for energy that Kelly is making – I would have to see the context and the source of his data – but I do know that when it comes to thermodynamics, what he is saying rings true. We are dissipative structures far from equilibrium, meaning that we fulfill the laws of thermodynamics. Even though biological systems such as ourselves are incredibly orderly – and we export that order through our maps onto and into the world – we also yield more entropy than our absence. Living systems, according to an emerging paradigm of Stanley Salthe, Rob Swenson, the aforementioned Margulis and Sagan, Eric Schneider, James J. Kay and others, maximize entropy, and the universe is seeking to dissipate ever greater amounts of entropy.

Order is a way to dissipate yet more energy. We’re thermodynamic beings, so we are always on the prowl for new ways to dissipate energy as heat and create uncertainty (entropy), and consciousness helps us find ever new ways to do so. (In case you are wondering, Consciousness is the organized effort to model reality that yields ever increasing spirals of uncertainty in Deep Time. But you knew that.) It is perhaps in this sense that, again following Carl Sagan, “We are a way for the cosmos to know itself.” That is a pretty great map of continuity.

What I don’t understand in Kelly’s work, and I need to look at with more attention, is the discontinuity he posits between biology and technology. In my view our maps have made us think of technology as different in kind from biology, but the global mycelial web of fungi suggests otherwise, and our current view of technology seems to intensify this sense of separation even as we get interconnected through technology. I prefer Noosphere to what Kelly calls the Technium because it reminds us of the ways we are biologically interconnected with our technosocial realities. Noosphere sprouts from biosphere.

Q: There is this notion of increasing complexity… Yet in a universe where entropy destroys almost everything, here we are, the cutting edge of evolution, taking the reins and accelerating this emergent complexity… Kurzweil says that this makes us “very important”: 

“…It turns out that we are central, after all.  Our ability to create models—virtual realities—in our brains, combined with our modest-looking thumbs, has been sufficient to usher in another form of evolution: technology. That development enabled the persistence of the accelerating pace that started with biological evolution. It will continue until the entire universe is at our fingertips.”   

What do you think?

Richard Doyle: Well, I think from my remarks already you can see that I agree with Kurzweil here and can only suggest that it is for this very reason that we must be very creative, careful and cunning with our models. Do we model the technologies that we are developing according to the effects they will have on the planetary whole? Only rarely, though this is what we are trying to do at the Penn State Center for Nanofutures, as are lots of people involved in Science, Technology and Society as well as engineering education. When we develop technologies - and that is the way psychedelics arrived in modern culture, as technologies -  we must model their effects not only on the individuals who use them, but on the whole of our ecosystem and planetary society.

If our technological models are based on the premise that this is a dead planet – and most of them very much are; one is called all kinds of names if one suggests otherwise - animist, vitalist, Gaian intelligence agent, names I wear with glee – then we will end up with an asymptotically dead planet. Consciousness will, of course, like the Terminator, “Be Back” should we perish, but let us hope that it learns to experiment better with its maps and learns to notice reality just a little bit more. I am actually an optimist on this front and think that a widespread “aha” moment is occurring where there is a collective recognition of the feedback loops that make up our technological & biological evolution.

Again, I don’t know why Kurzweil seems to think that technological evolution is discontinuous with biological evolution – technology is nested within the network of “wetwares” that make it work, and our wetwares are increasingly interconnected with our technological infrastructure, as the meltdowns in Japan demonstrate along with the dependence of many of us – we who are more bacterial than human by dry weight - upon a network of pharmaceuticals and electricity for continued life. The E. coli outbreak in Europe is another case in point – our biological reality is linked with the technological reality of supply chain management. Technological evolution is biological evolution enabled by the maps of reality forged by consciousness. (…)

Whereas technology for many promised the “disenchantment” of the world –the rationalization of this world of the contemplative spirit as everything became a Machine – here was mystical contemplative experience manifesting itself directly within what sociologist Max Weber called the “iron cage of modernity”, Gaia bubbling up through technological “Babylon.”

Now many contemplatives have sought to share their experiences through writing – pages and pages of it. As we interconnect through information technology, we perhaps have the opportunity to repeat this enchanted contemplative experience of radical interconnection on another scale, and through other means. Just say Yes to the Noosphere!”

Richard Doyle, Professor of English and Affiliate Faculty in Information Science and Technology at Pennsylvania State University, in conversation with Jason Silva, Creativity, evolution of mind and the “vertigo of freedom”, Big Think, June 21, 2011. (Illustrations: 1) Randy Mora, Artífices del sonido, 2) Noosphere)

See also:

☞ RoseRose, Google and the Myceliation of Consciousness
Kevin Kelly on Why the Impossible Happens More Often
Luciano Floridi on the future development of the information society
Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Mark Changizi on Humans, Version 3.0.
Cyberspace tag on Lapidarium

Mar
20th
Tue
permalink

Nicholas Carr on the evolution of communication technology and our compulsive consumption of information


"The term “information age” gets across our sense that we’re engulfed in information in a way that is very different from anything that’s come before. (…)

I think it’s pretty clear that humans have a natural inclination, even compulsion, to seek out information. We want not only to be entertained but to know everything that is going on around us. And so as these different mass media have proliferated, we’ve gone along with the technology and consumed – to put an ugly term on it – more information. (…)

In “The Shallows” I argue that the Internet fundamentally encourages very rapid gathering of small bits of information – the skimming and scanning of information to quickly get the basic gist of it. What it discourages are therefore the ways of thinking that require greater attentiveness and concentration, everything from contemplation to reflection to deep reading.

The Internet is a hypertext system, which means that it puts lots of links in a text. These links are valuable to us because they allow us to go very quickly between one bit of information and another. But there are studies that compare what happens when a person reads a printed page of text versus when you put links into that text. Even though we may not be conscious of it, a link represents a little distraction, a little division of attention. You can see in the evidence that reading comprehension goes down with hypertext versus plaintext. (…)

The reason why I start with Tom Standage’s book is because we tend to think of the information age as something entirely new. In fact, people have been wrestling with information for many centuries. If I was going to say when the information age started, I would probably say the 15th century with the invention of the mechanical clock, which turned time into a measurable flow, and the printing press, which expanded our ability to tap into other kinds of thinking. The information age has been building ever since then.

Standage covers one very important milestone in that story, which is the building of the telegraph system in the 19th century. The telegraph was the first really efficient system for long-distance, almost instantaneous communication. It’s a short book, a very lively read, and it shows how this ability to throw one’s thoughts across the world changed all aspects of society. It certainly changed the business world. Suddenly you could coordinate a business not just in a local area, but across the country or across oceans. It had a lot of social implications too, as people didn’t have to wait for letters to come over the course of days. And as Standage points out, it inspired a lot of the same hopes and concerns that we have today with the Internet. (…)

If “The Information” is a sprawling, sweeping story of how information has changed over time, one thing it doesn’t get into is the commercial nature of information as a good that is bought and sold. That’s the story Tim Wu tells in “The Master Switch”. His basic argument is that whenever a new communication medium arises, a similar pattern occurs. The technology starts off as a hobbyist’s passion, democratic and open. Then over time, as it becomes more popular, it starts to be dominated by corporate interests and becomes much more formalised, before eventually being displaced by a new technology.

You see this with radio, for instance. In the beginning, radio was very much a hobbyist’s technology. When people bought a radio back then it wasn’t just a receiver, it was a transmitter. People would both receive and transmit information through their radio – it was an early version of the blogosphere in some ways. Then dominant radio corporations come in, and suddenly radio isn’t a democratic tool for transmitting and receiving information, it’s purely for receiving. Tim Wu tells a series of stories like this, with television following the same arc. All of that history is really a backdrop for a discussion of the Internet, which Wu suggests will likely follow the same cycle.

So far, I think we’ve seen that. When the World Wide Web appeared 20 years ago, there was all kinds of utopian, democratic rhetoric about how it was breaking the hold of big corporations over media and communications. You saw a huge explosion of personal websites. But over time you saw corporate interests begin to dominate the web – Google, Facebook and so on. If you look at how much time a user devotes to Facebook, it shows a consolidation and centralisation of web activity onto these large corporate sites. (…)

Matthew Crawford argues that we’re losing our sense of importance of actual physical interaction with the natural world. He says that the richest kind of thinking that’s open to human beings is not thinking that takes place in the mind but thinking that involves both the mind and the body interacting with the world. Whereas when we’re sitting at our computer or looking at our smartphone, we’re in a world of symbols. It seems to me that one of the dangers of the Internet, and the way that the screen mediates all work and other kinds of processing, is that not only are we distancing ourselves from interaction with the world, but we’re beginning to lose sight of the fact that that’s even important. (…)

As more and more of the physical world is operated by software and computers, we shut off interacting with the world. Crawford, in addition to being a political philosopher, is also a motorcycle mechanic. And a lot of the book is simply stories of being a mechanic. One of the points he makes is that people used to know how their cars worked. They could open the hood, see all of the parts of their engine, change their own oil. Now when you open your hood you can’t touch anything and you don’t know how the thing works. We’ve allowed ourselves to be removed from the physical world. We’re told just to look at our GPS screen and forget how the engine works.

Q: A key point about the information age we should mention is that societies have moved from an industrial economy to a service economy, with more people in white-collar jobs and increasing income disparity as a result.

That’s absolutely true. More and more of our basic jobs, due to broad shifts in the economy, involve manipulating symbols, whether it’s words, numbers or images. That too serves to distance ourselves from manual manipulation of the world. We have offloaded all of those jobs to specialists in order to spend more time working with symbols.

Q: Tell us why you’re closing with Gary Shteyngart’s novel “Super Sad True Love Story.”

I think that novelists, and other artists, are only beginning to grapple with the implications of the Internet, smartphones and all of that. Literature provides a different and very valuable way of perceiving those implications, so I decided to end with a novel. This book is both funny and extremely horrifying. It’s set in a future that is very close in some ways to the present. Shteyngart takes phenomena and trends that are around us but we don’t even notice, pushes them a little more extreme, and suddenly it gives you a new way to think about not only where we’re heading but where we already are. (…)

As is true with most dystopian science fiction, I don’t think it’s an attempt to portray what’s going to happen. It’s more an insight into how much we and our societies have changed in a very short time, without really being aware of it. If somebody from even 10 years ago suddenly dropped into the world and saw us all walking down the street staring at these little screens, hitting them with our thumbs, it would seem very strange.

It is becoming more and more normal to monitor your smartphone even while having a conversation with a friend, spouse or child. A couple will go out to a restaurant and the first thing they will each do is stick their iPhone or Android on the table in front of them, basically announcing that they’re not going to give their full attention to the other person. So technology seems to be changing even our relationships and social expectations. (…)

Q: In a hundred years’ time, what do you think the legacy of the early Internet will be?

I think the legacy will both be of enormous benefits – particularly those that can be measured in terms of efficiency and productivity, but also the ability for people to communicate with others – and also of more troubling consequences. We are witnessing an erosion not only of privacy but of the sense that privacy of the individual is important. And we are seeing the commercialisation of processes of communication, affiliation and friendship that used to be considered intimate.

You’re probably right to talk about a hundred years to sort this all out. There are a whole lot of threads to the story that, being in the midst of it, are hard to see properly, and it’s difficult to figure out what the balance of good, bad and indifferent is.

Q: What’s next in the immediate five or 10 years for the information age?

More of the same. Overall I think the general trend, as exemplified by social networks and the evolution of Google, is towards ever smaller bits of information delivered ever more quickly to people who are increasingly compulsive consumers of media and communication products. So I would say more screens, smaller screens, more streams of information coming at us from more directions, and more of us adapting to that way of living and thinking, for better or worse.

Q: So we’re not at the apex of the information age? That peak is yet to come?

All indications are that we’re going to see more rather than less.”

Nicholas Carr, American writer, interviewed by Alec Ash, Our compulsive consumption of information, The Browser - Salon.com, Mar 19, 2012.

See also:

Does Google Make Us Stupid?
Nicholas Carr on what the internet is doing to our brains
Nicholas Carr on Books That Are Never Done Being Written

Dec
27th
Tue
permalink

'To understand is to perceive patterns'


"Everything we care about lies somewhere in the middle, where pattern and randomness interlace."

James Gleick, The Information: A History, a Theory, a Flood, Pantheon, 2011

"Humans are pattern-seeking story-telling animals, and we are quite adept at telling stories about patterns, whether they exist or not."

Michael Shermer

"The pattern, and it alone, brings into being and causes to pass away and confers purpose, that is to say, value and meaning, on all there is. To understand is to perceive patterns. (…) To make intelligible is to reveal the basic pattern.”

Isaiah Berlin, British social and political theorist, philosopher and historian, (1909-1997), The proper study of mankind: an anthology of essays, Chatto & Windus, 1997, p. 129.

"One of the most wonderful things about the emerging global superbrain is that information is overflowing on a scale beyond what we can wrap our heads around. The electronic, collective, hive mind that we know as the Internet produces so much information that organizing this data — and extracting meaning from it — has become the conversation of our time.

Sanford Kwinter’s Far From Equilibrium tackles everything from technology to society to architecture under the thesis that creativity, catharsis, transformation and progressive breakthroughs occur far from equilibrium. So even while we may feel overwhelmed and intimidated by the informational overload and radical transformations of our times, we should, perhaps, take refuge in knowing that only good can come from this. He writes:

“(…) We accurately think of ourselves today not only as citizens of an information society, but literally as clusters of matter within an unbroken informational continuum: "We are all," as the great composer Karlheinz Stockhausen once said, "transistors, in the literal sense. We send, receive and organize [and] so long as we are vital, our principal work is to capture and artfully incorporate the signals that surround us.” (…)

Clay Shirky often refers to the “Cognitive Surplus,” the overflowing output of the billions of minds participating in the electronic infosphere. A lot of this output is silly, but a lot of it is meaningful and wonderful. The key lies in curation, which is the result of pattern-recognition put into practice. (…)

Matt Ridley’s TED Talk, “When Ideas Have Sex,” points to this intercourse of information and how it births new thought-patterns. Ideas, freed from the confines of space and time by the invisible, wireless metabrain we call The Internet, collide with one another and explode into new ideas, accelerating the collective intelligence of the species. Creativity thrives when minds come together. The last great industrial-strength creative catalyst was the city: it is no coincidence that when people migrate to cities in large numbers, creativity and innovation thrive.

Now take this very idea and apply it to the web: the web essentially is a planetary-scale nervous system where individual minds take on the role of synapses, firing electrical pattern-signals to one another at light speed — the net effect being an astonishing increase in creative output. (…)

Ray Kurzweil, too, expounds on this idea of the power of patterns:

“I describe myself as a patternist, and believe that if you put matter and energy in just the right pattern you create something that transcends it. Technology is a good example of that: you put together lenses and mechanical parts and some computers and some software in just the right combination and you create a reading machine for the blind. It’s something that transcends the semblance of parts you’ve put together. That is the nature of technology, and it’s the nature of the human brain.

Biological molecules put in a certain combination create the transcending properties of human intelligence; you put notes and sounds together in just the right combination, and you create a Beethoven symphony or a Beatles song. So patterns have a power that transcends the parts of that pattern.”

R. Buckminster Fuller refers to us as “pattern integrities.” “Understanding order begins with understanding patterns,” he was known to say. E.J. White, who worked with Fuller, says that:

“For Fuller, the thinking process is not a matter of putting anything into the brain or taking anything out; he defines thinking as the dismissal of irrelevancies, as the definition of relationships” — in other words, thinking is simultaneously a form of filtering out the data that doesn’t fit while highlighting the things that do fit together… We dismiss whatever is an “irrelevancy” and retain only what fits, we form knowledge by ‘connecting the dots’… we understand things by perceiving patterns — we arrive at conclusions when we successfully reveal these patterns. (…)

Fuller’s primary vocation is as a poet. All his disciplines and talents — architect, engineer, philosopher, inventor, artist, cartographer, teacher — are just so many aspects of his chief function as integrator… the word “poet” is a very general term for a person who puts things together in an era of great specialization when most people are differentiating or taking things apart… For Fuller, the stuff of poetry is the patterns of human behavior and the environment, and the interacting hierarchies of physics and design and industry. This is why he can describe Einstein and Henry Ford as the greatest poets of the 20th century.” (…)

In a recent article in Reality Sandwich, Simon G. Powell proposed that patterned self-organization is a default condition of the universe:

“When you think about it, Nature is replete with instances of self-organization. Look at how, over time, various exquisitely ordered patterns crystallise out of the Universe. On a macroscopic scale you have stable and enduring spherical stars, solar systems, and spiral galaxies. On a microscopic scale you have atomic and molecular forms of organization. And on a psychological level, fed by all this ambient order and pattern, you have consciousness, which also seems to organise itself into being (by way of the brain). Thus, patterned organisation of one form or another is what nature is proficient at doing over time.

This being the case, is it possible that the amazing synchronicities and serendipities we experience when we’re doing what we love, or following our passions — the signs we pick up on when we follow our bliss — represent an emerging ‘higher level’ manifestation of self-organization? To make use of an alluring metaphor, are certain events and cultural processes akin to iron filings coming under the organising influence of a powerful magnet? Is serendipity just the playing out on the human level of the same emerging, patterned self-organization that drives evolution?

Barry Ptolemy's film Transcendent Man reminds us that the universe has been unfolding in patterns of greater complexity since the beginning of time. Says Ptolemy:

“First of all we are all patterns of information. Second, the universe has been revealing itself as patterns of information of increasing order since the big bang. From atoms, to molecules, to DNA, to brains, to technology, to us now merging with that technology. So the fact that this is happening isn’t particularly strange to a universe which continues to evolve and unfold at ever accelerating rates.”

Jason Silva, Connecting All The Dots - Jason Silva on Big Think, Imaginary Foundation, Dec 2010

"Networks are everywhere. The brain is a network of nerve cells connected by axons, and cells themselves are networks of molecules connected by biochemical reactions. Societies, too, are networks of people linked by friendships, familial relationships and professional ties. On a larger scale, food webs and ecosystems can be represented as networks of species. And networks pervade technology: the Internet, power grids and transportation systems are but a few examples. Even the language we are using to convey these thoughts to you is a network, made up of words connected by syntactic relationships.”

“For decades, we assumed that the components of such complex systems as the cell, the society, or the Internet are randomly wired together. In the past decade, an avalanche of research has shown that many real networks, independent of their age, function, and scope, converge to similar architectures, a universality that allowed researchers from different disciplines to embrace network theory as a common paradigm.”

Albert-László Barabási, physicist, best known for his research on network theory, and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
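Barabási’s “similar architectures” can be made concrete with a few lines of simulation. Under preferential attachment (each new node links to existing nodes with probability proportional to their current degree), a handful of hubs ends up holding a disproportionate share of the connections, unlike a randomly wired network. A minimal sketch, assuming an illustrative network size and growth rule that are not taken from the article:

```python
import random

def preferential_attachment(n_nodes, links_per_node=2, seed=42):
    """Grow a Barabasi-Albert-style network: each new node attaches to
    existing nodes chosen with probability proportional to their degree."""
    random.seed(seed)
    targets = [0, 1]        # every edge appends both endpoints to this list,
    degree = {0: 1, 1: 1}   # so random.choice(targets) is degree-weighted
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < links_per_node:
            chosen.add(random.choice(targets))
        degree[new] = 0
        for old in chosen:
            degree[old] += 1
            degree[new] += 1
            targets += [old, new]
    return degree

deg = preferential_attachment(2000)
top = max(deg.values())
median = sorted(deg.values())[len(deg) // 2]
print(top, median)  # the biggest hub dwarfs the typical node
```

Rerunning with different seeds shows the same signature the article describes: a heavy-tailed degree distribution with a few dominant hubs (Google or Facebook, in the web’s case), whereas a randomly wired graph’s degrees cluster tightly around the mean.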

“Coral reefs are sometimes called “the cities of the sea”, and part of the argument is that we need to take the metaphor seriously: the reef ecosystem is so innovative because it shares some defining characteristics with actual cities. These patterns of innovation and creativity are fractal: they reappear in recognizable form as you zoom in and out, from molecule to neuron to pixel to sidewalk. Whether you’re looking at the original innovations of carbon-based life, or the explosion of new tools on the web, the same shapes keep turning up. (…) When life gets creative, it has a tendency to gravitate toward certain recurring patterns, whether those patterns are self-organizing, or whether they are deliberately crafted by human agents.”

— Steven Johnson, author of Where Good Ideas Come From, cited by Jason Silva

"Network systems can sustain life at all scales, whether intracellularly or within you and me or in ecosystems or within a city. (…) If you have a million citizens in a city or if you have 10^14 cells in your body, they have to be networked together in some optimal way for that system to function, to adapt, to grow, to mitigate, and to be long term resilient."

Geoffrey West, British theoretical physicist, The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.
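West’s optimal-network claim shows up quantitatively as power-law scaling: many whole-organism quantities behave like Y = Y0 * N^b, with Kleiber’s roughly 3/4-power law for metabolic rate as the classic example from his published work. The exponent can be recovered from data by a straight-line fit in log-log space. A sketch, assuming synthetic data constructed to follow a 3/4-power law (the numbers are illustrative, not measurements):

```python
import math

def fit_power_law(sizes, outputs):
    """Least-squares fit of log(Y) = log(Y0) + beta * log(N);
    returns the scaling exponent beta."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(y) for y in outputs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return beta

# Synthetic "metabolic rate" data obeying Kleiber-like 3/4-power scaling.
masses = [10 ** k for k in range(1, 7)]
rates = [3.0 * m ** 0.75 for m in masses]
print(round(fit_power_law(masses, rates), 3))  # recovers the 0.75 exponent
```

A sublinear exponent like 0.75 is what makes larger organisms more efficient per cell; West reports superlinear exponents (around 1.15) for cities’ socioeconomic outputs, which is why, on his account, cities keep growing while organisms and corporations die.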

“Recognizing this super-connectivity and conductivity is often accompanied by blissful mindbody states and the cognitive ecstasy of multiple “aha’s!” when the patterns in the mycelium are revealed. That Googling that has become a prime noetic technology (How can we recognize a pattern and connect more and more, faster and faster?: superconnectivity and superconductivity) mirrors the increased speed of connection of thought-forms from cannabis highs on up. The whole process is driven by desire not only for these blissful states in and of themselves, but also as the cognitive resource they represent. “The devices of desire are those that connect,” because as Johnson says “chance favors the connected mind”.

Google and the Myceliation of Consciousness, Reality Sandwich, 10-11-2007

Jason Silva, Venezuelan-American television personality, filmmaker, gonzo journalist and founding producer/host for Current TV, To understand is to perceive patterns, Dec 25, 2011 (Illustration: Color Blind Test)

[This note will be gradually expanded]

See also:

The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.
☞ Albert-László Barabási and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
Google and the Myceliation of Consciousness, Reality Sandwich, 10.11.2007
The Story of Networks, Lapidarium notes
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
☞ Manuel Lima, visualcomplexity.com, A visual exploration on mapping complex networks
Constructal theory, Wiki
☞ A. Bejan, Constructal theory of pattern formation (pdf), Duke University
Pattern recognition, Wiki
Patterns tag on Lapidarium
Patterns tag on Lapidarium notes