Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Homepage
Twitter
Facebook

A Box Of Stories
Reading Space

Contact

Archive

Sep 29th, Sun · permalink

Kevin Kelly: The Improbable is the New Normal

"The improbable consists of more than just accidents. The internets are also brimming with improbable feats of performance — someone who can run up a side of a building, or slide down suburban roof tops, or stack up cups faster than you can blink. Not just humans, but pets open doors, ride scooters, and paint pictures. The improbable also includes extraordinary levels of super human achievements: people doing astonishing memory tasks, or imitating all the accents of the world. In these extreme feats we see the super in humans.

Every minute a new impossible thing is uploaded to the internet and that improbable event becomes just one of hundreds of extraordinary events that we’ll see or hear about today. The internet is like a lens which focuses the extraordinary into a beam, and that beam has become our illumination. It compresses the unlikely into a small viewable band of everyday-ness. As long as we are online - which is almost all day many days — we are illuminated by this compressed extraordinariness. It is the new normal.

That light of super-ness changes us. We no longer want mere presentations, we want the best, greatest, the most extraordinary presenters alive, as in TED. We don’t want to watch people playing games, we want to watch the highlights of the highlights, the most amazing moves, catches, runs, shots, and kicks, each one more remarkable and improbable than the other.

We are also exposed to the greatest range of human experience, the heaviest person, shortest midgets, longest mustache — the entire universe of superlatives! Superlatives were once rare — by definition — but now we see multiple videos of superlatives all day long, and they seem normal. Humans have always treasured drawings and photos of the weird extremes of humanity (early National Geographics), but there is an intimacy about watching these extremities on video on our phones while we wait at the dentist. They are now much realer, and they fill our heads.

I see no end to this dynamic. Cameras are becoming ubiquitous, so as our collective recorded life expands, we’ll accumulate thousands of videos showing people being struck by lightning. When we all wear tiny cameras all the time, then the most improbable accident, the most superlative achievement, the most extreme actions of anyone alive will be recorded and shared around the world in real time. Soon only the most extraordinary moments of our 6 billion citizens will fill our streams. So henceforth rather than be surrounded by ordinariness we’ll float in extraordinariness. (…)

When the improbable dominates the archive to the point that it seems as if the library contains ONLY the impossible, then these improbabilities don’t feel as improbable. (…)

To the uninformed, the increased prevalence of improbable events will make it easier to believe in impossible things. A steady diet of coincidences makes it easy to believe they are more than just coincidences, right? But to the informed, a slew of improbable events makes it clear that the unlikely sequence, the outlier, the black swan event, must be part of the story. After all, in 100 flips of the penny you are just as likely to get 100 heads in a row as any other sequence. But in both cases, when improbable events dominate our view — when we see an internet river streaming nothing but 100 heads in a row — it makes the improbable more intimate, nearer.

I am unsure of what this intimacy with the improbable does to us. What happens if we spend all day exposed to the extremes of life, to a steady stream of the most improbable events, and try to run ordinary lives in a background hum of superlatives? What happens when the extraordinary becomes ordinary?

The good news may be that it cultivates in us an expanded sense of what is possible for humans, and for human life, and so expands us. The bad news may be that this insatiable appetite for super-superlatives leads to dissatisfaction with anything ordinary.”

Kevin Kelly, founding executive editor of Wired magazine and former editor/publisher of the Whole Earth Catalog, The Improbable is the New Normal, The Technium, Jan 7, 2013.
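
A brief arithmetic note on Kelly’s coin-flip aside (mine, not the essay’s): the claim holds for specific ordered sequences. Any particular sequence of 100 fair flips has probability 2^-100, so “100 heads” is exactly as likely as any other fixed sequence; what is rare is the description “100 heads in a row” compared with loose descriptions like “about half heads,” which pool enormously many sequences:

    P(\text{one specific sequence of 100 flips}) = \left(\tfrac{1}{2}\right)^{100} = 2^{-100} \approx 7.9 \times 10^{-31}
    P(\text{exactly 50 heads, in any order}) = \binom{100}{50}\, 2^{-100} \approx 0.08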

May 20th, Sun · permalink

The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks

    
Image: Library of Congress

“Knowledge is not a result merely of filtering or algorithms. It results from a far more complex process that is social, goal-driven, contextual, and culturally-bound. We get to knowledge — especially “actionable” knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles. Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.”

David Weinberger, The Problem with the Data-Information-Knowledge-Wisdom Hierarchy, Harvard Business Review, Feb 2, 2010.

"The digitization of 21st-century media, Weinberger argues, leads not to the creation of a “global village" but rather to a new understanding of what knowledge is, to a change in the basic epistemology governing the universe. And this McLuhanesque transformation, in turn, reveals the general truth of the Heideggarian vision. Knowledge qua knowledge, Weinberger claims, is increasingly enmeshed in webs of discourse: culture-dependent and theory-free.

The causal force lying behind this massive sea change is, of course, the internet. Google search results — “9,560,000 results for ‘Heidegger’ in .71 seconds” — taunt you with the realization that there are still another 950,000-odd pages of results to get through before you reach the end. The existence of hyperlinks is enough to convince even the most stubborn positivist that there is always another side to the story. And on the web, fringe believers can always find each other and marinate in their own illusions. The “web world” is too big to ever know. There is always another link. In the era of the Internet, Weinberger argues, facts are not bricks. They are networks. (…)

The most important aspect of Heidegger’s thought for our purposes is his understanding that human beings (or rather “Dasein,” “being-in-the-world”) are always thrown into a particular context, existing within already existing language structures and pre-determined meanings. In other words, the world is like the web, and we, Dasein, live inside the links. (…)

If our starting point is that all knowledge is networked, and always has been, then we are in a far better position to start talking about what makes today’s epistemological infrastructure different from the infrastructure in 1983. But we are also in a position to ask: if all knowledge was networked knowledge, even in 1983, then how did we not behave as if it was so? How did humanity carry on? Why did civilization not collapse into a morass of post-modern chaos? Weinberger’s answer is, once again, McLuhanesque. It was the medium in which knowledge was contained that created the difference. Stable borders around knowledge were built by books.

I would posit a different answer: if knowledge has always been networked knowledge, then facts have never had stable containers. Most of the time, though, we more or less act as if they do. Within the philosophical subfield known as Actor-Network Theory (ANT) this “acting-as-if-stability-existed” is referred to as “black boxing.” One of the black boxes around knowledge might very well be the book. But black boxes can also include algorithms, census bureaus, libraries, laboratories, and news rooms. Black boxes emerge out of actually-existing knowledge networks, stabilize for a time, and unravel, and our goal as thinkers and scholars ought to be understanding how these nodes emerge and disappear. In other words, understanding changes to knowledge in this way leaves us far more sensitive to the operations of power than does the notoriously power-free perspective of Marshall McLuhan. (…)

Why don’t I care that the Google results page goes on towards infinity? If we avoid Marshall McLuhan’s easy answers to these complex questions, and retain the core of Heidegger’s brilliant insights while also adding a hefty dose of ontology to his largely immaterial philosophy, we might begin to understand the real operations of digital knowledge/power in a networked age.

Weinberger, however, does not care about power, and more or less admits this himself in a brilliant 2008 essay on the distinction between digital realists, utopians, and dystopians. Digital utopians, a group in which he includes himself, “point to the ways in which the Web has changed some of the basic assumptions about how we live together, removing old obstacles and enabling shiny new possibilities.” The realists, on the other hand, are rather dull: They argue that “the Web hasn’t had nearly as much effect as the utopians and dystopians proclaim. The Web carries with it certain possibilities and limitations, but (the realists say) not many more than other major communications medium.” Politically speaking, digital utopianism tantalizes us with the promise of what might be, and pushes us to do better. The political problem with the realist position, Weinberger argues, is that it “is … [a] decision that leans toward supporting the status quo because what-is is more knowable than what might be.”

The realist position, however, is not necessarily a position of quietude. Done well, digital realism can sensitize us to the fact that all networked knowledge systems eventually become brick walls, that these brick walls are maintained through technological, political, cultural, economic, and organizational forms of power. Our job, as thinkers and teachers, is not to stand back and claim that all the bricks have crumbled. Rather, our job is to understand how the wall gets built, and how we might try to build it differently.”

C.W. Anderson, Ph.D, an assistant professor in the Department of Media Culture at the College of Staten Island (CUNY), researcher at the Columbia University Graduate School of Journalism, The Difference Between Online Knowledge and Truly Open Knowledge, The Atlantic, Feb 3, 2012.

David Weinberger: ‘I think the Net generation is beginning to see knowledge in a way that is closer to the truth about knowledge’

"I think the Net generation is beginning to see knowledge in a way that is closer to the truth about knowledge — a truth we’ve long known but couldn’t instantiate. My generation, and the many generations before mine, have thought about knowledge as being the collected set of trusted content, typically expressed in libraries full of books. Our tradition has taken the trans-generational project of building this Library of Knowledge book by book as our God-given task as humans. Yet, for the coming generation, knowing looks less like capturing truths in books than engaging in never-settled networks of discussion and argument. That social activity — collaborative and contentious, often at the same time — is a more accurate reflection of our condition as imperfect social creatures trying to understand a world that is too big and too complex for even the biggest-headed expert.

This new topology of knowledge reflects the topology of the Net. The Net (and especially the Web) is constructed quite literally out of links, each of which expresses some human interest. If I link to a site, it’s because I think it matters in some way, and I want it to matter that way to you. The result is a World Wide Web with billions of pages and probably trillions of links that is a direct reflection of what matters to us humans, for better or worse. The knowledge networks that live in this new ecosystem share in that property; they are built out of, and reflect, human interest. Like our collective interests, the Web and the knowledge that resides there is at odds and linked in conversation. That’s why the Internet, for all its weirdness, feels so familiar and comfortable to so many of us. And that’s the sense in which I think networked knowledge is more “natural.” (…)

To make a smart room — a knowledge network — you have to have just enough diversity. And it has to be the right type of diversity. Scott Page in The Difference says that a group needs a diversity of perspectives and skill sets if it is going to be smarter than the smartest person in it. It also clearly needs a set of coping skills, norms, and procedures that enable it to deal with diversity productively. (…)

We humans can only see things from a point of view, and we can only understand things by appropriating them into our already-existing context. (…)

In fact, the idea of objectivity arose in response to the limitations of paper, as did so much of our traditional Western idea of knowledge. Paper is a disconnected medium. So, when you write a news story, you have to encapsulate something quite complex in just a relatively small rectangle of print. You know that the reader has no easy way to check what you’re saying, or to explore further on her own; to do so, she’ll have to put down the paper, go to a local library, and start combing through texts that are less current than the newspaper in which your article appears. The reporter was the one mediator of the world the reader would encounter, so the report had to avoid the mediator’s point of view and try to reflect all sides of contentious issues. Objectivity arose to address the disconnected nature of paper.

Our new medium is, of course, wildly connective. Now we can explore beyond the news rectangle just by clicking. There is no longer an imperative to squeeze the world into small, self-contained boxes. Hyperlinks remove the limitations that objectivity was invented to address.

Hyperlinks also enable readers to understand — and thus perhaps discount — the writer’s point of view, which is often a better way of getting past the writer’s prejudices than asking the writer to write as if she or he had none. This, of course, inverts the old model that assumed that if we knew about the journalist’s personal opinions, her or his work would be less credible. Now we often think that the work becomes more credible if the author is straightforward about his or her standpoint. That’s the sense in which transparency is the new objectivity.

There is still value in trying to recognize how one’s own standpoint and assumptions distort one’s vision of the world; emotional and conceptual empathy are of continuing importance because they are how we embody the truth that we share a world with others to whom that world matters differently. But we are coming to accept that we can’t really get a view from nowhere, and if we could, we would have no idea what we’re looking at. (…)

Our new ability to know the world at a scale never before imaginable may not bring us our old type of understanding, but understanding and knowledge are not motivated only by the desire to feel that sudden gasp of insight. The opposite and ancient motive is to feel the breath of awe in the face of the almighty unknowability of our universe. A knowing that recognizes its object is so vast that it outstrips understanding makes us more capable of awe. (…)

Technodeterminism is the claim that technology by itself has predictable, determinant effects on people or culture. (…) We still need to be able to discuss how a technology is affecting a culture in general. Generalizations can be a vehicle of truth, so long as they are understood to be only generally true. (…) The new knowledge continues to find generalities that connect individual instances, but because the new ecosystem is hyperlinked, we can go from the generalities back to the individual cases. And those generalizations are themselves linked into a system of difference and disagreement.”

David Weinberger, Ph.D. from the University of Toronto, American technologist, professional speaker, and commentator, interviewed by Rebecca J. Rosen, What the Internet Means for How We Think About the World, The Atlantic, Jan 5 2012.

See also:

To Know, but Not Understand: David Weinberger on Science and Big Data, The Atlantic, Jan 3, 2012 
When science becomes civic: Connecting Engaged Universities and Learning Communities, University of California, Davis, September 11 - 12, 2001
The Filter Bubble: Eli Pariser on What the Internet Is Hiding From You
A story about the Semantic Web (Web 3.0) (video)
Vannevar Bush on the new relationship between thinking man and the sum of our knowledge (1945)
George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’
The Relativity of Truth - a brief résumé, Lapidarium notes

Mar 20th, Tue · permalink

Nicholas Carr on the evolution of communication technology and our compulsive consumption of information

        

"The term “information age” gets across our sense that we’re engulfed in information in a way that is very different from anything that’s come before. (…)

I think it’s pretty clear that humans have a natural inclination, even compulsion, to seek out information. We want not only to be entertained but to know everything that is going on around us. And so as these different mass media have proliferated, we’ve gone along with the technology and consumed – to put an ugly term on it – more information. (…)

"In “The Shallows” I argue that the Internet fundamentally encourages very rapid gathering of small bits of information – the skimming and scanning of information to quickly get the basic gist of it. What it discourages are therefore the ways of thinking that require greater attentiveness and concentration, everything from contemplation to reflection to deep reading.

The Internet is a hypertext system, which means that it puts lots of links in a text. These links are valuable to us because they allow us to go very quickly between one bit of information and another. But there are studies that compare what happens when a person reads a printed page of text versus when you put links into that text. Even though we may not be conscious of it, a link represents a little distraction, a little division of attention. You can see in the evidence that reading comprehension goes down with hypertext versus plaintext. (…)

The reason why I start with Tom Standage’s book is because we tend to think of the information age as something entirely new. In fact, people have been wrestling with information for many centuries. If I was going to say when the information age started, I would probably say the 15th century with the invention of the mechanical clock, which turned time into a measurable flow, and the printing press, which expanded our ability to tap into other kinds of thinking. The information age has been building ever since then.

Standage covers one very important milestone in that story, which is the building of the telegraph system in the 19th century. The telegraph was the first really efficient system for long-distance, almost instantaneous communication. It’s a short book, a very lively read, and it shows how this ability to throw one’s thoughts across the world changed all aspects of society. It certainly changed the business world. Suddenly you could coordinate a business not just in a local area, but across the country or across oceans. It had a lot of social implications too, as people didn’t have to wait for letters to come over the course of days. And as Standage points out, it inspired a lot of the same hopes and concerns that we have today with the Internet. (…)

If “The Information” is a sprawling, sweeping story of how information has changed over time, one thing it doesn’t get into is the commercial nature of information as a good that is bought and sold. That’s the story Tim Wu tells in ”The Master Switch.” His basic argument is that whenever a new communication medium arises, a similar pattern occurs. The technology starts off as a hobbyist’s passion, democratic and open. Then over time, as it becomes more popular, it starts to be dominated by corporate interests and becomes much more formalised, before eventually being displaced by a new technology.

You see this with radio, for instance. In the beginning, radio was very much a hobbyist’s technology. When people bought a radio back then it wasn’t just a receiver, it was a transmitter. People would both receive and transmit information through their radio – it was an early version of the blogosphere in some ways. Then dominant radio corporations come in, and suddenly radio isn’t a democratic tool for transmitting and receiving information, it’s purely for receiving. Tim Wu tells a series of stories like this, including television. All of that history is really a backdrop for a discussion of the Internet, which Wu suggests will likely follow the same cycle.

So far, I think we’ve seen that. When the World Wide Web appeared 20 years ago, there was all kinds of utopian, democratic rhetoric about how it was breaking the hold of big corporations over media and communications. You saw a huge explosion of personal websites. But over time you saw corporate interests begin to dominate the web – Google, Facebook and so on. If you look at how much time a user devotes to Facebook, it shows a consolidation and centralisation of web activity onto these large corporate sites. (…)

Matthew Crawford argues that we’re losing our sense of importance of actual physical interaction with the natural world. He says that the richest kind of thinking that’s open to human beings is not thinking that takes place in the mind but thinking that involves both the mind and the body interacting with the world. Whereas when we’re sitting at our computer or looking at our smartphone, we’re in a world of symbols. It seems to me that one of the dangers of the Internet, and the way that the screen mediates all work and other kinds of processing, is that not only are we distancing ourselves from interaction with the world, but we’re beginning to lose sight of the fact that that’s even important. (…)

As more and more of the physical world is operated by software and computers, we shut off interacting with the world. Crawford, in addition to being a political philosopher, is also a motorcycle mechanic. And a lot of the book is simply stories of being a mechanic. One of the points he makes is that people used to know how their cars worked. They could open the hood, see all of the parts of their engine, change their own oil. Now when you open your hood you can’t touch anything and you don’t know how the thing works. We’ve allowed ourselves to be removed from the physical world. We’re told just to look at our GPS screen and forget how the engine works.

Q: A key point about the information age we should mention is that societies have moved from an industrial economy to a service economy, with more people in white-collar jobs and increasing income disparity as a result.

That’s absolutely true. More and more of our basic jobs, due to broad shifts in the economy, involve manipulating symbols, whether it’s words, numbers or images. That too serves to distance ourselves from manual manipulation of the world. We have offloaded all of those jobs to specialists in order to spend more time working with symbols.

Q: Tell us why you’re closing with Gary Shteyngart’s novel “Super Sad True Love Story.”

I think that novelists, and other artists, are only beginning to grapple with the implications of the Internet, smartphones and all of that. Literature provides a different and very valuable way of perceiving those implications, so I decided to end with a novel. This book is both funny and extremely horrifying. It’s set in a future that is very close in some ways to the present. Shteyngart takes phenomena and trends that are around us but we don’t even notice, pushes them a little more extreme, and suddenly it gives you a new way to think about not only where we’re heading but where we already are. (…)

As is true with most dystopian science fiction, I don’t think it’s an attempt to portray what’s going to happen. It’s more an insight into how much we and our societies have changed in a very short time, without really being aware of it. If somebody from even 10 years ago suddenly dropped into the world and saw us all walking down the street staring at these little screens, hitting them with our thumbs, it would seem very strange.

It is becoming more and more normal to monitor your smartphone even while having a conversation with a friend, spouse or child. A couple will go out to a restaurant and the first thing they will each do is stick their iPhone or Android on the table in front of them, basically announcing that they’re not going to give their full attention to the other person. So technology seems to be changing even our relationships and social expectations. (…)

Q: In a hundred years’ time, what do you think the legacy of the early Internet will be?

I think the legacy will both be of enormous benefits – particularly those that can be measured in terms of efficiency and productivity, but also the ability for people to communicate with others – and also of more troubling consequences. We are witnessing an erosion not only of privacy but of the sense that privacy of the individual is important. And we are seeing the commercialisation of processes of communication, affiliation and friendship that used to be considered intimate.

You’re probably right to talk about a hundred years to sort this all out. There are a whole lot of threads to the story that, being in the midst of it, are hard to see properly, and it’s difficult to figure out what the balance of good, bad and indifferent is.

Q: What’s next in the immediate five or 10 years for the information age?

More of the same. Overall I think the general trend, as exemplified by social networks and the evolution of Google, is towards ever smaller bits of information delivered ever more quickly to people who are increasingly compulsive consumers of media and communication products. So I would say more screens, smaller screens, more streams of information coming at us from more directions, and more of us adapting to that way of living and thinking, for better or worse.

Q: So we’re not at the apex of the information age? That peak is yet to come?

All indications are that we’re going to see more rather than less.”

Nicholas Carr, American writer, interviewed by Alec Ash, Our compulsive consumption of information, The Browser - Salon.com, Mar 19, 2012.

See also:

Does Google Make Us Stupid?
Nicholas Carr on what the internet is doing to our brains?
Nicholas Carr on Books That Are Never Done Being Written

Sep 11th, Sun · permalink

Supercomputer predicts revolution: Forecasting large-scale human behavior using global news media tone in time and space


Figure 1. Global geocoded tone of all Summary of World Broadcasts content, 2005.

"Feeding a supercomputer with news stories could help predict major world events, according to US research.

While the analysis was carried out retrospectively, scientists say the same processes could be used to anticipate upcoming conflict. (…)

The study’s information was taken from a range of sources including the US government-run Open Source Centre and BBC Monitoring, both of which monitor local media output around the world.

News outlets which published online versions were also analysed, as was the New York Times' archive, going back to 1945.

In total, Mr Leetaru gathered more than 100 million articles.

Reports were analysed for two main types of information: mood - whether the article represented good news or bad news, and location - where events were happening and the location of other participants in the story.

Mood detection, or “automated sentiment mining” searched for words such as “terrible”, “horrific” or “nice”.

Location, or “geocoding” took mentions of specific places, such as “Cairo” and converted them in to coordinates that could be plotted on a map.

Analysis of story elements was used to create an interconnected web of 100 trillion relationships. (…)

The computer event analysis model appears to give forewarning of major events, based on deteriorating sentiment.

However, in the case of this study, its analysis is applied to things that have already happened.

According to Kalev Leetaru, such a system could easily be adapted to work in real time, giving an element of foresight. (…)

"It looks like a stock ticker in many regards and you know what direction it has been heading the last few minutes and you want to know where it is heading in the next few.

"It is very similar to what economic forecasting algorithms do.” (…)

"The next iteration is going to city level and beyond and looking at individual groups and how they interact.

"I liken it to weather forecasting. It’s never perfect, but we do better than random guessing."

Supercomputer predicts revolution, BBC News, 9 September 2011
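
The two processing steps described above, “automated sentiment mining” and “geocoding,” can be illustrated with a minimal sketch. This is not Leetaru’s actual pipeline: the tiny word lists, the three-city gazetteer, and the function names below are invented for illustration, whereas the real study used large tone dictionaries and a full geographic gazetteer over more than 100 million articles.

    # Illustrative sketch only: toy lexicon-based "mood detection" and
    # dictionary-lookup "geocoding", in the spirit of the steps described
    # above. The word lists and the three-place gazetteer are invented.

    POSITIVE = {"nice", "good", "peaceful", "calm"}
    NEGATIVE = {"terrible", "horrific", "awful", "crisis"}

    # Toy gazetteer: place name -> (latitude, longitude)
    GAZETTEER = {
        "cairo": (30.04, 31.24),
        "tunis": (36.81, 10.17),
        "tripoli": (32.89, 13.19),
    }

    def tone(text):
        """Score a text from -1 (only negative words) to +1 (only positive)."""
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

    def geocode(text):
        """Return coordinates for every gazetteer place mentioned in the text."""
        words = set(text.lower().split())
        return [(place, GAZETTEER[place]) for place in GAZETTEER if place in words]

    article = "Horrific clashes in Cairo as the crisis deepens"
    print(tone(article))     # -1.0: the only sentiment words found are negative
    print(geocode(article))  # [('cairo', (30.04, 31.24))]

Scaled up, per-article tone scores and place coordinates of this kind are what get aggregated into the geocoded tone maps and country-level tone series discussed below.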

Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space

"News is increasingly being produced and consumed online, supplanting print and broadcast to represent nearly half of the news monitored across the world today by Western intelligence agencies. Recent literature has suggested that computational analysis of large text archives can yield novel insights to the functioning of society, including predicting future economic events. Applying tone and geographic analysis to a 30–year worldwide news archive, global news tone is found to have forecasted the revolutions in Tunisia, Egypt, and Libya, including the removal of Egyptian President Mubarak, predicted the stability of Saudi Arabia (at least through May 2011), estimated Osama Bin Laden’s likely hiding place as a 200–kilometer radius in Northern Pakistan that includes Abbotabad, and offered a new look at the world’s cultural affiliations. Along the way, common assertions about the news, such as “news is becoming more negative” and “American news portrays a U.S.–centric view of the world” are found to have merit.”

"The emerging field of “Culturomics” seeks to explore broad cultural trends through the computerized analysis of vast digital book archives, offering novel insights into the functioning of human society (Michel, et al., 2011). Yet, books represent the “digested history” of humanity, written with the benefit of hindsight. People take action based on the imperfect information available to them at the time, and the news media captures a snapshot of the real–time public information environment (Stierholz, 2008). News contains far more than just factual details: an array of cultural and contextual influences strongly impact how events are framed for an outlet’s audience, offering a window into national consciousness (Gerbner and Marvanyi, 1977). A growing body of work has shown that measuring the “tone” of this real–time consciousness can accurately forecast many broad social behaviors, ranging from box office sales (Mishne and Glance, 2006) to the stock market itself (Bollen, et al., 2011). (…)


Figure 2. Global geocoded tone of all Summary of World Broadcasts content, January 1979–April 2011 mentioning “bin Laden”

Most theories of civilizations feature some approximation of the degree of conflict or cooperation between each group. Figure 3 displays the average tone of all links between cities in each civilization, visualizing the overall “tone” of the relationship between each. Group 1, which roughly encompasses the Asiatic and Australian regions, has largely positive links to the rest of the world and is the only group with a positive connection to Group 4 (Middle East). Group 3 (Africa) has no positive links to any other civilization, while Group 2  (North and South America excluding Canada) has negative links to all but Group 1. As opposed to explicit measures of conflict or cooperation based on armed conflict or trade ties, this approach captures the latent view of conflict and cooperation as portrayed by the world’s news media.

            
Figure 3. Average tone of links between world “civilizations” according to SWB, 1979–2009.

Figure 4 shows the world civilizations according to the New York Times 1945–2005. It divides the world into five civilizations, but paints a very different picture of the world, with a far greater portion of the global landmass arrayed around the United States. Geographic affinity appears to play a far lesser role in this grouping, and the majority of the world is located in a single cluster with the United States. It is clear from comparing the SWB and NYT civilization maps that even within the news media there is no one “universal” set of civilizations, but that each country’s media system may portray the world very differently to its audience. By pooling all of these varied viewpoints together, SWB’s view of the world’s civilizations offers a “crowdsourced” aggregate view of civilization, but it too is likely subject to some innate Western bias.


Figure 4. World “civilizations” according to NYT, 1945–2005.

Monitoring first broadcast then print media over the last 70 years, nearly half of the annual output of Western intelligence global news monitoring is now derived from Internet–based news, standing testament to the Web’s disruptive power as a distribution medium. Pooling together the global tone of all news mentions of a country over time appears to accurately forecast its near–term stability, including predicting the revolutions in Egypt, Tunisia, and Libya, conflict in Serbia, and the stability of Saudi Arabia.

Location plays a critical role in news reporting, and “passively crowdsourcing” the media to find the locations most closely associated with Bin Laden prior to his capture finds a 200 km-wide swath of northern Pakistan as his most likely hiding place, an area which contains Abbottabad, the city he was ultimately captured in. Finally, the geographic clustering of the news, the way in which it frames localities together, offers new insights into how the world views itself and the “natural civilizations” of the news media.

While heavily biased and far from complete, the news media captures the only cross–national real–time record of human society available to researchers. The findings of this study suggest that Culturomics, which has thus far focused on the digested history of books, can yield intriguing new understandings of human society when applied to the real–time data of news. From forecasting impending conflict to offering insights on the locations of wanted fugitives, applying data mining approaches to the vast historical archive of the news media offers promise of new approaches to measuring and understanding human society on a global scale.”

Kalev Leetaru is Senior Research Scientist for Content Analysis at the Institute for Computing in the Humanities, Arts, and Social Science at the University of Illinois, Center Affiliate of the National Center for Supercomputing Applications, and Research Coordinator at the University of Illinois Cline Center for Democracy. His award-winning work centers on the application of high performance computing to grand challenge problems using news and open source intelligence. He holds three US patents and more than 50 University Invention Disclosures.

To see the full research, see Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space, First Monday (University of Illinois at Chicago), Volume 16, Number 9, 5 September 2011.
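
The forecasting side of the study, pooling the tone of all news mentions of a country over time and watching for deteriorating sentiment much like a stock ticker, can also be sketched in a few lines. The rolling windows, the threshold, and the sample series below are my own illustrative assumptions, not parameters taken from the paper.

    # Illustrative sketch only: a rolling-average "tone ticker" that flags a
    # country when the recent mean tone of its news coverage drops well below
    # its longer-term baseline. Window sizes, the threshold and the sample
    # series are arbitrary assumptions, not values from the study.

    from collections import deque
    from statistics import mean

    def deteriorating_tone(tone_series, short_win=7, long_win=30, drop=0.5):
        """Yield (day, recent, baseline) wherever the short-window mean tone
        sinks below the long-window baseline by more than `drop`."""
        window = deque(maxlen=long_win)
        for day, t in enumerate(tone_series):
            window.append(t)
            if len(window) == long_win:
                recent = mean(list(window)[-short_win:])
                baseline = mean(window)
                if recent < baseline - drop:
                    yield day, recent, baseline

    # Invented daily tone values: stable coverage, then a sharp slide.
    series = [0.1] * 40 + [-0.8] * 10
    for day, recent, baseline in deteriorating_tone(series):
        print(f"day {day}: recent tone {recent:.2f} vs baseline {baseline:.2f}")

On the invented series, the flag fires only once the short-run average tone falls well below the longer-run baseline, which is the qualitative behaviour the article describes for countries heading toward unrest.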

See also:

Culturomics: Quantitative Analysis of Culture Using Millions of Digitized Books

"Construct a corpus of digitized texts containing about 4% of all books ever printed, and then analyze that corpus using advanced software and the investigatory curiosity of thousands, and you get something called "Culturomics," a field in which cultural trends are represented quantitatively.

In this talk Erez Lieberman Aiden and Jean-Baptiste Michel — co-founders of the Cultural Observatory at Harvard and Visiting Faculty at Google — show how culturomics can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology.”

— E. Lieberman Aiden, Harvard Society of Fellows & Jean-Baptiste Michel, FQEB Fellow at Harvard, Culturomics: Quantitative Analysis of Culture Using Millions of Digitized Books, May 10, 2011

See also:

What we learned from 5 million books, TED.com, 2011 (video)

Jul 15th, Fri · permalink

How the Internet Affects Our Memories: Cognitive Consequences of Having Information at Our Fingertips

"Before the printed book, Memory ruled daily life… (…)

The elder Seneca (c. 55 B.C.-A.D. 37), a famous teacher of rhetoric, was said to be able to repeat long passages of speeches he had heard only once many years before. He would impress his students by asking each member of a class of two hundred to recite lines of poetry, and then he would recite all the lines they had quoted—in reverse order, from last to first.”

Daniel Boorstin, American historian, professor, attorney, and writer (1914-2004), The Discoverers, Random House, 1983

Abstract:

"The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”

***

"We investigate whether the Internet has become an external memory system that is primed by the need to acquire information. If asked the question whether there are any countries with only one color in their flag, for example, do we think about flags—or immediately think to go online to find out? Our research then tested if, once information has been accessed, our internal encoding is increased for where the information is to be found rather than for the information itself. (…)

Participants apparently did not make the effort to remember when they thought they could later look up the trivia statements they had read. Since search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up. (…)

The social form of information storage is also reflected in the findings that people forget items they think will be available externally, and remember items they think will not be available. And transactive memory is also evident when people seem better able to remember which computer folder an item has been stored in than the identity of the item itself. These results suggest that processes of human memory are adapting to the advent of new computing and communication technology. Just as we learn through transactive memory who knows what in our families and offices, we are learning what the computer “knows” and when we should attend to where we have stored information in our computer-based memories. We are becoming symbiotic with our computer tools, growing into interconnected systems that remember less by knowing information than by knowing where the information can be found. This gives us the advantage of access to a vast range of information—although the disadvantages of being constantly “wired” are still being debated.

It may be no more than nostalgia at this point, however, to wish we were less dependent on our gadgets. We have become dependent on them to the same degree we are dependent on all the knowledge we gain from our friends and coworkers—and lose if they are out of touch. The experience of losing our Internet connection becomes more and more like losing a friend. We must remain plugged in to know what Google knows.”

— B. Sparrow (Department of Psychology, Columbia University), J. Liu (University of Wisconsin–Madison), D. M. Wegner (Harvard University), ☞ Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips, Science, 5 August 2011: Vol. 333 no. 6043, pp. 776-778

Daniel M. Wegner comment:

"Groups of people commonly depend on one another for memory in this way — not by all knowing the same thing, but by specializing. And now we’ve added our computing devices to the network, depending for memory not just on people but also on a cloud of linked people and specialized information-filled devices.

We have all become a great cybermind. As long as we are connected to our machines through talk and keystrokes, we can all be part of the biggest, smartest mind ever. It is only when we are trapped for a moment without our Internet link that we return to our own humble little personal minds, tumbling back to earth from our flotation devices in the cloud.”

Daniel M. Wegner, professor of psychology at Harvard University, Don’t Fear the Cybermind, The New York Times, Aug 4, 2012 

The Extended Mind

"Why do we engage in reconsolidation? One theory is that reconsolidation helps ensure our memories are kept up to date, interpreted in light of recent experience. The brain has no interest in immaculate recall – it’s only interested in the past to the extent it helps us make sense of the future. By having memories that constantly change, we ensure that the memories stored inside our mental file cabinets are mostly relevant.

Of course, reconsolidation theory poses problems for the fidelity of memory. Although our memories always feel true – like a literal recording of the past – they’re mostly not, since they’re always being edited and bent by what we think now. And now. And now. (See the work of Elizabeth Loftus for more on memory inaccuracy.)

And this is where the internet comes in. One of the virtues of transactive memory is that it acts like a fact-check, helping ensure we don’t all descend into selfish solipsism. By sharing and comparing our memories, we can ensure that we still have some facts in common, that we all haven’t disappeared down the private rabbit hole of our own reconsolidations. In this sense, instinctually wanting to Google information – to not entrust trivia to the fallible brain – is a perfectly healthy impulse. (I’ve used Google to correct my errant memories thousands of times.) I don’t think it’s a sign that technology is rotting our cortex – I think it shows that we’re wise enough to outsource a skill we’re not very good at. Because while the web enables all sorts of other biases – it lets us filter news, for instance, to confirm what we already believe – the use of the web as a vessel of transactive memory is mostly virtuous. We save hard drive space for what matters, while at the same time improving the accuracy of recall.

PS. If you’d like a contrarian take, here’s Nicholas Carr:

If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn’t much matter. But external storage and biological memory are not the same thing. When we form, or “consolidate,” a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but “the cohesion” which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?”

Jonah Lehrer, American journalist who writes on the topics of psychology and neuroscience, Is Google Ruining Your Memory?, Wired.com, July 15, 2011

See also:

☞ Amara D. Angelica, Google is destroying your memory, KurzweilAI
Thomas Metzinger on How Has The Internet Changed The Way You Think
Rolf Fobelli: News is to the mind what sugar is to the body, Lapidarium notes
Luciano Floridi on the future development of the information society
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Memory tag on Lapidarium notes

permalink

Thomas Metzinger on How Has The Internet Changed The Way You Think

"Attention management. The ability to attend to our environment, to our own feelings, and to those of others is a naturally evolved feature of the human brain. Attention is a finite commodity, and it is absolutely essential to living a good life. We need attention in order to truly listen to others — and even to ourselves. We need attention to truly enjoy sensory pleasures, as well as for efficient learning. We need it in order to be truly present during sex, or to be in love, or when we are just contemplating nature. Our brains can generate only a limited amount of this precious resource every day. Today, the advertisement and entertainment industries are attacking the very foundations of our capacity for experience, drawing us into the vast and confusing media jungle. They are trying to rob us of as much of our scarce resource as possible, and they are doing so in ever more persistent and intelligent ways. We know all that. But here is something we are just beginning to understand — that the Internet affects our sense of selfhood, and on a deep functional level.

Consciousness is the space of attentional agency: Conscious information is exactly that information in your brain to which you can deliberately direct your attention. As an attentional agent, you can initiate a shift in attention and, as it were, direct your inner flashlight at certain targets: a perceptual object, say, or a specific feeling. In many situations, people lose the property of attentional agency, and consequently their sense of self is weakened. Infants cannot control their visual attention; their gaze seems to wander aimlessly from one object to another, because this part of their Ego is not yet consolidated. Another example of consciousness without attentional control is the non-lucid dream. In other cases, too, such as severe drunkenness or senile dementia, you may lose the ability to direct your attention — and, correspondingly, feel that your “self” is falling apart.

If it is true that the experience of controlling and sustaining your focus of attention is one of the deeper layers of phenomenal selfhood, then what we are currently witnessing is not only an organized attack on the space of consciousness per se but a mild form of depersonalization. New medial environments may therefore create a new form of waking consciousness that resembles weakly subjective states — a mixture of dreaming, dementia, intoxication, and infantilization. Now we all do this together, every day. I call it Public Dreaming.”

Thomas Metzinger, German philosopher, department of philosophy at the Johannes Gutenberg University of Mainz, answering the question "How Has The Internet Changed The Way You Think?", Edge, 2010

See also:

Nicholas Carr on what the internet is doing to our brains?
Does Google Make Us Stupid?
The extended mind – how the Internet affects our memories
☞ Neuroscientist Gary Small, Is Google Making Us Smarter?, (video) World Science Festival, June 2010
William Deresiewicz on multitasking and the value of solitude

Jul 3rd, Sun · permalink

George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’

    

"Metaphor is a fundamental mechanism of mind, one that allows us to use what we know about our physical and social experience to provide understanding of countless other subjects. Because such metaphors structure our most basic understandings of our experience, they are “metaphors we live by”—metaphors that can shape our perceptions and actions without our ever noticing them. (…)

We are neural beings, (…) our brains take their input from the rest of our bodies. What our bodies are like and how they function in the world thus structures the very concepts we can use to think. We cannot think just anything – only what our embodied brains permit. (…)

The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.”

George Lakoff, cited in Daniel Lende, Brainy Trees, Metaphorical Forests: On Neuroscience, Embodiment, and Architecture, Neuroanthropology, Jan 10, 2012.

"For Lakoff, language is not a neutral system of communication, because it is always based on frames, conceptual metaphors, narratives, and emotions. Political thought and language is inherently moral and emotional. (…)

The way people really reason — Real Rationality — comes from new understandings of the brain, something that up-to-date marketers have already taken on board. Enlightenment reason, we now know, was a false theory of rationality.

“Most thought is unconscious. It doesn’t work by mathematical logic. You can’t reason directly about the world—because you can only conceptualize what your brain and body allow, and because ideas are structured using frames,” Lakoff says. “As Charles Fillmore has shown, all words are defined in terms of conceptual frames, not in terms of some putative objective, mind-free world.”

“People really reason using the logic of frames, metaphors, and narratives, and real decision making requires emotion, as Antonio Damasio showed in Descartes’ Error.” 

“A lot of reason does not serve self interest, but is rather about empathizing with and connecting to others.”

People Don’t Decide Using ‘Just the Facts’

Contemporary explanatory journalism, in particular, is prone to the false belief that if the facts are presented to people clearly enough, they will accept and act upon them, Lakoff says. “In the ‘marketplace of ideas’ theory, the best factually based logical argument will always win. But this doesn’t actually happen.”

“Journalists always wonder, ‘We’ve reported on all the arguments, why do people vote wrong?’” Lakoff says. “They’ve missed the main event.”

Many journalists think that “framing” a story or issue is “just about choices of words and manipulation,” and that one can report factually and neutrally without framing. But language itself isn’t neutral. If you study the way the brain processes language, Lakoff says, “every word is defined with respect to frames. You’re framing all the time.” Morality and emotion are already embedded in the way people think and the way people perceive certain words—and most of this processing happens unconsciously. “You can only learn things that fit in with what your brain will allow,” Lakoff says.

A recent example? The unhappy phrase “public option.”

“When you say public, it means ‘government’ to conservatives,” Lakoff explains. “When you say ‘option,’ it means two things: it’s not necessary, it’s just an ‘option,’ and secondly it’s a public policy term, a bureaucratic term. To conservatives, ‘public option’ means government bureaucracy, the very worst thing you could have named this. They could have called it the America Plan. They could have called it doctor-patient care.”

According to Lakoff, because of the conservative success in shaping public discourse through their elaborate communication system, the most commonly used words often have been given a conservative meaning. “Tax relief,” for example, suggests that taxation is an affliction to be relieved.

Don’t Repeat the Language Politicians Use: Decode It

Instead of simply adopting the language politicians use to frame an issue, Lakoff argues, journalists need to analyze the language political figures use and explain the moral content of particular words and arguments.

That means, for example, not just quoting a politician about whether a certain policy infringes or supports American “liberty,” but explaining what he or she means by “liberty,” how this conception of liberty fits into the politician’s overall moral outlook, and how it contrasts with other conceptions of liberty.

It also means spelling out the full implications of the metaphors politicians choose. In the recent coverage of health care reform, Lakoff says, one of the “hidden metaphors” that needed to be explored was whether politicians were talking about healthcare as a commodity or as a necessity and a right.

Back on the 2007 presidential campaign trail, Lakoff pointed out, Rudy Giuliani called Obama’s health care plans “socialist,” while he himself compared buying health care to buying a flatscreen tv set, using the metaphor of health care as a commodity, not a necessity. A few liberal bloggers were outraged, but several newspapers reported his use of the metaphor without comment or analysis, rather than exploring what it revealed about Giuliani’s worldview. (…)

A Dictionary of the Real Meanings of Words

What would a nonpartisan explanatory journalism be like? To make nonpartisan decoding easier, Lakoff thinks journalists should create an online dictionary of the different meanings of words—“not just a glossary, but a little Wikipedia-like website,” as he puts it. This site would have entries to explain the differences between the moral frameworks of conservatives and progressives, and what they each typically mean when they say words like “freedom.” Journalists across the country could link to the site whenever they sensed a contested word.

A project like this would generate plenty of resistance, Lakoff acknowledges. “What that says is most people don’t know what they think. That’s extremely scary…the public doesn’t want to be told, ‘You don’t know what you think.’” The fact is that about 98 percent of thought is unconscious.”

But, he says, people are also grateful when they’re told what’s really going on, and why political figures reason as they do. He would like to see a weekly column in the New York Times and other newspapers decoding language and framing, and analyzing what can and cannot be said politically, and he’d also like to see cognitive science and the study of framing added to journalism school curricula.

Ditch Objectivity, Balance, and ‘The Center’

Lakoff has two further sets of advice for improving explanatory journalism. The first is to ditch journalism’s emphasis on balance. Global warming and evolution are real. Unscientific views are not needed for “balance.”

“The idea that truth is balanced, that objectivity is balanced, is just wrong,” Lakoff says. Objectivity is a valuable ideal when it means unbiased reporting, Lakoff argues. But too often, the need for objectivity means that journalists hide their own judgments of an issue behind “public opinion.” The journalistic tradition of “always having to get a quote from somebody else” when the truth is obvious is foolish, Lakoff says.

So is the naïve reporting of poll data, since poll results can change drastically depending on the language and the framing of the questions. The framing of the questions should be part of reporting on polls.

Finally, Lakoff’s research suggests that many Americans, perhaps 20 per cent, are “biconceptuals” who have both conservative and liberal moral systems in their brains, but apply them to different issues. In some cases they can switch from one ideological position to another, based on the way an issue is framed. These biconceptuals occupy the territory that’s usually labeled “centrist.” “There isn’t such a thing as ‘the center.’ There are just people who are conservative on some issues and liberal on others, with lots of variations occurring. Journalists accept the idea of a “center” with its own ideology, and that’s just not the case,” he says.

Journalists tell “stories.” Those stories are often narratives framed from a particular moral or political perspective. Journalists need to be more upfront about the moral and political underpinnings of the stories they write and the angles they choose.

Journalism Isn’t Neutral–It’s Based on Empathy

“Democracy is based on empathy, with people not just caring, but acting on that care —having social as well as personal responsibility…That’s a view that many journalists have. That’s the reason they become journalists rather than stockbrokers. They have a certain view of democracy. That’s why a lot of journalists are liberals. They actually care about how politics can hurt people, about the social causes of harm. That’s a really different view than the conservative view: if you get hurt and you haven’t taken personal responsibility, then you deserve to get hurt—as when you sign on to a mortgage you can’t pay. Investigative journalism is very much an ethical enterprise, and I think journalists have to ask themselves, ‘What is the ethics behind the enterprise?’ and not be ashamed of it.” Good investigative journalism uncovers real facts, but is done, and should be done, with a moral purpose.

To make a moral story look objective, “journalists tend to pin moral reactions on other people: ‘I’m going to find someone around here who thinks it’s outrageous’…This can make outrageous moral action into a matter of public opinion rather than ethics.”

In some ways, Lakoff’s suggestions were in line with the kind of journalism that one of our partners, the non-profit investigative journalism outlet ProPublica, already does. In its mission statement, ProPublica makes its commitment to “moral force” explicit. “Our work focuses exclusively on truly important stories, stories with ‘moral force,’” the statement reads. “We do this by producing journalism that shines a light on exploitation of the weak by the strong and on the failures of those with power to vindicate the trust placed in them.”

He emphasized the importance of doing follow-ups to investigative stories, rather than letting the public become jaded by a constant succession of outrages that flare on the front page and then disappear. Most of ProPublica’s investigations are ongoing and continually updated on its site.

‘Cognitive Explanation’: A Different Take on ProPublica’s Mission

But Lakoff also had some very nontraditional suggestions about what it would mean for ProPublica to embark on a different kind of explanatory journalism project. “There are two different forms of explanatory journalism. One is material explanation — the kind of investigative reporting now done at ProPublica: who got paid what by whom, what actions resulted in harm, and so on. All crucial,” he noted. “But equally crucial, and not done, is cognitive and communicative explanation.”

“Cognitive explanation depends on what conceptual system lies behind political positions on issues and how the working of people’s brains explains their political behavior. For example, since every word of political discourse evokes a frame and the moral system behind it, the superior conservative communication system reaches most Americans 24/7/365. The more one hears conservative language and not liberal language, the more the brains of those listening get changed. Conservative communication with an absence of liberal communication exerts political pressure on Democrats whose constituents hear conservative language all day every day. Explanatory journalism should be reporting on the causal effects of conservative framing and the conservative communicative superiority.”

“ProPublica seems not to be explicit about conflicting views of what constitutes ‘moral force.’ ProPublica does not seem to be covering the biggest story in the country, the split over what constitutes morality in public policy. Nor is it clear that ProPublica studies the details of framing that permeate public discourse. Instead, ProPublica assumes a view of ‘moral force’ in deciding what to cover and how to cover it.

“For example, ProPublica has not covered the difference in moral reasoning behind the conservative and progressive views on tax policy, health care, global warming and energy policy, and so on for major issue after major issue.

“ProPublica also is not covering a major problem in policy-making — the assumption of classical views of rationality and the ways they have been scientifically disproved in the cognitive and brain sciences.

“ProPublica has not reported on the disparity between the conservative and liberal communication systems, nor has it covered the globalization of conservatism — the international exportation of American conservative strategists, framing, training, and communication networks.

“When ProPublica uncovers facts about organ transplants and nursing qualifications, that’s fine. But where is ProPublica on the reasons for the schisms in our politics? Explanatory journalism demands another level of understanding.

“ProPublica, for all its many virtues, has room for improvement, in much the same way as journalism in general — especially in explanatory journalism. Cognitive and communicative explanation must be added to material explanation.”

What Works In the Brain: Narrative & Metaphor

As for creating Explanatory Journalism that resonates with the way people process information, Lakoff suggested two familiar tools: narrative and metaphor.

The trick to finding the right metaphors for complicated systems, he said, is to figure out what metaphors the experts themselves use in the way they think. “Complex policy is usually understood metaphorically by people in the field,” Lakoff says. What’s crucial is learning how to distinguish the useful frames from the distorting or overly simplistic ones.

As for explaining policy, Lakoff says, “the problem with this is that policy is made in a way that is not understandable…Communication is always seen as last, as the tail on the dog, whereas if you have a policy that people don’t understand, you’re going to lose. What’s the point of trying to get support for a major health care reform if no one understands it?”

One of the central problems with policy, Lakoff says, is that policy-makers tend to take their moral positions so much for granted that the policies they develop seem to them like the “merely practical” things to do.

Journalists need to restore the real context of policy, Lakoff says, by trying “to get people in the government and policy-makers in the think tanks to understand and talk about what the moral basis of their policy is, and to do this in terms that are understandable.”

George Lakoff, American cognitive linguist and professor of linguistics at the University of California, Berkeley, interviewed by Lois Beckett in Explain yourself: George Lakoff, cognitive linguist, explainer.net, 31 January, 2011 (Illustration source)

See also:

Professor George Lakoff: Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks, Lapidarium notes
☞ Metaphor tag on Lapidarium notes

Jun
18th
Sat
permalink

William Deresiewicz on the meaning of friendship in our time

Achilles binding his friend Patroclus’ wounds, by the Sosias Painter

"…[a] numberless multitude of people, of whom no one was close, no one was distant. …" — Leo Tolstoy, War and Peace

We live at a time when friendship has become both all and nothing at all. Already the characteristically modern relationship, it has in recent decades become the universal one: the form of connection in terms of which all others are understood, against which they are all measured, into which they have all dissolved. (…)

The Facebook phenomenon, so sudden and forceful a distortion of social space, needs little elaboration. Having been relegated to our screens, are our friendships now anything more than a form of distraction? When they’ve shrunk to the size of a wall post, do they retain any content? If we have 768 “friends,” in what sense do we have any? Facebook isn’t the whole of contemporary friendship, but it sure looks a lot like its future. Yet Facebook—and MySpace, and Twitter, and whatever we’re stampeding for next—are just the latest stages of a long attenuation. They’ve accelerated the fragmentation of consciousness, but they didn’t initiate it. They have reified the idea of universal friendship, but they didn’t invent it. In retrospect, it seems inevitable that once we decided to become friends with everyone, we would forget how to be friends with anyone. (…)

The idea of friendship in ancient times

How did we come to this pass? The idea of friendship in ancient times could not have been more different. Achilles and Patroclus, David and Jonathan, Virgil’s Nisus and Euryalus: Far from being ordinary and universal, friendship, for the ancients, was rare, precious, and hard-won. In a world ordered by relations of kin and kingdom, its elective affinities were exceptional, even subversive, cutting across established lines of allegiance. David loved Jonathan despite the enmity of Saul; Achilles’ bond with Patroclus outweighed his loyalty to the Greek cause. Friendship was a high calling, demanding extraordinary qualities of character—rooted in virtue, for Aristotle and Cicero, and dedicated to the pursuit of goodness and truth. And because it was seen as superior to marriage and at least equal in value to sexual love, its expression often reached an erotic intensity. Jonathan’s love, David sang, “was more wondrous to me than the love of women.” Achilles and Patroclus were not lovers—the men shared a tent, but they shared their beds with concubines—they were something greater. Achilles refused to live without his friend, just as Nisus died to avenge Euryalus, and Damon offered himself in place of Pythias.

The rise of Christianity put the classical ideal in eclipse. Christian thought discouraged intense personal bonds, for the heart should be turned to God. Within monastic communities, particular attachments were seen as threats to group cohesion. In medieval society, friendship entailed specific expectations and obligations, often formalized in oaths. Lords and vassals employed the language of friendship. “Standing surety”—guaranteeing a loan, as in The Merchant of Venice—was a chief institution of early modern friendship. Godparenthood functioned in Roman Catholic society (and, in many places, still functions) as a form of alliance between families, a relationship not between godparent and godchild, but godparent and parent. In medieval England, godparents were “godsibs”; in Latin America, they are “compadres,” co-fathers, a word we have taken as synonymous with friendship itself.

The classical notion of friendship was revived, along with other ancient modes of feeling, by the Renaissance. Truth and virtue, again, above all: “Those who venture to criticize us perform a remarkable act of friendship,” wrote Montaigne, “for to undertake to wound and offend a man for his own good is to have a healthy love for him.” His bond with Étienne, he avowed, stood higher not only than marriage and erotic attachment, but also than filial, fraternal, and homosexual love. “So many coincidences are needed to build up such a friendship, that it is a lot if fortune can do it once in three centuries.” The highly structured and, as it were, economic nature of medieval friendship explains why true friendship was held to be so rare in classical and neoclassical thought: precisely because relations in traditional societies were dominated by interest. Thus the “true friend” stood against the self-interested “flatterer” or “false friend,” as Shakespeare sets Horatio—”more an antique Roman than a Dane”—against Rosencrantz and Guildenstern. Sancho Panza begins as Don Quixote’s dependent and ends as his friend; by the close of their journey, he has come to understand that friendship itself has become the reward he was always seeking. (…)

“Don Quixote and Sancho Panza,” by Alexandre-Gabriel Decamps, The Gallery Collection, Corbis

Friendship in modern society

Then came the growth of democracy, an ideology of universal equality and inter-involvement. We are citizens now, not subjects, bound together directly rather than through allegiance to a monarch. But what is to bind us emotionally, make us something more than an aggregate of political monads? One answer was nationalism, but another grew out of the 18th-century notion of social sympathy: friendship, or at least, friendliness, as the affective substructure of modern society. It is no accident that “fraternity” made a third with liberty and equality as the watchwords of the French Revolution. Wordsworth in Britain and Whitman in America made visions of universal friendship central to their democratic vistas. For Mary Wollstonecraft, the mother of feminism, friendship was to be the key term of a renegotiated sexual contract, a new domestic democracy.

Now we can see why friendship has become the characteristically modern relationship. Modernity believes in equality, and friendships, unlike traditional relationships, are egalitarian. Modernity believes in individualism. Friendships serve no public purpose and exist independent of all other bonds. Modernity believes in choice. Friendships, unlike blood ties, are elective; indeed, the rise of friendship coincided with the shift away from arranged marriage. Modernity believes in self-expression. Friends, because we choose them, give us back an image of ourselves. Modernity believes in freedom. Even modern marriage entails contractual obligations, but friendship involves no fixed commitments. The modern temper runs toward unrestricted fluidity and flexibility, the endless play of possibility, and so is perfectly suited to the informal, improvisational nature of friendship. We can be friends with whomever we want, however we want, for as long as we want.

Social changes play into the question as well. As industrialization uprooted people from extended families and traditional communities and packed them into urban centers, friendship emerged to salve the anonymity and rootlessness of modern life. (…)

Both look to friends to replace the older structures. Friends may be “the family we choose,” as the modern proverb has it, but for many of us there is no choice but to make our friends our family, since our other families—the ones we come from or the ones we try to start—have fallen apart. When all the marriages are over, friends are the people we come back to. And even those who grow up in a stable family and end up creating another one pass more and more time between the two. We have yet to find a satisfactory name for that period of life, now typically a decade but often a great deal longer, between the end of adolescence and the making of definitive life choices. But the one thing we know is that friendship is absolutely central to it.

Inevitably, the classical ideal has faded. The image of the one true friend, a soul mate rare to find but dearly beloved, has completely disappeared from our culture. We have our better or lesser friends, even our best friends, but no one in a very long time has talked about friendship the way Montaigne and Tennyson did. That glib neologism “bff,” which plays at a lifelong avowal, bespeaks an ironic awareness of the mobility of our connections: Best friends forever may not be on speaking terms by this time next month. We save our fiercest energies for sex. Indeed, between the rise of Freudianism and the contemporaneous emergence of homosexuality to social visibility, we’ve taught ourselves to shun expressions of intense affection between friends—male friends in particular, though even Oprah was forced to defend her relationship with her closest friend—and have rewritten historical friendships, like Achilles’ with Patroclus, as sexual. For all the talk of “bromance” lately (or “man dates”), the term is yet another device to manage the sexual anxiety kicked up by straight-male friendships—whether in the friends themselves or in the people around them—and the typical bromance plot instructs the callow bonds of youth to give way to mature heterosexual relationships. At best, intense friendships are something we’re expected to grow out of. (…)

We seem to be terribly fragile now. A friend fulfills her duty, we suppose, by taking our side—validating our feelings, supporting our decisions, helping us to feel good about ourselves. We tell white lies, make excuses when a friend does something wrong, do what we can to keep the boat steady. We’re busy people; we want our friendships fun and friction-free.

The group friendship or friendship circle

Yet even as friendship became universal and the classical ideal lost its force, a new kind of idealism arose, a new repository for some of friendship’s deepest needs: the group friendship or friendship circle. Companies of superior spirits go back at least as far as Pythagoras and Plato and achieved new importance in the salons and coffeehouses of the 17th and 18th centuries, but the Romantic age gave them a fresh impetus and emphasis. The idea of friendship became central to their self-conception, whether in Wordsworth’s circle or the “small band of true friends” who witness Emma’s marriage in Austen. And the notion of superiority acquired a utopian cast, so that the circle was seen—not least because of its very emphasis on friendship—as the harbinger of a more advanced age. The same was true, a century later, of the Bloomsbury Group, two of whose members, Woolf and Forster, produced novel upon novel about friendship. It was the latter who famously enunciated the group’s political creed. “If I had to choose between betraying my country and betraying my friend,” he wrote, “I hope I should have the guts to betray my country.” Modernism was the great age of the coterie, and like the legendary friendships of antiquity, modernist friendship circles—bohemian, artistic, transgressive—set their face against existing structures and norms. Friendship becomes, on this account, a kind of alternative society, a refuge from the values of the larger, fallen world.

The belief that the most significant part of an individual’s emotional life properly takes place not within the family but within a group of friends began to expand beyond the artistic coterie and become general during the last half of the 20th century. The Romantic-Bloomsburyan prophecy of society as a set of friendship circles was, to a great extent, realized. Mary McCarthy offered an early and tart view of the desirability of such a situation in The Group; Barry Levinson, a later, kinder one in Diner. Both works remind us that the ubiquity of group friendship owes a great deal to the rise of youth culture. Indeed, modernity associates friendship itself with youth, a time of life it likewise regards as standing apart from false adult values. “The dear peculiar bond of youth,” Byron called friendship, inverting the classical belief that its true practice demands maturity and wisdom. With modernity’s elevation of youth to supreme status as the most vital and authentic period of life, friendship became the object of intense emotion in two contradictory but often simultaneous directions. We have sought to prolong youth indefinitely by holding fast to our youthful friendships, and we have mourned the loss of youth through an unremitting nostalgia for those friendships. One of the most striking things about the way the 20th century understood friendship was the tendency to view it through the filter of memory, as if it could be recognized only after its loss, and as if that loss were inevitable.

The culture of group friendship reached its apogee in the 1960s. Two of the counterculture’s most salient and ideologically charged social forms were the commune—a community of friends in self-imagined retreat from a heartlessly corporatized society—and the rock’n’roll “band” (not “group” or “combo”), its name evoking Shakespeare’s “band of brothers” and Robin Hood’s band of Merry Men, its great exemplar the Beatles. Communes, bands, and other 60s friendship groups (including Woodstock, the apotheosis of both the commune and the rock concert) were celebrated as joyous, creative places of eternal youth—havens from the adult world. To go through life within one was the era’s utopian dream; it is no wonder the Beatles’ break-up was received as a generational tragedy. It is also no wonder that 60s group friendship began to generate its own nostalgia as the baby boom began to hit its 30s. The Big Chill, in 1983, depicted boomers attempting to recapture the magic of a late-60s friendship circle. (“In a cold world,” the movie’s tagline reads, “you need your friends to keep you warm.”) Thirtysomething, taking a step further, certified group friendship as the new adult norm. Most of the characters in those productions, though, were married.

It was only in the 1990s that a new generation, remaining single well past 30, found its own images of group friendship in Seinfeld, Sex and the City, and, of course, Friends. By that point, however, the notion of friendship as a redoubt of moral resistance, a shelter from normative pressures and incubator of social ideals, had disappeared. Your friends didn’t shield you from the mainstream, they were the mainstream. (…)

Friendship is devolving, in other words, from a relationship to a feeling—from something people share to something each of us hugs privately to ourselves in the loneliness of our electronic caves, rearranging the tokens of connection like a lonely child playing with dolls. The same path was long ago trodden by community. As the traditional face-to-face community disappeared, we held on to what we had lost—the closeness, the rootedness—by clinging to the word, no matter how much we had to water down its meaning. Now we speak of the Jewish “community” and the medical “community” and the “community” of readers, even though none of them actually is one. What we have, instead of community, is, if we’re lucky, a “sense” of community—the feeling without the structure; a private emotion, not a collective experience. And now friendship, which arose to its present importance as a replacement for community, is going the same way. We have “friends,” just as we belong to “communities.” Scanning my Facebook page gives me, precisely, a “sense” of connection. Not an actual connection, just a sense. (…)

The more people we know, the lonelier we get

Until a few years ago, you could share your thoughts with only one friend at a time (on the phone, say), or maybe with a small group, later, in person. And when you did, you were talking to specific people, and you tailored what you said, and how you said it, to who they were—their interests, their personalities, most of all, your degree of mutual intimacy. “Reach out and touch someone” meant someone in particular, someone you were actually thinking about. It meant having a conversation. Now we’re just broadcasting our stream of consciousness, live from Central Park, to all 500 of our friends at once, hoping that someone, anyone, will confirm our existence by answering back. We haven’t just stopped talking to our friends as individuals, at such moments, we have stopped thinking of them as individuals. We have turned them into an indiscriminate mass, a kind of audience or faceless public. We address ourselves not to a circle, but to a cloud.

It’s amazing how fast things have changed. Not only don’t we have Wordsworth and Coleridge anymore, we don’t even have Jerry and George. Today, Ross and Chandler would be writing on each other’s walls. Carrie and the girls would be posting status updates, and if they did manage to find the time for lunch, they’d be too busy checking their BlackBerrys to have a real conversation. Sex and Friends went off the air just five years ago, and already we live in a different world. Friendship (like activism) has been smoothly integrated into our new electronic lifestyles. We’re too busy to spare our friends more time than it takes to send a text. We’re too busy, sending texts. And what happens when we do find the time to get together? I asked a woman I know whether her teenage daughters and their friends still have the kind of intense friendships that kids once did. Yes, she said, but they go about them differently. They still stay up talking in their rooms, but they’re also online with three other friends, and texting with another three. Video chatting is more intimate, in theory, than speaking on the phone, but not if you’re doing it with four people at once. And teenagers are just an early version of the rest of us. A study found that one American in four reported having no close confidants, up from one in 10 in 1985. The figures date from 2004, and there’s little doubt that Facebook and texting and all the rest of it have already exacerbated the situation. The more people we know, the lonelier we get.

The new group friendship, already vitiated itself, is cannibalizing our individual friendships as the boundaries between the two blur. (…) Perhaps I need to surrender the idea that the value of friendship lies precisely in the space of privacy it creates: not the secrets that two people exchange so much as the unique and inviolate world they build up between them, the spider web of shared discovery they spin out, slowly and carefully, together. There’s something faintly obscene about performing that intimacy in front of everyone you know, as if its real purpose were to show what a deep person you are. Are we really so hungry for validation? So desperate to prove we have friends?

But surely Facebook has its benefits. Long-lost friends can reconnect, far-flung ones can stay in touch. I wonder, though. Having recently moved across the country, I thought that Facebook would help me feel connected to the friends I’d left behind. But now I find the opposite is true. Reading about the mundane details of their lives, a steady stream of trivia and ephemera, leaves me feeling both empty and unpleasantly full, as if I had just binged on junk food, and precisely because it reminds me of the real sustenance, the real knowledge, we exchange by e-mail or phone or face-to-face. And the whole theatrical quality of the business, the sense that my friends are doing their best to impersonate themselves, only makes it worse. The person I read about, I cannot help feeling, is not quite the person I know. As for getting back in touch with old friends—yes, when they’re people you really love, it’s a miracle. (…)

Facebook holds out a utopian possibility: What once was lost will now be found. But the heaven of the past is a promised land destroyed in the reaching. Facebook, here, becomes the anti-madeleine, an eraser of memory. Carlton Fisk has remarked that he’s watched the videotape of his famous World Series home run only a few times, lest it overwrite his own recollection of the event. Proust knew that memory is a skittish creature that peeks from its hole only when it isn’t being sought. Mementos, snapshots, reunions, and now this—all of them modes of amnesia, foes of true remembering. The past should stay in the heart, where it belongs.

Finally, the new social-networking Web sites have falsified our understanding of intimacy itself, and with it, our understanding of ourselves. The absurd idea, bruited about in the media, that a MySpace profile or "25 Random Things About Me" can tell us more about someone than even a good friend might be aware of is based on desiccated notions about what knowing another person means: First, that intimacy is confessional—an idea both peculiarly American and peculiarly young, perhaps because both types of people tend to travel among strangers, and so believe in the instant disgorging of the self as the quickest route to familiarity. Second, that identity is reducible to information: the name of your cat, your favorite Beatle, the stupid thing you did in seventh grade. Third, that it is reducible, in particular, to the kind of information that social-networking Web sites are most interested in eliciting, consumer preferences. Forget that we’re all conducting market research on ourselves. (…)

So information replaces experience, as it has throughout our culture. But when I think about my friends, what makes them who they are, and why I love them, it is not the names of their siblings that come to mind, or their fear of spiders. It is their qualities of character. This one’s emotional generosity, that one’s moral seriousness, the dark humor of a third. Yet even those are just descriptions, and no more specify the individuals uniquely than to say that one has red hair, another is tall. To understand what they really look like, you would have to see a picture. And to understand who they really are, you would have to hear about the things they’ve done. Character, revealed through action: the two eternal elements of narrative. In order to know people, you have to listen to their stories. (…)

Each evolved as a space for telling stories, an act that cannot usefully be accomplished in much less. Posting information is like pornography, a slick, impersonal exhibition. Exchanging stories is like making love: probing, questing, questioning, caressing. It is mutual. It is intimate. It takes patience, devotion, sensitivity, subtlety, skill—and it teaches them all, too. (…)

Now, in the age of the entrepreneurial self, even our closest relationships are being pressed onto this template. A recent book on the sociology of modern science describes a networking event at a West Coast university: "There do not seem to be any singletons—disconsolately lurking at the margins—nor do dyads appear, except fleetingly." No solitude, no friendship, no space for refusal—the exact contemporary paradigm. At the same time, the author assures us, “face time” is valued in this “community” as a “high-bandwidth interaction,” offering “unusual capacity for interruption, repair, feedback and learning.” Actual human contact, rendered “unusual” and weighed by the values of a systems engineer. We have given our hearts to machines, and now we are turning into machines. The face of friendship in the new century.”

William Deresiewicz, formerly an associate professor of English at Yale University, is a widely published literary critic, Faux Friendship, The Chronicle of Higher Education, Dec 6, 2009

See also:

☞ William Deresiewicz radio interview (audio)
Dunbar’s Number: Why We Can’t Have More Than 150 Friends
William Deresiewicz on solitude

May
30th
Mon
permalink

Luciano Floridi on the future development of the information society


In information societies, the threshold between online and offline will soon disappear, and once there is no longer any difference, we shall become not cyborgs but rather inforgs, i.e. connected informational organisms. (…)

Infosphere is a neologism I coined years ago on the basis of “biosphere”, a term referring to that limited region on our planet that supports life. It denotes the whole informational environment constituted by all informational entities (thus including informational agents as well), their properties, interactions, processes and mutual relations. It is an environment comparable to, but different from cyberspace (which is only one of its sub-regions, as it were), since it also includes off-line and analogue spaces of information. We shall see that it is also an environment (and hence a concept) that is rapidly evolving. (…)

Re-ontologizing is another neologism that I have recently introduced in order to refer to a very radical form of re-engineering, one that not only designs, constructs or structures a system (e.g. a company, a machine or some artefact) anew, but that fundamentally transforms its intrinsic nature. In this sense, for example, nanotechnologies and biotechnologies are not merely re-engineering but actually re-ontologizing our world. (…)

Nowadays, we are used to considering the space of information as something we log-in to and log-out from. Our view of the world (our metaphysics) is still modern or Newtonian: it is made of “dead” cars, buildings, furniture, clothes, which are non-interactive, irresponsive and incapable of communicating, learning, or memorizing. But what we still experience as the world offline is bound to become a fully interactive and responsive environment of wireless, pervasive, distributed, a2a (anything to anything) information processes, that works a4a (anywhere for anytime), in real time. This will first gently invite us to understand the world as something “alive” (artificially live). Such animation of the world will, paradoxically, make our outlook closer to that of pre-technological cultures which interpreted all aspects of nature as inhabited by teleological forces.

The second step will be a reconceptualization of our ontology in informational terms. It will become normal to consider the world as part of the infosphere, not so much in the dystopian sense expressed by a Matrix-like scenario, where the “real reality” is still as hard as the metal of the machines that inhabit it; but in the evolutionary, hybrid sense represented by an environment such as New Port City, the fictional, post-cybernetic metropolis of Ghost in the Shell.

The infosphere will not be a virtual environment supported by a genuinely “material” world behind; rather, it will be the world itself that will be increasingly interpreted and understood informationally, as part of the infosphere. At the end of this shift, the infosphere will have moved from being a way to refer to the space of information to being synonymous with Being. This is the sort of informational metaphysics I suspect we shall find increasingly easy to embrace. (…)

We have all known that this was possible on paper for some time; the difference is that it is now actually happening in our kitchen. (…)

As a consequence of such re-ontologization of our ordinary environment, we shall be living in an infosphere that will become increasingly synchronized (time), delocalised (space) and correlated (interactions). Previous revolutions (especially the agricultural and the industrial ones) created macroscopic transformation in our social structures and architectural environments, often without much foresight.

The informational revolution is no less dramatic. We shall be in serious trouble, if we do not take seriously the fact that we are constructing the new environment that will be inhabited by future generations. We should be working on an ecology of the infosphere, if we wish to avoid problems such as a tragedy of the digital commons. Unfortunately, I suspect it will take some time and a whole new kind of education and sensitivity to realise that the infosphere is a common space, which needs to be preserved to the advantage of all.

One thing seems indubitable though: the digital divide will become a chasm, generating new forms of discrimination between those who can be denizens of the infosphere and those who cannot, between insiders and outsiders, between information rich and information poor. It will redesign the map of worldwide society, generating or widening generational, geographic, socio-economic and cultural divides. But the gap will not be reducible to the distance between industrialized and developing countries, since it will cut across societies.

The evolution of inforgs

We have seen that we are probably the last generation to experience a clear difference between offline and online. The third transformation that I wish to highlight concerns precisely the emergence of artificial and hybrid (multi) agents, i.e., partly artificial and partly human (consider, for example, a family as a single agent, equipped with digital cameras, laptops, palm pilots, iPods, mobiles, wireless network, digital TVs, DVDs, CD players, etc.).

These new agents already share the same ontology with their environment and can operate in it with much more freedom and control. We (shall) delegate or outsource to artificial agents memories, decisions, routine tasks and other activities in ways that will be increasingly integrated with us and with our understanding of what it means to be an agent. (…)

Our understanding of ourselves as agents will also be deeply affected. I am not referring here to the sci-fi vision of a “cyborged” humanity. Walking around with something like a Bluetooth wireless headset implanted in your ear does not seem the best way forward, not least because it contradicts the social message it is also meant to be sending: being always on call is a form of slavery, and anyone so busy and important should have a PA instead. The truth is rather that being a sort of cyborg is not what people will embrace, but what they will try to avoid, unless it is inevitable (more on this shortly). (…)

We are all becoming connected informational organisms (inforgs). This is happening not through some fanciful transformation in our body, but, more seriously and realistically, through the re-ontologization of our environment and of ourselves. (…)

The informational nature of agents should not be confused with a “data shadow” either. The more radical change, brought about by the re-ontologization of the infosphere, will be the disclosure of human agents as interconnected, informational organisms among other informational organisms and agents. (…)

We are witnessing an epochal, unprecedented migration of humanity from its Umwelt [the outer world, or reality, as it affects the agent inhabiting it] to the infosphere itself, not least because the latter is absorbing the former. As a result, humans will be inforgs among other (possibly artificial) inforgs and agents operating in an environment that is friendlier to digital creatures. As digital immigrants like us are replaced by digital natives like our children, the latter will come to appreciate that there is no ontological difference between infosphere and Umwelt, only a difference of levels of abstractions. And when the migration is complete, we shall increasingly feel deprived, excluded, handicapped or poor to the point of paralysis and psychological trauma whenever we are disconnected from the infosphere, like fish out of water.

One day, being an inforg will be so natural that any disruption in our normal flow of information will make us sick. Even literally. A simple illustration is provided by current BAN (Body Area Network) systems – “a base technology for permanent monitoring and logging of vital signs […] [to supervise] the health status of patients suffering from chronic diseases, such as Diabetes and Asthma.” (…)

One important problem that we shall face will concern the availability of sufficient energy to stay connected to the infosphere non-stop. It is what Intel calls the battery life challenge [pdf]. (…) Today, we know that our autonomy is limited by the energy bottleneck of our batteries. (…)

In the US, the average age of players is increasing, as the children of the post-computer revolutions are reaching their late thirties. (…) By the time they retire, in three or four decades, they will be living in the infosphere full-time. (…)

If you spend more time connected than sleeping, you are an inforg. (…)”

Luciano Floridi, MPhil. and PhD, MA University of Oxford, currently holds the Research Chair in philosophy of information and the UNESCO Chair in Information and Computer Ethics, both at the University of Hertfordshire, Department of Philosophy, The future development of the information society (pdf), University of Hertfordshire. (Illustration source)

See also:

Luciano Floridi on The Digital Revolution as a Fourth Revolution: “P2P doesn’t mean Pirate to Pirate but Platonist to Platonist”
Luciano Floridi on Philosophy of Information (set of videos)
The Rise of the Conversation Society: information, communication and collaboration
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Cyberspace tag on Lapidarium

May
27th
Fri
permalink

The Filter Bubble: Eli Pariser on What the Internet Is Hiding From You 


"A “filter bubble”— “a unique universe of information for each of us”, meaning that we are less likely to encounter information online that challenges our existing views or sparks serendipitous connections. “A world constructed from the familiar is a world in which there’s nothing to learn,” Mr Pariser declares. He calls this “invisible autopropaganda, indoctrinating us with our own ideas”.The dangers of the internet: Invisible sieve, The Economist, Jun 30th 2011

"We’re used to thinking of the Internet like an enormous library, with services like Google providing a universal map. But that’s no longer really the case. Sites from Google and Facebook to Yahoo News and the New York Times are now increasingly personalized – based on your web history, they filter information to show you the stuff they think you want to see. That can be very different from what everyone else sees – or from what we need to see.

Your filter bubble is this unique, personal universe of information created just for you by this array of personalizing filters. It’s invisible and it’s becoming more and more difficult to escape. (…)

Q: What is the Internet hiding from me?

EP: As Google engineer Jonathan McPhie explained to me, it’s different for every person – and in fact, even Google doesn’t totally know how it plays out on an individual level. (…)

In one form or another, nearly every major website on the Internet is flirting with personalization. But the one that surprises people most is Google. If you and I Google the same thing at the same time, we may get very different results. Google tracks hundreds of “signals” about each of us – what kind of computer we’re on, what we’ve searched for in the past, even how long it takes us to decide what to click on – and uses it to customize our results. When the result is that our favorite pizza parlor shows up first when we Google pizza, it’s useful. But when the result is that we only see the information that is aligned with our religious or social or political beliefs, it’s difficult to maintain perspective. (…)

Research psychologists have known for a while that the media you consume shapes your identity. So when the media you consume is also shaped by your identity, you can slip into a weird feedback loop. A lot of people see a simple version of this on Facebook: You idly click on an old classmate, Facebook reads that as a friendship, and pretty soon you’re seeing every one of John or Sue’s posts.

Gone awry, personalization can create compulsive media – media targeted to appeal to your personal psychological weak spots. You can find yourself eating the equivalent of information junk food instead of having a more balanced information diet. (…)

Google’s Eric Schmidt recently said “It will be very hard for people to watch or consume something that has not in some sense been tailored for them,” rather than “Google is making it very hard…” Mark Zuckerberg perfectly summed up the tension in personalization when he said “A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.” But he refuses to engage with what that means at a societal level – especially for the people in Africa.”

Eli Pariser, the former Executive Director of MoveOn.org, political activist, The Filter Bubble: What the Internet Is Hiding From You, The Penguin Press, 2011. 
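
A toy sketch may help make this mechanism concrete. The snippet below is not Google’s ranking code (the signals, topics, weights and URLs are all invented for illustration), but it shows how the same query, re-scored against two different click histories, can come back in a different order for each person:

    # Not Google's algorithm: a deliberately tiny, invented sketch of
    # signal-based personalization, in the spirit of Pariser's description.
    from collections import Counter

    def personalized_ranking(results, user_signals, blend=0.5):
        """Re-rank the same candidate results for a given user by boosting
        items whose topics overlap with the user's past-click 'signals'."""
        def score(item):
            base = item["relevance"]                      # query relevance, identical for everyone
            boost = sum(user_signals.get(t, 0) for t in item["topics"])
            return base + blend * boost                   # arbitrary blend weight
        return sorted(results, key=score, reverse=True)

    # The same query, the same candidate pages (all URLs are made up)...
    results = [
        {"url": "local-pizzeria.example", "relevance": 1.0, "topics": ["food", "local"]},
        {"url": "pizza-politics.example", "relevance": 1.0, "topics": ["politics"]},
        {"url": "pizza-history.example",  "relevance": 1.0, "topics": ["history"]},
    ]

    # ...but two different click histories ("signals"):
    alice = Counter({"food": 5, "local": 3})
    bob = Counter({"politics": 7})

    print([r["url"] for r in personalized_ranking(results, alice)])
    # ['local-pizzeria.example', 'pizza-politics.example', 'pizza-history.example']
    print([r["url"] for r in personalized_ranking(results, bob)])
    # ['pizza-politics.example', 'local-pizzeria.example', 'pizza-history.example']

Two people typing the same words see different front pages, and neither of them sees what was filtered away.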

A Filter Bubble: ‘a static ever-narrowing version of yourself’

"We are beginning to live in what Eli Pariser calls “filter bubble,” personalized micro-universes of information that overemphasize what we want to hear and filter out what we don’t. Not only are we unaware of the information that is filtered out, but we are unaware that we are unaware. Our personal economies of information seem complete despite their deficiencies. Personal decisions contribute to this pattern, and ever more sophisticated technologies add to it. Google’s understanding of our tastes and interests is still a crude one, but it shapes the information that we find via Google searches. And because the information we are exposed to perpetually reshapes our interests, we can become trapped in feedback loops: Google’s perception of what we want to read shapes the information we receive, which in turn affects our interests and browsing behavior, providing Google with new information. The result, Pariser suggests, may be “a static ever-narrowing version of yourself.”

This self-reinforcement may have unhappy consequences for politics. Pariser, who is inspired by the philosophy of John Dewey, argues that ideological opponents need to agree on facts and understand each other’s point of view if democracy is to work well. Exposure to the other side allows for the creation of a healthy “public” that can organize around important public issues. Traditional media, in which editors choose stories they believe to be of public interest, have done this job better than do trivia-obsessed new media. Furthermore, intellectual cocooning may stifle creativity, which is spurred by the collision of different ways of thinking about the world. If we are not regularly confronted with surprising facts and points of view, we are less likely to come up with innovative solutions. (…)

When Pariser argues that the dissemination of information has political consequences, he is right. He is also persuasive in arguing that filter-bubble problems cannot be solved easily through individual action. When Pariser himself sought to broaden his ideological horizons through “friending” conservatives on Facebook, he found that Facebook systematically failed to include their updates in his main feed. Since he clicked on these links less often, Facebook inferred that he wasn’t interested in them and calibrated his information diet accordingly. As long as dominant businesses have incentives to give us what they think we want, individuals will have difficulty in breaking through the bubble on their own. The businesses aren’t wrong: Most people don’t enjoy having their basic preconceptions regularly challenged. They like their bubbles.”

Henry Farrell, Irish-born associate professor of political science and international affairs at the George Washington University, Bubble Trouble, The American Prospect, Aug 30, 2011
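
To see why the loop narrows rather than merely adapts, it can help to simulate it. The sketch below is purely illustrative (the topics, click model and update rule are assumptions, not any real recommender), but it shows how a filter that learns only from clicks drifts toward whichever interests happened to get reinforced first:

    # An invented, minimal simulation of the click feedback loop described above.
    # The filter's model of the reader shapes the feed; the feed shapes the clicks;
    # the clicks update the model. No real system works exactly like this.
    import random

    random.seed(0)
    topics = ["politics", "science", "sports", "culture"]
    interest = {t: 1.0 for t in topics}   # the filter's model: start with no preference

    def serve(interest, k=10):
        """Pick k stories, each topic drawn in proportion to modelled interest."""
        weights = [interest[t] for t in topics]
        return random.choices(topics, weights=weights, k=k)

    for step in range(30):
        feed = serve(interest)
        # The reader clicks a little more readily on already-familiar topics...
        clicks = [t for t in feed if random.random() < min(1.0, 0.2 + 0.1 * interest[t])]
        # ...and every click reinforces the model.
        for t in clicks:
            interest[t] += 1.0

    total = sum(interest.values())
    print({t: round(interest[t] / total, 2) for t in topics})
    # The modelled interests end up concentrated on whichever topics happened to
    # get early clicks, even though the reader started indifferent: the bubble narrows.

Breaking out would require clicking against the grain often enough to outweigh the reinforcement, which is exactly what Pariser found so hard to do when he “friended” conservatives on Facebook.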

See also:

Interview with Eli Pariser, On The Media: “The Filter Bubble”
Eli Pariser: Beware online “filter bubbles”, TED.com, Mar 2011

Network. What happens to the information we feed into the network (visualization)

Apr
30th
Sat
permalink

The Rise of the Conversation Society: information, communication and collaboration

“No, sir. The Americans have need of the telephone – but we do not. We have plenty of messenger boys.” — Sir William Preece, Chief Engineer Of The British Post Office, 1876

“What information consumes is rather obvious: it consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” Herbert Simon, American political scientist, economist, sociologist, and psychologist (1916-2001)

The World Live in Your Living Room

"A century ago, there were still no modern media: no radio, no TV, no fancy avatars. Our world was much smaller. The dinner table was the most important medium. A family would sit around it, joined by neighbors and sometimes the minister or pastor. Discussions involved this or that, as well as gossip and scandal. The table was the medium, even in a literal sense, as it stood in the midst of a group. The content was us. We talked about our stories, experiences, things we felt to be important, and how they were all interconnected. (…)

The dinner table was the most prominent media location in most households for centuries. In the kitchen or dining room, we wrote letters, read the Bible or browsed through the newspaper. Homework was done, drawings made and later the radio was listened to. People chatted, played games and insofar as the dinner table served all these functions, it might even be said that it was the first multimedia, multi-user and multitasking environment. Even Me-Media would be appropriate, the only difference being that no one could tune in remotely. For larger manifestations, we had to go to the town hall, the pool hall or the market. The latter was the most prominent place for town criers, proclamations, trade, and enjoyment, as well as podiums and pillories.

Spanning distances with media was successfully accomplished for the first time during the French Revolution. Use was then made of the optical telegraph to make the world a smaller place. Using flags, sticks and semaphore, reports were transmitted, just as the American Indians worked with smoke signals. This form of messaging was much faster than sending a messenger on horseback. The first trans-Atlantic telegraph transmission only became a reality in 1866, using electrical equipment to send Morse code back and forth along an undersea cable. The Victorian Internet was born, but it certainly was not a mass medium comparable to radio, TV and later the real Internet. (p.126-7.) (…)

Recording and Playing Back

The well-known British composer Arthur Sullivan, whose “Lost Chord” is one of the first recorded pieces of music, spoke the following into the phonograph:

"Dear Mr. Edison,

If my friend Edmund Yates has been a little incoherent it is in consequence of the excellent dinner and good wines that he has drunk. Therefore I think you will excuse him. He has his lucid intervals. For myself, I can only say that I am astonished and somewhat terrified at the result of this evening’s experiments: astonished at the wonderful power you have developed, and terrified at the thought that so much hideous and bad music may be put on record for ever. But all the same I think it is the most wonderful thing that I have ever experienced, and congratulate you with all my heart on this wonderful discovery.” — Arthur Sullivan (p.129-130)

The Obama Moment: From Conversation Economy to Conversation Society



“Clinton lost because Obama relied on a new model of politics powered by new technologies. For fundraising, he tapped into countless middle-class people who were able to use the Internet to donate small amounts that aggregated into huge sums. For organization, he skillfully used new technologies to reach vast numbers of party outsiders, i.e. ordinary people who could productively get involved in the campaign; not as spectators or occasional donors, but by rolling up their sleeves and doing work via a computer or on the ground. And as for media, Obama and his team mastered the new media better than any candidate yet. His videos went viral, his social networks hummed, his text messages really connected with people, particularly young people.” (p.105-106.) (…)

The Metaverse: Our New Virtual Universe

The principal meaning of the term “universe” is the cosmos, everything considered as a whole, both the very far and very near. It is therefore not just a term that describes distant heavens, but also the immediately accessible (everything around us); both the vastness of deep space and the “world of human experience” (Merriam-Webster). And it is this view of a proximate universe that you have to keep in mind when reading this chapter: the universe anchored in human experience.

In this context, the digital Metaverse (here a mashup of meta and universe) is a logical term describing a world beyond the immediacy of human reality. Intriguingly the Metaverse enriches our everyday universe, and thus transforms it into a new Virtu-Reality. (…) The purpose of the Metaverse is to digitally expand our physical reality, creating a new Virtu-Reality that adds socio-economic value to individuals and organizations. (…) (p.171.)

A hyperlinked identity within the Metaverse is called a Hyperego, regardless of whether it identifies an individual, brand, organization, and so on. Each identity will mostly consist of a number of subidentities. This segmentation is already discernible online. (p.174.) (…)


In Snow Crash, the 1992 science fiction bestseller by Neal Stephenson, the Metaverse was 1.6 times as big as the real world. But the virtual economy may soon be over one and a half times as big as the physical one. (p. 202) (…)

The Development of Virtu-Real Media

“Virtual” is a collective qualifier for all uni-media or multimedia information that enriches our sensual contact with reality. But, by extension, virtual is also the characteristic of all information that is lacking from physical reality. When we, for example, call someone on the telephone, eye contact between conversing individuals is missing. From the vision perspective, telephone calls are therefore a virtual form of communication. Instant Messaging, or “chatting,” is a form of “texting,” no words being vocally uttered. Furthermore, the purely auditory nature of (traditional) phone conversations alters the focus of attention and fosters reflection, which often may be very positive. On the phone, we concentrate on the verbal subject and are not distracted by looks, gestures and postures of the other(s).

Virtuality therefore enriches the reality in which we exist, the envelope of everything around us. Real and virtual are two points on a single continuum, just like beautiful/ugly and healthy/sick. In fact, close inspection reveals that we are constantly physically and/or intellectually moving between the poles in this continuum in one way or another. Virtual-real is therefore not an “either…or” distinction but “both…and.” Every thought that we have, every image that we see, every song that we hear on the radio more or less all have virtual qualities. In each sensually limited stimulus (a person on TV cannot be smelled, touched or brought into immediate contact) there is media involved; each type of medium filters out a number of stimuli and focuses attention on others. (p. 248-9.) (…)

Metaverse media are designed to reach beyond Marshall McLuhan’s “Extensions of Man” status, instead being valued as an integral part of Real Life.

“Universal” Real Life is the origin, the reference point and the destination of all Metaverse virtuality. Their degree of being Sensory, Continuous and Physical determines the real-life experience we can have with such digital features and applications.”


(MMORPG, Massively multiplayer online role-playing game, is a genre of role-playing video games in which a very large number of players interact with one another within a virtual game world.) (p.250)

Jaap Bloem, Menno van Doorn, Sander Duivestein, Me the Media. Rise of the Conversation Society, VINT, Research Institute of Sogeti, 2009

Collaboration in the Cloud

"Time itself has changed, or at least our perception of it. We are living in a 24/7 economy. (…) (p.3)

Compared to previous technological revolutions, there is one big discernible difference: current technology is causing various areas of knowledge to merge.

Figure 2.1 comes from the website of the Web Science Research Initiative, set up by World Wide Web founder Tim Berners-Lee. This organization aims to chart the ways in which the internet is changing our society, and it does so by examining how various fields of knowledge are unifying. (…)

One of the direct consequences of this evolving merger of knowledge is that we are rediscovering people, members of society with whom we lost touch long ago. Long before the industrial revolution, the farmer and the baker knew precisely what they might expect from each other. The farmer worked the land and the flour from his harvested grain ended up at the baker, who then baked the farmer’s bread. If the farmer were not satisfied with the taste of the bread, he would complain directly to the baker in order to have him modify the recipe. The interaction was an entirely simple form of collaboration based on direct communication.

The industrial revolution’s fascination with maximum efficiency made sure that people only worried about their own tasks and never, or seldom, got together to deal with all the types of problems on the work floor. The balance between technology and community was disturbed so that it tilted to the advantage of technology. (…) As a direct consequence of this change, people grew distant from one another. (p. 35-36) (…)

"Call them the “weapons of mass collaboration.” These changes, among others, are ushering us toward a world where knowledge, power and productive capability will be more dispersed than at any time in our history – a world where value creation will be fast, fluid and persistently disruptive. A world where only the connected will survive. A power shift is underway, and a tough new business rule is emerging: harness the new collaboration or perish.” Don Tapscott, Wikinomics: How Mass Collaboration Changes Everything, Portfolio, 2006

(…)

Figure 2.6 displays a number of diagrams representing the ways in which collaborations evolve.

Figure 2.6 makes it clear that two types of collaboration are dominant: the traditional hierarchical forms, such as those that came into vogue with the industrial revolution, and the network form, which is now coming into use as a consequence of the emergence of the World Wide Web. In terms of time, we are currently in a transition phase in which companies mostly adopt hybrid forms. (p.44.) (…)

Timeline of Communication Tools


(Timeline image, p. 88)

"Now is the time to really move swiftly, to seize these new possibilities and to exploit them… Web 2.0 has to have a purpose. The purpose I would urge as many of you as can take it on, is to repair our relationship with this planet and the imminent danger we face." — Al Gore (p. 255)

Sander Duivestein, senior analyst at VINT (the International Research Institute of Sogeti), Erik van Ommeren, responsible for VINT in the USA, John deVadoss, Clemens Reijnen, Collaboration in the Cloud - How Cross-Boundary Collaboration is Transforming Business (pdf), Microsoft and Sogeti, 2009.

See also:
Luciano Floridi on the future development of the information society
James Gleick: Bits and Bytes - How language and information transformed humanity, (video) Fora.tv, May 19, 2011
Keen On… James Gleick: Why Cyberspace, As a Mode of Being, Will Never Go Away (TCTV), (video) TechCrunch, Jun 23, 2011
Timothy Leary on cybernetics and a new global culture
Cyberspace tag on Lapidarium
☞ The "Age of information" tag on Lapidarium

Apr
28th
Thu
permalink

Isaac Asimov predicted the Internet of today more than 20 years ago (1988)



Bill Moyers: Can we have a revolution in learning?

Isaac Asimov: “Yes, I think not only that we can but that we must. As computers take over more and more of the work that human beings shouldn’t be doing in the first place - because it doesn’t utilize their brains, it stultifies and bores them to death - there’s going to be nothing left for human beings to do but the more creative types of endeavor. The only way we can indulge in the more creative types of endeavor is to have brains that aim at that from the start. (…)

In the old days, very few people could read and write. Literacy was a very novel sort of thing, and it was felt that most people just didn’t have it in them. But with mass education, it turned out that most people could be taught to read and write. In the same way, once we have computer outlets in every home, each of them hooked up to enormous libraries, where you can ask any question and be given answers, you can look up something you’re interested in knowing, however silly it might seem to someone else.

Today, what people call learning is forced on you. Everyone is forced to learn the same thing on the same day at the same speed in class. But everyone is different. For some, class goes too fast, for some too slow, for some in the wrong direction. But give everyone a chance, in addition to school, to follow up their own bent from the start, to find out about whatever they’re interested in by looking it up in their own homes, at their own speed, in their own time, and everyone will enjoy learning.

BM: What about the argument that machines, like computers, dehumanize learning?

IA: As a matter of fact, it’s just the reverse. It’s through this machine that for the first time, we’ll be able to have a one-to-one relationship between information source and information consumer. In the old days, you used to hire a tutor or pedagogue to teach your children. And if he knew his job, he could adapt his teaching to the tastes and abilities of the students. But how many people could afford to hire a pedagogue? Most children went uneducated. Then we reached the point where it was absolutely necessary to educate everybody. The only way we could do it was to have one teacher for a great many students, and to give the teacher a curriculum to teach from. But how many teachers are good at this? As with everything else, the number of teachers is far greater than the number of good teachers. So we either have a one-to-one relationship for the very few, or a one-to-many for the many. Now, with the computer, it’s possible to have a one-to-one relationship for the many. Everyone can have a teacher in the form of access to the gathered knowledge of the human species. (…)

BM: What would such a teaching machine look like?

IA: I find that difficult to imagine. It’s easy to be theoretical, but when you really try to think of the nuts and bolts, then it becomes difficult. I could easily have imagined a horseless carriage in the middle of the nineteenth century, but I couldn’t have drawn a picture of it. But I suppose that one essential thing would be a screen on which you could display things, and another essential part would be a printing mechanism on which things could be printed for you. And you’ll have to have a keyboard on which you ask your questions, although ideally I would like to see one that could be activated by voice. You could actually talk to it, and perhaps it could talk to you too, and say, “I have something here that may interest you. Would you like to have me print it out for you?” And you’d say, “Well, what is it exactly?” And it would tell you, and you might say, “Oh, all right, I’ll take a look at it.” (…)

BM: But the machine would have to be connected to books, periodicals, and documents in some vast library, so then when I want to look at Isaac Asimov’s new book Far as Human Eye Could See, the chapter on geochemistry, I could punch my keys and this chapter would come to me.

IA: That’s right, and then of course you ask - and believe me, I’ve asked - this question: “How do you arrange to pay the author for the use of the material?” After all, if a person writes something, and this then becomes available to everybody, you deprive him of the economic reason for writing. A person like myself, if he was assured of a livelihood, might write anyway, just because he enjoyed it, but most people would want to do it in return for something. I imagine how they must have felt when free libraries were first instituted. “What? My book in a free library? Anyone can come in and read it for free?” Then you realize that there are some books that wouldn’t be sold at all if you didn’t have libraries.

BM: With computers, in a sense, every student has his or her own private school.

IA: Yes, he can be the sole dictator of what he is going to study. Mind you, this is not all he’s going to do. He’ll still be going to school for some things that he has to know.

BM: Common knowledge for example.

IA: Right, and interaction with other students and with teachers. He can’t get away from that, but he’s got to look forward to the fun in life, which is following his own bent.

BM: Is this revolution in personal learning just for the young?

IA: No, it’s not just for the young. That’s another trouble with education as we now have it. People think of education as something that they can finish. And what’s more, when they finish, it’s a rite of passage. You’re finished with school. You’re no longer a child, and therefore anything that reminds you of school - reading books, having ideas, asking questions - that’s kids’ stuff. Now you’re an adult, you don’t do that sort of thing any more. (…)

You have everybody looking forward to no longer learning, and you make them ashamed afterward of going back to learning. If you have a system of education using computers, then anyone, any age, can learn by himself, can continue to be interested. If you enjoy learning, there’s no reason why you should stop at a given age. People don’t stop things they enjoy doing just because they reach a certain age. They don’t stop playing tennis just because they’ve turned forty. They don’t stop with sex just because they’ve turned forty. They keep it up as long as they can if they enjoy it, and learning will be the same thing. The trouble with learning is that most people don’t enjoy it because of the circumstances. Make it possible for them to enjoy learning, and they’ll keep it up.

There’s the famous story about Oliver Wendell Holmes, who was in the hospital one time, when he was over ninety. President Roosevelt came to see him, and there was Oliver Wendell Holmes reading the Greek grammar. Roosevelt said, “Why are you reading a Greek grammar, Mr. Holmes?” And Holmes said, “To improve my mind, Mr. President.”

BM: Are we romanticizing this, or do you think that Saul Bellow's character Herzog was correct when he said that the people who come to evening classes are only ostensibly after culture. What they're really seeking, he said, is clarity, good sense, and truth, even an atom of it. People, he said, are dying for the lack of something real at the end of the day.

IA: I’d like to think that was so. I’d like to think that people who are given a chance to learn facts and broaden their knowledge of the universe wouldn’t seek so avidly after mysticism.”

Isaac Asimov (1920-1992), American author and professor of biochemistry at Boston University, best known for his works of science fiction and for his popular science books, interviewed by Bill Moyers, World of Ideas, PBS, 1988.

Apr
23rd
Sat
permalink

What Defines a Meme? James Gleick: Our world is a place where information can behave like human genes and ideas can replicate, mutate and evolve


With the rise of information theory, ideas were seen as behaving like organisms, replicating by leaping from brain to brain, interacting to form new ideas and evolving in what the scientist Roger Sperry called “a burstwise advance.” (Illustration by Stuart Bradford)

"When I muse about memes, I often find myself picturing an ephemeral flickering pattern of sparks leaping from brain to brain, screaming "Me, me!"Douglas Hofstadter (1983)

"Now through the very universality of its structures, starting with the code, the biosphere looks like the product of a unique event. (…) The universe was not pregnant with life, nor the biosphere with man. Our number came up in the Monte Carlo game. Is it any wonder if, like a person who has just made a million at the casino, we feel a little strange and a little unreal?"Jacques Monod (1970)

"What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions.”Richard Dawkins (1986)

"The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment. “If you want to understand life,” [Richard] Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.” (…)

The rise of information theory aided and abetted a new view of life. The genetic code—no longer a mere metaphor—was being deciphered. Scientists spoke grandly of the biosphere: an entity composed of all the earth’s life-forms, teeming with information, replicating and evolving. And biologists, having absorbed the methods and vocabulary of communications science, went further to make their own contributions to the understanding of information itself.

Jacques Monod, the Parisian biologist who shared a Nobel Prize in 1965 for working out the role of messenger RNA in the transfer of genetic information, proposed an analogy: just as the biosphere stands above the world of nonliving matter, so an “abstract kingdom” rises above the biosphere. The denizens of this kingdom? Ideas.

“Ideas have retained some of the properties of organisms,” he wrote. “Like them, they tend to perpetuate their structure and to breed; they too can fuse, recombine, segregate their content; indeed they too can evolve, and in this evolution selection must surely play an important role.”

Ideas have “spreading power,” he noted—“infectivity, as it were”—and some more than others. An example of an infectious idea might be a religious ideology that gains sway over a large group of people. The American neurophysiologist Roger Sperry had put forward a similar notion several years earlier, arguing that ideas are “just as real” as the neurons they inhabit. Ideas have power, he said:

"Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet.

Monod added, “I shall not hazard a theory of the selection of ideas.” There was no need. Others were willing.

Richard Dawkins made his own jump from the evolution of genes to the evolution of ideas. For him the starring role belongs to the replicator, and it scarcely matters whether replicators were made of nucleic acid. His rule is “All life evolves by the differential survival of replicating entities.” Wherever there is life, there must be replicators. Perhaps on other worlds replicators could arise in a silicon-based chemistry—or in no chemistry at all.

What would it mean for a replicator to exist without chemistry? “I think that a new kind of replicator has recently emerged on this very planet,” Dawkins proclaimed near the end of his first book, The Selfish Gene, in 1976. “It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.” That “soup” is human culture; the vector of transmission is language, and the spawning ground is the brain.

For this bodiless replicator itself, Dawkins proposed a name. He called it the meme, and it became his most memorable invention, far more influential than his selfish genes or his later proselytizing against religiosity. “Memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation,” he wrote. They compete with one another for limited resources: brain time or bandwidth. They compete most of all for attention. For example:

Ideas. Whether an idea arises uniquely or reappears many times, it may thrive in the meme pool or it may dwindle and vanish. The belief in God is an example Dawkins offers—an ancient idea, replicating itself not just in words but in music and art. The belief that Earth orbits the Sun is no less a meme, competing with others for survival. (Truth may be a helpful quality for a meme, but it is only one among many.)

Catchphrases. One text snippet, “What hath God wrought?” appeared early and spread rapidly in more than one medium. Another, “Read my lips,” charted a peculiar path through late twentieth-century America. “Survival of the fittest” is a meme that, like other memes, mutates wildly (“survival of the fattest”; “survival of the sickest”; “survival of the fakest”; “survival of the twittest”; … ).

Images. In Isaac Newton’s lifetime, no more than a few thousand people had any idea what he looked like, though he was one of England’s most famous men, yet now millions of people have quite a clear idea, based on replicas of copies of rather poorly painted portraits. Even more pervasive and indelible are the smile of Mona Lisa, The Scream of Edvard Munch, and the silhouettes of various fictional extraterrestrials. These are memes, living a life of their own, independent of any physical reality. “This may not be what George Washington looked like then,” a tour guide was overheard saying of the Gilbert Stuart painting at the Metropolitan Museum of Art, “but this is what he looks like now.” Exactly.

Memes emerge in brains and travel outward, establishing beachheads on paper and celluloid and silicon and anywhere else information can go. They are not to be thought of as elementary particles but as organisms. The number three is not a meme; nor is the color blue, nor any simple thought, any more than a single nucleotide can be a gene. Memes are complex units, distinct and memorable—units with staying power.

Also, an object is not a meme. The hula hoop is not a meme; it is made of plastic, not of bits. When this species of toy spread worldwide in a mad epidemic in 1958, it was the product, the physical manifestation, of a meme, or memes: the craving for hula hoops; the swaying, swinging, twirling skill set of hula-hooping. The hula hoop itself is a meme vehicle. So, for that matter, is each human hula hooper—a strikingly effective meme vehicle, in the sense neatly explained by the philosopher Daniel Dennett: “A wagon with spoked wheels carries not only grain or freight from place to place; it carries the brilliant idea of a wagon with spoked wheels from mind to mind.” Hula hoopers did that for the hula hoop’s memes—and in 1958 they found a new transmission vector, broadcast television, sending its messages immeasurably faster and farther than any wagon. The moving image of the hula hooper seduced new minds by hundreds, and then by thousands, and then by millions. The meme is not the dancer but the dance.

For most of our biological history memes existed fleetingly; their main mode of transmission was the one called “word of mouth.” Lately, however, they have managed to adhere in solid substance: clay tablets, cave walls, paper sheets. They achieve longevity through our pens and printing presses, magnetic tapes and optical disks. They spread via broadcast towers and digital networks. Memes may be stories, recipes, skills, legends or fashions. We copy them, one person at a time. Alternatively, in Dawkins’ meme-centered perspective, they copy themselves.

“I believe that, given the right conditions, replicators automatically band together to create systems, or machines, that carry them around and work to favor their continued replication,” he wrote. This was not to suggest that memes are conscious actors; only that they are entities with interests that can be furthered by natural selection. Their interests are not our interests. “A meme,” Dennett says, “is an information-packet with attitude.” When we speak of fighting for a principle or dying for an idea, we may be more literal than we know.

Tinker, tailor, soldier, sailor…. Rhyme and rhythm help people remember bits of text. Or: rhyme and rhythm help bits of text get remembered. Rhyme and rhythm are qualities that aid a meme’s survival, just as strength and speed aid an animal’s. Patterned language has an evolutionary advantage. Rhyme, rhythm and reason—for reason, too, is a form of pattern. I was promised on a time to have reason for my rhyme; from that time unto this season, I received nor rhyme nor reason.

Like genes, memes have effects on the wide world beyond themselves. In some cases (the meme for making fire; for wearing clothes; for the resurrection of Jesus) the effects can be powerful indeed. As they broadcast their influence on the world, memes thus influence the conditions affecting their own chances of survival. The meme or memes comprising Morse code had strong positive feedback effects. Some memes have evident benefits for their human hosts (“Look before you leap,” knowledge of CPR, belief in hand washing before cooking), but memetic success and genetic success are not the same. Memes can replicate with impressive virulence while leaving swaths of collateral damage—patent medicines and psychic surgery, astrology and satanism, racist myths, superstitions and (a special case) computer viruses. In a way, these are the most interesting—the memes that thrive to their hosts’ detriment, such as the idea that suicide bombers will find their reward in heaven.

When Dawkins first floated the meme meme, Nicholas Humphrey, an evolutionary psychologist, said immediately that these entities should be considered “living structures, not just metaphorically but technically”:

When you plant a fertile meme in my mind you literally parasitize my brain, turning it into a vehicle for the meme’s propagation in just the way that a virus may parasitize the genetic mechanism of a host cell. And this isn’t just a way of talking: the meme for, say, “belief in life after death” is actually realized physically, millions of times over, as a structure in the nervous systems of individual men the world over.”

Most early readers of The Selfish Gene passed over memes as a fanciful afterthought, but the pioneering ethologist W. D. Hamilton, reviewing the book for Science, ventured this prediction:

"Hard as this term may be to delimit-it surely must be harder than gene, which is bad enough-I suspect that it will soon be in common use by biologists and, one hopes, by philosophers, linguists, and others as well and that it may become absorbed as far as the word “gene” has been into everyday speech."

Memes could travel wordlessly even before language was born. Plain mimicry is enough to replicate knowledge—how to chip an arrowhead or start a fire. Among animals, chimpanzees and gorillas are known to acquire behaviors by imitation. Some species of songbirds learn their songs, or at least song variants, after hearing them from neighboring birds (or, more recently, from ornithologists with audio players). Birds develop song repertoires and song dialects—in short, they exhibit a birdsong culture that predates human culture by eons. These special cases notwithstanding, for most of human history memes and language have gone hand in glove. (Clichés are memes.) Language serves as culture’s first catalyst. It supersedes mere imitation, spreading knowledge by abstraction and encoding.

Perhaps the analogy with disease was inevitable. Before anyone understood anything of epidemiology, its language was applied to species of information. An emotion can be infectious, a tune catchy, a habit contagious. “From look to look, contagious through the crowd / The panic runs,” wrote the poet James Thomson in 1730. Lust, likewise, according to Milton: “Eve, whose eye darted contagious fire.” But only in the new millennium, in the time of global electronic transmission, has the identification become second nature. Ours is the age of virality: viral education, viral marketing, viral e-mail and video and networking. Researchers studying the Internet itself as a medium—crowdsourcing, collective attention, social networking and resource allocation—employ not only the language but also the mathematical principles of epidemiology.
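As a rough illustration of the epidemiological mathematics Gleick alludes to (this sketch is not from his article), the spread of a meme through a population of minds can be modeled with a minimal discrete-time SIR model; the population size and the transmission and recovery rates below are illustrative assumptions only.

```python
# Minimal sketch (not from Gleick's article): a discrete-time SIR model,
# the kind of epidemiological mathematics researchers borrow when treating
# a meme as something that "infects" susceptible minds.
# All parameter values here are illustrative assumptions.

def simulate_meme_spread(population=10_000, beta=0.3, gamma=0.1, days=120):
    """beta: transmission rate per contact; gamma: rate at which carriers lose interest."""
    s, i, r = population - 1, 1, 0  # susceptible, infected (sharing), recovered (bored)
    history = []
    for day in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, round(s), round(i), round(r)))
    return history

if __name__ == "__main__":
    for day, s, i, r in simulate_meme_spread()[::20]:
        print(f"day {day:3d}: susceptible={s:5d} sharing={i:5d} lost interest={r:5d}")
```

Run as is, the toy model traces the familiar epidemic curve: the number of minds actively sharing the meme rises, peaks, and decays as the pool of susceptible minds empties, which is the shape researchers look for in viral e-mail, video and networking data.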

One of the first to use the terms “viral text” and “viral sentences” seems to have been a reader of Dawkins named Stephen Walton of New York City, corresponding in 1981 with the cognitive scientist Douglas Hofstadter. Thinking logically—perhaps in the mode of a computer—Walton proposed simple self-replicating sentences along the lines of “Say me!” “Copy me!” and “If you copy me, I’ll grant you three wishes!” Hofstadter, then a columnist for Scientific American, found the term “viral text” itself to be even catchier.

"Well, now, Walton’s own viral text, as you can see here before your eyes, has managed to commandeer the facilities of a very powerful host—an entire magazine and printing press and distribution service. It has leapt aboard and is now—even as you read this viral sentence—propagating itself madly throughout the ideosphere!”

Hofstadter gaily declared himself infected by the meme meme.

One source of resistance—or at least unease—was the shoving of us humans toward the wings. It was bad enough to say that a person is merely a gene’s way of making more genes. Now humans are to be considered as vehicles for the propagation of memes, too. No one likes to be called a puppet. Dennett summed up the problem this way: “I don’t know about you, but I am not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people’s ideas renew themselves, before sending out copies of themselves in an informational diaspora…. Who’s in charge, according to this vision—we or our memes?”

He answered his own question by reminding us that, like it or not, we are seldom “in charge” of our own minds. He might have quoted Freud; instead he quoted Mozart (or so he thought):

“In the night when I cannot sleep, thoughts crowd into my mind…. Whence and how do they come? I do not know and I have nothing to do with it. Those which please me I keep in my head and hum them” (…)

Later Dennett was informed that this well-known quotation was not Mozart’s after all. It had taken on a life of its own; it was a fairly successful meme.

For anyone taken with the idea of memes, the landscape was changing faster than Dawkins had imagined possible in 1976, when he wrote, “The computers in which memes live are human brains.” By 1989, the time of the second edition of The Selfish Gene, having become an adept programmer himself, he had to amend that: “It was obviously predictable that manufactured electronic computers, too, would eventually play host to self-replicating patterns of information.” Information was passing from one computer to another “when their owners pass floppy discs around,” and he could see another phenomenon on the near horizon: computers connected in networks. “Many of them,” he wrote, “are literally wired up together in electronic mail exchange…. It is a perfect milieu for self-replicating programs to flourish.” Indeed, the Internet was in its birth throes. Not only did it provide memes with a nutrient-rich culture medium, it also gave wings to the idea of memes. Meme itself quickly became an Internet buzzword. Awareness of memes fostered their spread. (…)

Is this science? In his 1983 column, Hofstadter proposed the obvious memetic label for such a discipline: memetics. The study of memes has attracted researchers from fields as far apart as computer science and microbiology. In bioinformatics, chain letters are an object of study. They are memes; they have evolutionary histories. The very purpose of a chain letter is replication; whatever else a chain letter may say, it embodies one message: Copy me. One student of chain-letter evolution, Daniel W. VanArsdale, listed many variants, in chain letters and even earlier texts: “Make seven copies of it exactly as it is written” (1902); “Copy this in full and send to nine friends” (1923); “And if any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life” (Revelation 22:19). Chain letters flourished with the help of a new 19th-century technology: “carbonic paper,” sandwiched between sheets of writing paper in stacks. Then carbon paper made a symbiotic partnership with another technology, the typewriter. Viral outbreaks of chain letters occurred all through the early 20th century. (…) Two subsequent technologies, when their use became widespread, provided orders-of-magnitude boosts in chain-letter fecundity: photocopying (c. 1950) and e-mail (c. 1995). (…)

Inspired by a chance conversation on a hike in the Hong Kong mountains, information scientists Charles H. Bennett from IBM in New York and Ming Li and Bin Ma from Ontario, Canada, began an analysis of a set of chain letters collected during the photocopier era. They had 33, all variants of a single letter, with mutations in the form of misspellings, omissions and transposed words and phrases. “These letters have passed from host to host, mutating and evolving,” they reported in 2003.

Like a gene, their average length is about 2,000 characters. Like a potent virus, the letter threatens to kill you and induces you to pass it on to your “friends and associates”—some variation of this letter has probably reached millions of people. Like an inheritable trait, it promises benefits for you and the people you pass it on to. Like genomes, chain letters undergo natural selection and sometimes parts even get transferred between coexisting “species.”

Reaching beyond these appealing metaphors, the three researchers set out to use the letters as a “test bed” for algorithms used in evolutionary biology. The algorithms were designed to take the genomes of various modern creatures and work backward, by inference and deduction, to reconstruct their phylogeny—their evolutionary trees. If these mathematical methods worked with genes, the scientists suggested, they should work with chain letters, too. In both cases the researchers were able to verify mutation rates and relatedness measures.
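The study itself relied on phylogenetic-reconstruction algorithms from evolutionary biology; as a much cruder sketch of how a "relatedness measure" between mutated texts can be quantified, one can compute normalized edit distances between chain-letter variants. The sample sentences below are invented for illustration and are not drawn from the researchers' corpus.

```python
# Minimal sketch, not the Bennett-Li-Ma method: a pairwise edit-distance
# comparison of chain-letter variants, the kind of relatedness measure a
# tree-building algorithm would take as input. Sample variants are invented.

from itertools import combinations

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

variants = {
    "A": "make seven copies of this letter and send it to your friends",
    "B": "make seven copies of this letter and mail it to your friends",
    "C": "copy this in full and send it to nine friends and associates",
}

for (na, ta), (nb, tb) in combinations(variants.items(), 2):
    d = levenshtein(ta, tb) / max(len(ta), len(tb))  # normalized distance in [0, 1]
    print(f"{na} vs {nb}: normalized edit distance = {d:.2f}")
```

Close variants score near zero and distant wordings score near one; a matrix of such distances is the raw material from which an evolutionary tree of the letters could be inferred.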

Still, most of the elements of culture change and blur too easily to qualify as stable replicators. They are rarely as neatly fixed as a sequence of DNA. Dawkins himself emphasized that he had never imagined founding anything like a new science of memetics. A peer-reviewed Journal of Memetics came to life in 1997—published online, naturally—and then faded away after eight years partly spent in self-conscious debate over status, mission and terminology. Even compared with genes, memes are hard to mathematize or even to define rigorously. So the gene-meme analogy causes uneasiness and the genetics-memetics analogy even more.

Genes at least have a grounding in physical substance. Memes are abstract, intangible and unmeasurable. Genes replicate with near-perfect fidelity, and evolution depends on that: some variation is essential, but mutations need to be rare. Memes are seldom copied exactly; their boundaries are always fuzzy, and they mutate with a wild flexibility that would be fatal in biology. The term “meme” could be applied to a suspicious cornucopia of entities, from small to large. For Dennett, the first four notes of Beethoven’s Fifth Symphony (quoted above) were “clearly” a meme, along with Homer’s Odyssey (or at least the idea of the Odyssey), the wheel, anti-Semitism and writing. “Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”

Yet here they are. As the arc of information flow bends toward ever greater connectivity, memes evolve faster and spread farther. Their presence is felt if not seen in herd behavior, bank runs, informational cascades and financial bubbles. Diets rise and fall in popularity, their very names becoming catchphrases—the South Beach Diet and the Atkins Diet, the Scarsdale Diet, the Cookie Diet and the Drinking Man’s Diet all replicating according to a dynamic about which the science of nutrition has nothing to say. Medical practice, too, experiences “surgical fads” and “iatro-epidemics”—epidemics caused by fashions in treatment—like the iatro-epidemic of children’s tonsillectomies that swept the United States and parts of Europe in the mid-20th century. Some false memes spread with disingenuous assistance, like the apparently unkillable notion that Barack Obama was not born in Hawaii. And in cyberspace every new social network becomes a new incubator of memes. Making the rounds of Facebook in the summer and fall of 2010 was a classic in new garb:

Sometimes I Just Want to Copy Someone Else’s Status, Word for Word, and See If They Notice.

Then it mutated again, and in January 2011 Twitter saw an outbreak of:

One day I want to copy someone’s Tweet word for word and see if they notice.

By then one of the most popular of all Twitter hashtags (the “hashtag” being a genetic—or, rather, memetic—marker) was simply the word “#Viral.”

In the competition for space in our brains and in the culture, the effective combatants are the messages. The new, oblique, looping views of genes and memes have enriched us. They give us paradoxes to write on Möbius strips. “The human world is made of stories, not people,” writes the novelist David Mitchell. “The people the stories use to tell themselves are not to be blamed.” Margaret Atwood writes: “As with all knowledge, once you knew it, you couldn’t imagine how it was that you hadn’t known it before. Like stage magic, knowledge before you knew it took place before your very eyes, but you were looking elsewhere.” Nearing death, John Updike reflected on

A life poured into words—apparent waste intended to preserve the thing consumed.

Fred Dretske, a philosopher of mind and knowledge, wrote in 1981: “In the beginning there was information. The word came later.” He added this explanation: “The transition was achieved by the development of organisms with the capacity for selectively exploiting this information in order to survive and perpetuate their kind.” Now we might add, thanks to Dawkins, that the transition was achieved by the information itself, surviving and perpetuating its kind and selectively exploiting organisms.

Most of the biosphere cannot see the infosphere; it is invisible, a parallel universe humming with ghostly inhabitants. But they are not ghosts to us—not anymore. We humans, alone among the earth’s organic creatures, live in both worlds at once. It is as though, having long coexisted with the unseen, we have begun to develop the needed extrasensory perception. We are aware of the many species of information. We name their types sardonically, as though to reassure ourselves that we understand: urban myths and zombie lies. We keep them alive in air-conditioned server farms. But we cannot own them. When a jingle lingers in our ears, or a fad turns fashion upside down, or a hoax dominates the global chatter for months and vanishes as swiftly as it came, who is master and who is slave?”

James Gleick, American author, journalist, biographer, Pulitzer Prize laureate, What Defines a Meme?, Smithsonian Magazine, May 2011. (Adapted from The Information: A History, A Theory, A Flood, by James Gleick)

See also:

Susan Blackmore on memes and “temes”
☞ Adam McNamara, Can we measure memes?, Department of Psychology, University of Surrey, UK
James Gleick: Bits and Bytes - How language and information transformed humanity, (video) Fora.tv, May 19, 2011
James Gleick on information: The basis of the universe isn’t matter or energy — it’s data

Mar
31st
Thu
permalink

From Cave Paintings to the Internet ☞ Chronological and Thematic Studies on the History of Information and Media (Timeline)

"A chronological record of significant events … often including an explanation of their causes." — definition of history from the Merriam Webster Online Dictionary, accessed 12-2010.

"The information overload that we associate with the Internet is not new. While the Internet is undoubtedly compounding an old problem, its instant searchability offers new means of exploring the rapidly expanding universe of information. From Cave Paintings to the Internet cannot save you from information overload and offers no panacea for information insufficiency. Using Internet technology, it is designed to help you follow the development of information and media, and attitudes about them, from the beginning of records to the present. Containing annotated references to discoveries, developments of a social, scientific, theoretical or technological nature, as well as references to physical books, documents, artifacts, art works, and to websites and other digital media, it arranges, both chronologically and thematically, selected historical examples and recent developments of the methods used to record, distribute, exchange, organize, store, and search information. The database is designed to allow you to approach the topics in a wide variety of ways.”

Jeremy Norman's 2,500,000 BCE to 8,000 BCE Timeline: From Cave Paintings to the Internet

See also: Jeremy Norman’s History of Science.com

Mar
30th
Wed
permalink

Kevin Kelly on the Satisfaction Paradox

“What if you lived in a world where everything around you was just what you wanted? And there was tons of it. How would you make a choice since all of it — 100% — was just what you liked?

What if you lived in a world where every great movie, book, song that was ever produced was at your fingertips as if “for free”, and your filters and friends had weeded out the junk, the trash, and anything that would remotely bore you. The only choices would be the absolute cream of the cream, the things your best friend would recommend. What would you watch or read or listen to next?

What if you lived in a miraculous world where the only works you ever saw were ones you absolutely loved, including the ones that were randomly thrown in? In other words, you could only watch things perfectly matched to you at that moment. But the problem is that in this world there are a thousand times as many works as you have time in your long life to see. How would you choose? Or would you? (…)

The paradox is that not-choosing may not be satisfying!

We may need to make choices in order to be satisfied, even if those choices lead to less than satisfying experiences.
But of course this would be less than optimal satisfaction. Thus there may be a psychological dilemma, a paradox in which ultimate satisfaction is itself ultimately unsatisfying.

This is the psychological problem of dealing with abundance rather than scarcity. It is not quite the same problem of abundance articulated by the Paradox of Choice, the theory that we find too many choices paralyzing: that if we are given 57 different mustards to choose from at the supermarket, we often leave without choosing any.

The paradox of satisfaction suggests that the tools we employ to increase our satisfaction of choices — filters and recommendations — may be unsatisfying if they diminish the power of our choices. Another way to say this: no system can be absolutely satisfying. (…)

Let’s say that after all is said and done, in the history of the world there are 2,000 theatrical movies, 500 documentaries, 200 TV shows, 100,000 songs, and 10,000 books that I would be crazy about. I don’t have enough time to absorb them all, even if I were a full time fan. But what if our tools could deliver to me only those items to choose from? How would I — or you — choose from those select choices? (…)

I believe that answering this question is what outfits like Amazon will be selling in the future. For the price of a subscription you will subscribe to Amazon and have access to all the books in the world at a set price. (An individual book you want to read will be as if it was free, because it won’t cost you extra.) The same will be true of movies (Netflix), or music (iTunes or Spotify or Rhapsody). You won’t be purchasing individual works.

Instead you will pay Amazon, or Netflix, or Spotify, or Google for their suggestions of what you should pay attention to next. Amazon won’t be selling books (which are marginally free); they will be selling their recommendations of what to read. You’ll pay the subscription fee in order to get access to their recommendations to the “free” works, which are also available elsewhere. Their recommendations (assuming continual improvements by more collaboration and sharing of highlights, etc.) will be worth more than the individual books. You won’t buy movies; you’ll buy cheap access and pay for personalized recommendations.

The new scarcity is not creative products but satisfaction. And because of the paradox of satisfaction, few people will ever be satisfied.”

Kevin Kelly, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, The Satisfaction Paradox, The Technium, March 2011.