Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Archive

Apr 27th, Sat

The Rise of Big Data. How It’s Changing the Way We Think About the World


"In the third century BC, the Library of Alexandria was believed to house the sum of human knowledge. Today, there is enough information in the world to give every person alive 320 times as much of it as historians think was stored in Alexandria’s entire collection — an estimated 1,200 exabytes’ worth. If all this information were placed on CDs and they were stacked up, the CDs would form five separate piles that would all reach to the moon. (…)

Using big data will sometimes mean forgoing the quest for why in return for knowing what. (…)

There will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity. (…)

Datafication is not the same as digitization, which takes analog content — books, films, photographs — and converts it into digital information, a sequence of ones and zeros that computers can read. Datafication is a far broader activity: taking all aspects of life and turning them into data. Google’s augmented-reality glasses datafy the gaze. Twitter datafies stray thoughts. LinkedIn datafies professional networks.

Once we datafy things, we can transform their purpose and turn the information into new forms of value. For example, IBM was granted a U.S. patent in 2012 for “securing premises using surface-based computing technology” — a technical way of describing a touch-sensitive floor covering, somewhat like a giant smartphone screen. Datafying the floor can open up all kinds of possibilities. The floor could identify the objects on it, so that it might know to turn on lights in a room or open doors when a person entered. Moreover, it might identify individuals by their weight or by the way they stand and walk. (…)

This misplaced trust in data can come back to bite. Organizations can be beguiled by data’s false charms and endow more meaning to the numbers than they deserve. That is one of the lessons of the Vietnam War. U.S. Secretary of Defense Robert McNamara became obsessed with using statistics as a way to measure the war’s progress. He and his colleagues fixated on the number of enemy fighters killed. Relied on by commanders and published daily in newspapers, the body count became the data point that defined an era. To the war’s supporters, it was proof of progress; to critics, it was evidence of the war’s immorality. Yet the statistics revealed very little about the complex reality of the conflict. The figures were frequently inaccurate and were of little value as a way to measure success. Although it is important to learn from data to improve lives, common sense must be permitted to override the spreadsheets. (…)

Ultimately, big data marks the moment when the “information society” finally fulfills the promise implied by its name. The data take center stage. All those digital bits that have been gathered can now be harnessed in novel ways to serve new purposes and unlock new forms of value. But this requires a new way of thinking and will challenge institutions and identities. In a world where data shape decisions more and more, what purpose will remain for people, or for intuition, or for going against the facts? If everyone appeals to the data and harnesses big-data tools, perhaps what will become the central point of differentiation is unpredictability: the human element of instinct, risk taking, accidents, and even error. If so, then there will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity to ensure that they are not crowded out by data and machine-made answers.

This has important implications for the notion of progress in society. Big data enables us to experiment faster and explore more leads. These advantages should produce more innovation. But at times, the spark of invention becomes what the data do not say. That is something that no amount of data can ever confirm or corroborate, since it has yet to exist. If Henry Ford had queried big-data algorithms to discover what his customers wanted, they would have come back with “a faster horse,” to recast his famous line. In a world of big data, it is the most human traits that will need to be fostered — creativity, intuition, and intellectual ambition — since human ingenuity is the source of progress.

Big data is a resource and a tool. It is meant to inform, rather than explain; it points toward understanding, but it can still lead to misunderstanding, depending on how well it is wielded. And however dazzling the power of big data appears, its seductive glimmer must never blind us to its inherent imperfections. Rather, we must adopt this technology with an appreciation not just of its power but also of its limitations.”

Kenneth Neil Cukier and Viktor Mayer-Schoenberger, The Rise of Big Data, Foreign Affairs, May/June 2013. (Photo: John Elk)
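A quick back-of-the-envelope check of the CD-stack figure quoted above, written as a short Python sketch. The constants (roughly 700 MB of capacity and 1.2 mm of thickness per CD, and a mean Earth-Moon distance of about 384,400 km) are illustrative assumptions, not figures from the article:

    # Sanity check: 1,200 exabytes burned to CDs, stacked toward the moon.
    total_bytes = 1_200 * 10**18        # 1,200 exabytes (decimal)
    cd_bytes = 700 * 10**6              # ~700 MB per CD (assumed)
    cd_thickness_m = 1.2e-3             # ~1.2 mm per CD (assumed)
    earth_moon_m = 384_400_000          # mean Earth-Moon distance in metres

    n_cds = total_bytes / cd_bytes
    stack_m = n_cds * cd_thickness_m
    print(f"{n_cds:.2e} CDs, stack of {stack_m / 1000:,.0f} km, "
          f"or about {stack_m / earth_moon_m:.1f} Earth-Moon piles")
    # -> roughly 1.7e+12 CDs and a little over five moon-high piles

With those round numbers the stack comes to a bit more than five Earth-Moon distances, consistent with the “five separate piles” in the excerpt.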

See also:

Dirk Helbing on A New Kind Of Socio-inspired Technology
Information tag on Lapidarium notes

Jan 22nd, Tue

Nicholas Carr on the meaning of ‘searching’ these days


"All collected data had come to a final end. Nothing was left to be collected. But all collected data had yet to be completely correlated and put together in all possible relationships. A timeless interval was spent doing that."

— Isaac Asimov, “The Last Question”, cited in John Battelle's The Search

"When we talk about “searching” these days, we’re almost always talking about using Google to find something online. That’s quite a twist for a word that has long carried existential connotations, that has been bound up in our sense of what it means to be conscious and alive. We don’t just search for car keys or missing socks. We search for truth and meaning, for love, for transcendence, for peace, for ourselves. To be human is to be a searcher.

In its highest form, a search has no well-defined object. It’s open-ended, an act of exploration that takes us out into the world, beyond the self, in order to know the world, and the self, more fully. T. S. Eliot expressed this sense of searching in his famously eloquent lines from “Little Gidding”:

We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.

Google searches have always been more cut and dried, keyed as they are to particular words or phrases. But in its original conception, the Google search engine did transport us into a messy and confusing world—the world of the web—with the intent of helping us make some sense of it. It pushed us outward, away from ourselves. It was a means of exploration. That’s much less the case now. Google’s conception of searching has changed markedly since those early days, and that means our own idea of what it means to search is changing as well.

Google’s goal is no longer to read the web. It’s to read us. Ray Kurzweil, the inventor and AI speculator, recently joined the company as its director of research. His general focus will be on machine learning and natural language processing. But his particular concern, as he said in a recent interview, will entail reconfiguring the company’s search engine to focus not outwardly on the world but inwardly on the user:

“I envision some years from now that the majority of search queries will be answered without you actually asking. It’ll just know this is something that you’re going to want to see.” While it may take some years to develop this technology, Kurzweil added that he personally thinks it will be embedded into what Google offers currently, rather than as a stand-alone product necessarily.

(…) Back in 2006, Eric Schmidt, then the company’s CEO, said that Google’s “ultimate product” would be a service that would “tell me what I should be typing.” It would give you an answer before you asked a question, obviating the need for searching entirely. (…)

In its new design, Google’s search engine doesn’t push us outward; it turns us inward. It gives us information that fits the behavior and needs and biases we have displayed in the past, as meticulously interpreted by Google’s algorithms. Because it reinforces the existing state of the self rather than challenging it, it subverts the act of searching. We find out little about anything, least of all ourselves, through self-absorption. (…)

To be turned inward, to listen to speech that is only a copy, or reflection, of our own speech, is to keep the universe alone. To free ourselves from that prison — the prison we now call personalization — we need to voyage outward to discover “counter-love,” to hear “original response.” As Frost understood, a true search is as dangerous as it is essential. It’s about breaking the shackles of the self, not tightening them.

There was a time, back when Larry Page and Sergey Brin were young and naive and idealistic, that Google spoke to us with the voice of original response. Now, what Google seeks to give us is copy speech, our own voice returned to us.”

Nicholas Carr, American writer who has published books and articles on technology, business, and culture, The searchers, Rough Type, Jan 13, 2013.

See also:

The Filter Bubble: Eli Pariser on What the Internet Is Hiding From You
☞ Tim Adams, Google and the future of search: Amit Singhal and the Knowledge Graph, The Observer, 19 January 2013.

May 20th, Sun

The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks

    
Image: Library of Congress

“Knowledge is not a result merely of filtering or algorithms. It results from a far more complex process that is social, goal-driven, contextual, and culturally-bound. We get to knowledge — especially “actionable” knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles. Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.”

David Weinberger, The Problem with the Data-Information-Knowledge-Wisdom Hierarchy, Harvard Business Review, Feb 2, 2010.

"The digitization of 21st-century media, Weinberger argues, leads not to the creation of a “global village" but rather to a new understanding of what knowledge is, to a change in the basic epistemology governing the universe. And this McLuhanesque transformation, in turn, reveals the general truth of the Heideggarian vision. Knowledge qua knowledge, Weinberger claims, is increasingly enmeshed in webs of discourse: culture-dependent and theory-free.

The causal force lying behind this massive sea change is, of course, the internet. Google search results — “9,560,000 results for ‘Heidegger’ in .71 seconds” — taunt you with the realization that there are still another 950,000-odd pages of results to get through before you reach the end. The existence of hyperlinks is enough to convince even the most stubborn positivist that there is always another side to the story. And on the web, fringe believers can always find each other and marinate in their own illusions. The “web world” is too big to ever know. There is always another link. In the era of the Internet, Weinberger argues, facts are not bricks. They are networks. (…)

The most important aspect of Heidegger’s thought for our purposes is his understanding that human beings (or rather “Dasein,” “being-in-the-world”) are always thrown into a particular context, existing within already existing language structures and pre-determined meanings. In other words, the world is like the web, and we, Dasein, live inside the links. (…)

If our starting point is that all knowledge is networked, and always has been, then we are in a far better position to start talking about what makes today’s epistemological infrastructure different from the infrastructure in 1983. But we are also in a position to ask: if all knowledge was networked knowledge, even in 1983, then how did we not behave as if it was so? How did humanity carry on? Why did civilization not collapse into a morass of post-modern chaos? Weinberger’s answer is, once again, McLuhanesque. It was the medium in which knowledge was contained that created the difference. Stable borders around knowledge were built by books.

I would posit a different answer: if knowledge has always been networked knowledge, then facts have never had stable containers. Most of the time, though, we more or less act as if they do. Within the philosophical subfield known as Actor-Network Theory (ANT) this “acting-as-if-stability-existed” is referred to as “black boxing.” One of the black boxes around knowledge might very well be the book. But black boxes can also include algorithms, census bureaus, libraries, laboratories, and newsrooms. Black boxes emerge out of actually-existing knowledge networks, stabilize for a time, and unravel, and our goal as thinkers and scholars ought to be understanding how these nodes emerge and disappear. In other words, understanding changes to knowledge in this way leaves us far more sensitive to the operations of power than does the notoriously power-free perspective of Marshall McLuhan. (…)

Why don’t I care that the Google results page goes on towards infinity? If we avoid Marshall McLuhan’s easy answers to these complex questions, and retain the core of Heidegger’s brilliant insights while also adding a hefty dose of ontology to his largely immaterial philosophy, we might begin to understand the real operations of digital knowledge/power in a networked age.

Weinberger, however, does not care about power, and more or less admits this himself in a brilliant 2008 essay on the distinction between digital realists, utopians, and dystopians. Digital utopians, a group in which he includes himself, “point to the ways in which the Web has changed some of the basic assumptions about how we live together, removing old obstacles and enabling shiny new possibilities.” The realists, on the other hand, are rather dull: They argue that “the Web hasn’t had nearly as much effect as the utopians and dystopians proclaim. The Web carries with it certain possibilities and limitations, but (the realists say) not many more than other major communications medium.” Politically speaking, digital utopianism tantalizes us with the promise of what might be, and pushes us to do better. The political problem with the realist position, Weinberger argues, is that it “is … [a] decision that leans toward supporting the status quo because what-is is more knowable than what might be.”

The realist position, however, is not necessarily a position of quietude. Done well, digital realism can sensitize us to the fact that all networked knowledge systems eventually become brick walls, that these brick walls are maintained through technological, political, cultural, economic, and organizational forms of power. Our job, as thinkers and teachers, is not to stand back and claim that all the bricks have crumbled. Rather, our job is to understand how the wall gets built, and how we might try to build it differently.”

C.W. Anderson, Ph.D, an assistant professor in the Department of Media Culture at the College of Staten Island (CUNY), researcher at the Columbia University Graduate School of Journalism, The Difference Between Online Knowledge and Truly Open Knowledge, The Atlantic, Feb 3, 2012.

David Weinberger: ‘I think the Net generation is beginning to see knowledge in a way that is closer to the truth about knowledge’

"I think the Net generation is beginning to see knowledge in a way that is closer to the truth about knowledge — a truth we’ve long known but couldn’t instantiate. My generation, and the many generations before mine, have thought about knowledge as being the collected set of trusted content, typically expressed in libraries full of books. Our tradition has taken the trans-generational project of building this Library of Knowledge book by book as our God-given task as humans. Yet, for the coming generation, knowing looks less like capturing truths in books than engaging in never-settled networks of discussion and argument. That social activity — collaborative and contentious, often at the same time — is a more accurate reflection of our condition as imperfect social creatures trying to understand a world that is too big and too complex for even the biggest-headed expert.

This new topology of knowledge reflects the topology of the Net. The Net (and especially the Web) is constructed quite literally out of links, each of which expresses some human interest. If I link to a site, it’s because I think it matters in some way, and I want it to matter that way to you. The result is a World Wide Web with billions of pages and probably trillions of links that is a direct reflection of what matters to us humans, for better or worse. The knowledge networks that live in this new ecosystem share in that property; they are built out of, and reflect, human interest. Like our collective interests, the Web and the knowledge that resides there is at odds and linked in conversation. That’s why the Internet, for all its weirdness, feels so familiar and comfortable to so many of us. And that’s the sense in which I think networked knowledge is more “natural.” (…)

To make a smart room — a knowledge network — you have to have just enough diversity. And it has to be the right type of diversity. Scott Page in The Difference says that a group needs a diversity of perspectives and skill sets if it is going to be smarter than the smartest person in it. It also clearly needs a set of coping skills, norms, and procedures that enable it to deal with diversity productively. (…)

We humans can only see things from a point of view, and we can only understand things by appropriating them into our already-existing context. (…)

In fact, the idea of objectivity arose in response to the limitations of paper, as did so much of our traditional Western idea of knowledge. Paper is a disconnected medium. So, when you write a news story, you have to encapsulate something quite complex in just a relatively small rectangle of print. You know that the reader has no easy way to check what you’re saying, or to explore further on her own; to do so, she’ll have to put down the paper, go to a local library, and start combing through texts that are less current than the newspaper in which your article appears. The reporter was the one mediator of the world the reader would encounter, so the report had to avoid the mediator’s point of view and try to reflect all sides of contentious issues. Objectivity arose to address the disconnected nature of paper.

Our new medium is, of course, wildly connective. Now we can explore beyond the news rectangle just by clicking. There is no longer an imperative to squeeze the world into small, self-contained boxes. Hyperlinks remove the limitations that objectivity was invented to address.

Hyperlinks also enable readers to understand — and thus perhaps discount — the writer’s point of view, which is often a better way of getting past the writer’s prejudices than asking the writer to write as if she or he had none. This, of course, inverts the old model that assumed that if we knew about the journalist’s personal opinions, her or his work would be less credible. Now we often think that the work becomes more credible if the author is straightforward about his or her standpoint. That’s the sense in which transparency is the new objectivity.

There is still value in trying to recognize how one’s own standpoint and assumptions distort one’s vision of the world; emotional and conceptual empathy are of continuing importance because they are how we embody the truth that we share a world with others to whom that world matters differently. But we are coming to accept that we can’t really get a view from nowhere, and if we could, we would have no idea what we’re looking at. (…)

Our new ability to know the world at a scale never before imaginable may not bring us our old type of understanding, but understanding and knowledge are not motivated only by the desire to feel that sudden gasp of insight. The opposite and ancient motive is to feel the breath of awe in the face of the almighty unknowability of our universe. A knowing that recognizes its object is so vast that it outstrips understanding makes us more capable of awe. (…)

Technodeterminism is the claim that technology by itself has predictable, determinant effects on people or culture. (…) We still need to be able to discuss how a technology is affecting a culture in general. Generalizations can be a vehicle of truth, so long as they are understood to be only generally true. (…) The new knowledge continues to find generalities that connect individual instances, but because the new ecosystem is hyperlinked, we can go from the generalities back to the individual cases. And those generalizations are themselves linked into a system of difference and disagreement.”

David Weinberger, Ph.D. from the University of Toronto, American technologist, professional speaker, and commentator, interviewed by Rebecca J. Rosen, What the Internet Means for How We Think About the World, The Atlantic, Jan 5 2012.

See also:

To Know, but Not Understand: David Weinberger on Science and Big Data, The Atlantic, Jan 3, 2012 
When science becomes civic: Connecting Engaged Universities and Learning Communities, University of California, Davis, September 11 - 12, 2001
The Filter Bubble: Eli Pariser on What the Internet Is Hiding From You
A story about the Semantic Web (Web 3.0) (video)
Vannevar Bush on the new relationship between thinking man and the sum of our knowledge (1945)
George Lakoff on metaphors, explanatory journalism and the ‘Real Rationality’
The Relativity of Truth - a brief résumé, Lapidarium notes

Apr 11th, Wed

The Cognitive Limit of Organizations. The structure of a society is connected to its total amount of information

The vertical axis of this slide represents the total stock of information in the world. The horizontal axis represents time.

In the early days, life was simple. We did important things like make spears and arrowheads. The amount of knowledge needed to make these items, however, was small enough that a single person could master their production. There was no need for a large division of labor and new knowledge was extremely precious. If you got new knowledge, you did not want to share it. After all, in a world where most knowledge can fit in someone’s head, stealing ideas is easy, and appropriating the value of the ideas you generate is hard.

At some point, however, the amount of knowledge required to make things began to exceed the cognitive limit of a single human being. Things could only be done in teams, and sharing information among team members was required to build these complex items. Organizations were born as our social skills began to compensate for our limited cognitive skills. Society, however, kept on accruing more and more knowledge, and the cognitive limit of organizations, just like that of the spearmaker, was ultimately reached. (…)

Today, however, most products are combinations of knowledge and intellectual property that resides in different organizations. Our world is less and less about the single pieces of intellectual property and more and more about the networks that help connect these pieces. The total stock of information used in these ecosystems exceeds the capacity of single organizations because doubling the size of huge organizations does not double the capacity of that organization to hold knowledge and put it into productive use.

In a world in which implementing the next generation of ideas will increasingly require pulling resources from different organizations, barriers to collaboration will be a crucial constraint limiting the development of firms. Agility, context, and a strong network are becoming the survival traits where assets, control, and power used to rule. John Seely Brown refers to this as the “Power of Pull.”

The Cognitive Limit of Organizations, MIT Media Lab, Oct 7, 2011.

Mar 3rd, Sat

Beauty, Charm, and Strangeness: Art and Science as Metaphor

      

Science and art are different ways of looking at the same thing, namely, the world. (…)

The fact is, science is not making this new landscape, but discovering it. Einstein remarked more than once how strange it is that reality, as we know it, keeps proving itself amenable to the rules of man-made science. It certainly is strange; indeed, so strange, that perhaps it should make us a little suspicious. More than one philosopher has conjectured that our thought extends only as far as our capacity to express it. So too it is possible that what we consider reality is only that stratum of the world that we have the faculties to comprehend. For instance, I am convinced that quantum theory flouts commonsense logic only because commonsense logic has not yet been sufficiently expanded. (…)

I am not arguing that art is greater than science, more universal in its concerns, and wiser in its sad recognition of the limits of human knowledge. What I am proposing is that despite the profound differences between them, at an essential level art and science are so nearly alike as to be indistinguishable. (…)

The critic Frank Kermode has argued, persuasively, I believe, that one of art’s greatest attractions is that it offers “the sense of an ending.” The sense of completeness that is projected by the work of art is to be found nowhere else in our lives. We cannot remember our birth, and we shall not know our death; in between is the ramshackle circus of our days and doings. But in a poem, a picture, or a sonata, the curve is completed. This is the triumph of form. It is a deception, but one that we desire, and require.

The trick that art performs is to transform the ordinary into the extraordinary and back again in the twinkling of a metaphor. Here is [the poet] Wallace Stevens, in lines from his poem Notes Toward a Supreme Fiction (1942):

"You must become an ignorant man again

And see the sun again with an ignorant eye

And see it clearly in the idea of it.”

— Wallace Stevens, Collected Poetry and Prose (Library of America, 1997), p329. (…)

This is the project that all artists are embarked upon: to subject mundane reality to such intense, passionate, and unblinking scrutiny that it becomes transformed into something rich and strange while yet remaining solidly, stolidly, itself. Is the project of pure science any different?

When Johannes Kepler recognized that the planets move in elliptical orbits and not in perfect circles, as received wisdom had for millennia held they must do, he added infinitely to the richness of man’s life and thought. When Copernicus posited the horrifying notion that not the Earth but the sun is the center of our world, he literally put man in his place, and he did it for the sake of neither good nor ill, but for the sake of demonstrating how things are. (…)

In the 1970s, when quantum theory began employing such terms as “beauty,” “charm,” and “strangeness” to signify the various properties of quarks, a friend turned to me and said: “You know, they’re waiting for you to give them the words.” I saw what he meant, but he was not quite right: Science does not need art to supply its metaphors. Art and science are alike in their quest to reveal the world. Rainer Maria Rilke spoke for both the artist and the scientist when he said:

"Are we, perhaps, here just for saying: House, Bridge, Fountain, Gate, Jug, Fruit tree, Window,—possibly: Pillar, Tower?…but for saying, remember, oh, for such saying as never the things themselves hoped so intensely to be."

Rilke Poems (Knopf, 1996), p. 201 (stanza 2, lines 15 to 19).

John Banville, Irish novelist, adapter of dramas, and screenwriter, Beauty, Charm, and Strangeness: Science as Metaphor, Science, 3 July 1998. (Illustration: Greg Mort, Stewardship III, 2004)

See also:

Art and Science tag on Lapidarium
Art and Science tag on Lapidarium notes

Dec 27th, Tue

'To understand is to perceive patterns'

                  

"Everything we care about lies somewhere in the middle, where pattern and randomness interlace."

James Gleick, The Information: A History, a Theory, a Flood, Pantheon, 2011

"Humans are pattern-seeking story-telling animals, and we are quite adept at telling stories about patterns, whether they exist or not."

Michael Shermer

"The pattern, and it alone, brings into being and causes to pass away and confers purpose, that is to say, value and meaning, on all there is. To understand is to perceive patterns. (…) To make intelligible is to reveal the basic pattern.”

Isaiah Berlin, British social and political theorist, philosopher and historian, (1909-1997), The proper study of mankind: an anthology of essays, Chatto & Windus, 1997, p. 129.

"One of the most wonderful things about the emerging global superbrain is that information is overflowing on a scale beyond what we can wrap our heads around. The electronic, collective, hive mind that we know as the Internet produces so much information that organizing this data — and extracting meaning from it — has become the conversation of our time.

Sanford Kwinter’s Far From Equilibrium tackles everything from technology to society to architecture under the thesis that creativity, catharsis, transformation and progressive breakthroughs occur far from equilibrium. So even while we may feel overwhelmed and intimidated by the informational overload and radical transformations of our times, we should, perhaps, take refuge in knowing that only good can come from this. He writes:

“(…) We accurately think of ourselves today not only as citizens of an information society, but literally as clusters of matter within an unbroken informational continuum: "We are all," as the great composer Karlheinz Stockhausen once said, "transistors, in the literal sense. We send, receive and organize [and] so long as we are vital, our principal work is to capture and artfully incorporate the signals that surround us.” (…)

Clay Shirky often refers to the “Cognitive Surplus,” the overflowing output of the billions of minds participating in the electronic infosphere. A lot of this output is silly, but a lot of it is meaningful and wonderful. The key lies in curation, which is the result of pattern-recognition put into practice. (…)

Matt Ridley’s TED Talk, “When Ideas Have Sex” points to this intercourse of information and how it births new thought-patterns. Ideas, freed from the confines of space and time by the invisible, wireless metabrain we call The Internet, collide with one another and explode into new ideas; accelerating the collective intelligence of the species. Creativity thrives when minds come together. The last great industrial-strength creative catalyst was the city: It is no coincidence that when people migrate to cities in large numbers, creativity and innovation thrive.

Now take this very idea and apply it to the web: the web is essentially a planetary-scale nervous system where individual minds take on the role of synapses, firing electrical pattern-signals to one another at light speed — the net effect being an astonishing increase in creative output. (…)

Ray Kurzweil too, expounds on this idea of the power of patterns:

“I describe myself as a patternist, and believe that if you put matter and energy in just the right pattern you create something that transcends it. Technology is a good example of that: you put together lenses and mechanical parts and some computers and some software in just the right combination and you create a reading machine for the blind. It’s something that transcends the semblance of parts you’ve put together. That is the nature of technology, and it’s the nature of the human brain.

Biological molecules put in a certain combination create the transcending properties of human intelligence; you put notes and sounds together in just the right combination, and you create a Beethoven symphony or a Beatles song. So patterns have a power that transcends the parts of that pattern.”

R. Buckminster Fuller refers to us as “pattern integrities.” “Understanding order begins with understanding patterns,” he was known to say. E.J. White, who worked with Fuller, says that:

“For Fuller, the thinking process is not a matter of putting anything into the brain or taking anything out; he defines thinking as the dismissal of irrelevancies, as the definition of relationships” — in other words, thinking is simultaneously a form of filtering out the data that doesn’t fit while highlighting the things that do fit together… We dismiss whatever is an “irrelevancy” and retain only what fits, we form knowledge by ‘connecting the dots’… we understand things by perceiving patterns — we arrive at conclusions when we successfully reveal these patterns. (…)

Fuller’s primary vocation is as a poet. All his disciplines and talents — architect, engineer, philosopher, inventor, artist, cartographer, teacher — are just so many aspects of his chief function as integrator… the word “poet" is a very general term for a person who puts things together in an era of great specialization when most people are differentiating or taking things apart… For Fuller, the stuff of poetry is the patterns of human behavior and the environment, and the interacting hierarchies of physics and design and industry. This is why he can describe Einstein and Henry Ford as the greatest poets of the 20th century.” (…)

In a recent article in Reality Sandwich, Simon G Powell proposed that patterned self-organization is a default condition of the universe: 

“When you think about it, Nature is replete with instances of self-organization. Look at how, over time, various exquisitely ordered patterns crystallise out of the Universe. On a macroscopic scale you have stable and enduring spherical stars, solar systems, and spiral galaxies. On a microscopic scale you have atomic and molecular forms of organization. And on a psychological level, fed by all this ambient order and pattern, you have consciousness which also seems to organise itself into being (by way of the brain). Thus, patterned organisation of one form or another is what nature is proficient at doing over time.

This being the case, is it possible that the amazing synchronicities and serendipities we experience when we’re doing what we love, or following our passions — the signs we pick up on when we follow our bliss — represent an emerging ‘higher level’ manifestation of self-organization? To make use of an alluring metaphor, are certain events and cultural processes akin to iron filings coming under the organising influence of a powerful magnet? Is serendipity just the playing out on the human level of the same emerging, patterned self-organization that drives evolution?”

Barry Ptolemy's film Transcendent Man reminds us that the universe has been unfolding in patterns of greater complexity since the beginning of time. Says Ptolemy:

“First of all we are all patterns of information. Second, the universe has been revealing itself as patterns of information of increasing order since the big bang. From atoms, to molecules, to DNA, to brains, to technology, to us now merging with that technology. So the fact that this is happening isn’t particularly strange to a universe which continues to evolve and unfold at ever accelerating rates.”

Jason Silva, Connecting All The Dots - Jason Silva on Big Think, Imaginary Foundation, Dec 2010

"Networks are everywhere. The brain is a network of nerve cells connected by axons, and cells themselves are networks of molecules connected by biochemical reactions. Societies, too, are networks of people linked by friendships, familial relationships and professional ties. On a larger scale, food webs and ecosystems can be represented as networks of species. And networks pervade technology: the Internet, power grids and transportation systems are but a few examples. Even the language we are using to convey these thoughts to you is a network, made up of words connected by syntactic relationships.”

"For decades, we assumed that the components of such complex systems as the cell, the society, or the Internet are randomly wired together. In the past decade, an avalanche of research has shown that many real networks, independent of their age, function, and scope, converge to similar architectures, a universality that allowed researchers from different disciplines to embrace network theory as a common paradigm.”

Albert-László Barabási, physicist, best known for his work in the research of network theory, and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
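A minimal sketch of the growth-plus-preferential-attachment rule that Barabási and colleagues use to explain why so many real networks converge on similar, scale-free architectures. The node count and attachment parameter below are illustrative assumptions, and the sketch deliberately avoids any graph library:

    import random
    from collections import Counter

    def preferential_attachment(n_nodes, m):
        """Grow a network: each new node links to m existing nodes chosen
        with probability proportional to their current degree."""
        edges = [(0, 1)]          # seed network: two connected nodes
        # 'stubs' holds each node once per incident edge, so a uniform draw
        # from it is a degree-proportional draw over nodes.
        stubs = [0, 1]
        for new in range(2, n_nodes):
            targets = set()
            while len(targets) < min(m, new):
                targets.add(random.choice(stubs))
            for t in targets:
                edges.append((new, t))
                stubs.extend((new, t))
        return edges

    edges = preferential_attachment(10_000, 2)
    degree = Counter(node for edge in edges for node in edge)
    degrees = sorted(degree.values())
    # A few early nodes end up as heavily connected hubs while most nodes
    # keep only a couple of links: the heavy-tailed, "scale-free" signature.
    print("max degree:", degrees[-1], " median degree:", degrees[len(degrees) // 2])

Plotting how many nodes have each degree on log-log axes would show the roughly straight-line tail that distinguishes such networks from randomly wired ones.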

"Coral reefs are sometimes called “the cities of the sea”, and part of the argument is that we need to take the metaphor seriously: the reef ecosystem is so innovative because it shares some defining characteristics with actual cities. These patterns of innovation and creativity are fractal: they reappear in recognizable form as you zoom in and out, from molecule to neuron to pixel to sidewalk. Whether you’re looking at original innovations of carbon-based life, or the explosion of new tools on the web, the same shapes keep turning up. (…) When life gets creative, it has a tendency to gravitate toward certain recurring patterns, whether those patterns are self-organizing, or whether they are deliberately crafted by human agents.”

— Steven Johnson, author of Where Good Ideas Come From, cited by Jason Silva

"Network systems can sustain life at all scales, whether intracellularly or within you and me or in ecosystems or within a city. (…) If you have a million citizens in a city or if you have 1014 cells in your body, they have to be networked together in some optimal way for that system to function, to adapt, to grow, to mitigate, and to be long term resilient."

Geoffrey West, British theoretical physicist, The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.

“Recognizing this super-connectivity and conductivity is often accompanied by blissful mindbody states and the cognitive ecstasy of multiple “aha’s!” when the patterns in the mycelium are revealed. That Googling that has become a prime noetic technology (How can we recognize a pattern and connect more and more, faster and faster?: superconnectivity and superconductivity) mirrors the increased speed of connection of thought-forms from cannabis highs on up. The whole process is driven by desire not only for these blissful states in and of themselves, but also as the cognitive resource they represent. “The devices of desire are those that connect,” because as Johnson says “chance favors the connected mind”.

Google and the Myceliation of Consciousness, Reality Sandwich, 10-11-2007

Jason Silva, Venezuelan-American television personality, filmmaker, gonzo journalist and founding producer/host for Current TV, To understand is to perceive patterns, Dec 25, 2011 (Illustration: Color Blind Test)

[This note will be gradually expanded]

See also:

The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.
☞ Albert-László Barabási and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
Google and the Myceliation of Consciousness, Reality Sandwich, 10.11.2007
The Story of Networks, Lapidarium notes
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
☞ Manuel Lima, visualcomplexity.com, A visual exploration on mapping complex networks
Constructal theory, Wiki
☞ A. Bejan, Constructal theory of pattern formation (pdf), Duke University
Pattern recognition, Wiki
Patterns tag on Lapidarium
Patterns tag on Lapidarium notes

Sep 29th, Thu

Vannevar Bush on the new relationship between thinking man and the sum of our knowledge (1945)

                             

Tim O’Reilly on the Birth of the global mind

“Computer scientist Danny Hillis once remarked, “Global consciousness is that thing responsible for deciding that pots containing decaffeinated coffee should be orange.” (…)

The web is a perfect example of what engineer and early computer scientist Vannevar Bush called “intelligence augmentation” by computers, in his 1945 article “As We May Think” in The Atlantic. He described a future in which human ability to follow an associative knowledge trail would be enabled by a device he called “the memex”. This would improve on human memory in the precision of its recall. Google is today’s ultimate memex. (…)

This is man-computer symbiosis at its best, where the computer program learns from the activity of human teachers, and its sensors notice and remember things the humans themselves would not. This is the future: massive amounts of data created by people, stored in cloud applications that use smart algorithms to extract meaning from it, feeding back results to those people on mobile devices, gradually giving way to applications that emulate what they have learned from the feedback loops between those people and their devices.”

Tim O’Reilly, the founder of O’Reilly Media, a supporter of the free software and open source movements, Birth of the global mind, Financial Times, Sept 23, 2011

"In this significant article he [Vannevar Bush] holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man’s physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson’s famous address of 1837 on “The American Scholar,” this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.” - The Atlantic’ editor

"Assume a linear ratio of 100 for future use. Consider film of the same thickness as paper, although thinner film will certainly be usable. Even under these conditions there would be a total factor of 10,000 between the bulk of the ordinary record on books, and its microfilm replica. The Encyclopoedia Britannica could be reduced to the volume of a matchbox. A library of a million volumes could be compressed into one end of a desk. If the human race has produced since the invention of movable type a total record, in the form of magazines, newspapers, books, tracts, advertising blurbs, correspondence, having a volume corresponding to a billion books, the whole affair, assembled and compressed, could be lugged off in a moving van. Mere compression, of course, is not enough; one needs not only to make and store a record but also be able to consult it, and this aspect of the matter comes later. Even the modern great library is not generally consulted; it is nibbled at by a few. (…)

We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

So much for the manipulation of ideas and their insertion into the record. Thus far we seem to be worse off than before—for we can enormously extend the record; yet even in its present bulk we can hardly consult it. This is a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge. The prime action of use is selection, and here we are halting indeed. There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within stone walls of acceptable architectural form; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene. (…)

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, “memex" will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

In one end is the stored material. The matter of bulk is well taken care of by improved microfilm. Only a small part of the interior of the memex is devoted to storage, the rest to mechanism. Yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so he can be profligate and enter material freely.

Most of the memex contents are purchased on microfilm ready for insertion. Books of all sorts, pictures, current periodicals, newspapers, are thus obtained and dropped into place. Business correspondence takes the same path. And there is provision for direct entry. On the top of the memex is a transparent platen. On this are placed longhand notes, photographs, memoranda, all sorts of things. When one is in place, the depression of a lever causes it to be photographed onto the next blank space in a section of the memex film, dry photography being employed.

There is, of course, provision for consultation of the record by the usual scheme of indexing. If the user wishes to consult a certain book, he taps its code on the keyboard, and the title page of the book promptly appears before him, projected onto one of his viewing positions. Frequently-used codes are mnemonic, so that he seldom consults his code book; but when he does, a single tap of a key projects it for his use. Moreover, he has supplemental levers. On deflecting one of these levers to the right he runs through the book before him, each page in turn being projected at a speed which just allows a recognizing glance at each. If he deflects it further to the right, he steps through the book 10 pages at a time; still further at 100 pages at a time. Deflection to the left gives him the same control backwards.

A special button transfers him immediately to the first page of the index. Any given book of his library can thus be called up and consulted with far greater facility than if it were taken from a shelf. As he has several projection positions, he can leave one item in position while he calls up another. He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him.

All this is conventional, except for the projection forward of present-day mechanisms and gadgetry. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the  memex. The process of tying two items together is the important thing. (…)

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client’s interest. The physician, puzzled by a patient’s reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds, and side trails to their physical and chemical behavior.

The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world’s record, but for his disciples the entire scaffolding by which they were erected.

Thus science may implement the ways in which man produces, stores, and consults the record of the race. It might be striking to outline the instrumentalities of the future more spectacularly, rather than to stick closely to methods and elements now known and undergoing rapid development, as has been done here. Technical difficulties of all sorts have been ignored, certainly, but also ignored are means as yet unknown which may come any day to accelerate technical progress as violently as did the advent of the thermionic tube. In order that the picture may not be too commonplace, by reason of sticking to present-day patterns, it may be well to mention one such possibility, not to prophesy but merely to suggest, for prophecy based on extension of the known has substance, while prophecy founded on the unknown is only a doubly involved guess. (…)

In the outside world, all forms of intelligence whether of sound or sight, have been reduced to the form of varying currents in an electric circuit in order that they may be transmitted. Inside the human frame exactly the same sort of process occurs. Must we always transform to mechanical movements in order to proceed from one electrical phenomenon to another? It is a suggestive thought, but it hardly warrants prediction without losing touch with reality and immediateness.

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

The applications of science have built man a well-supplied house, and are teaching him to live healthily therein. They have enabled him to throw masses of people against one another with cruel weapons. They may yet allow him truly to encompass the great record and to grow in the wisdom of race experience. He may perish in conflict before he learns to wield that record for his true good. Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome.”

Vannevar Bush, (1890-1974) American engineer and science administrator known for his work on analog computing, his political role in the development of the atomic bomb as a primary organizer of the Manhattan Project, the founding of Raytheon, and the idea of the memex, an adjustable microfilm viewer which is somewhat analogous to the structure of the World Wide Web, As We May Think, The Atlantic, July 1945 (Illustration: James Ferguson, FT)
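A minimal sketch of Bush's associative trail as a data structure: items tied into a main chain, with side trails joined at particular points and a replay operation standing in for the lever that runs through the trail. The class and field names, and the bow-and-arrow codes, are invented for illustration; Bush specifies the behavior, not an implementation:

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        """A stored record: a book page, article, photograph, or longhand note."""
        code: str     # the mnemonic code tapped on the memex keyboard (invented)
        title: str

    @dataclass
    class Trail:
        """An ordered chain of items, with optional side trails branching off."""
        name: str
        items: list = field(default_factory=list)
        side_trails: dict = field(default_factory=dict)   # position -> Trail

        def tie(self, item):
            """Tie the next item onto the main trail."""
            self.items.append(item)

        def branch(self, position, side_trail):
            """Join a side trail at a particular item of the main trail."""
            self.side_trails[position] = side_trail

        def replay(self):
            """Walk the trail in order, following side trails where they branch."""
            for i, item in enumerate(self.items):
                yield item
                if i in self.side_trails:
                    yield from self.side_trails[i].replay()

    # The bow-and-arrow example from the article, with hypothetical codes.
    bows = Trail("turkish-vs-english-bow")
    bows.tie(Item("ENC-014", "Encyclopedia article on archery"))
    bows.tie(Item("HIS-203", "History of the Crusades, pertinent chapter"))
    elasticity = Trail("elasticity-side-trail")
    elasticity.tie(Item("PHY-077", "Textbook on elasticity"))
    bows.branch(1, elasticity)
    for item in bows.replay():
        print(item.code, "-", item.title)

Passing the photographed trail to a friend, as in Bush's anecdote, amounts to copying this structure into another memex; the web's hyperlinks are the obvious modern analogue, which is why the attribution above calls the memex "somewhat analogous to the structure of the World Wide Web".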

See also:

Video archive of Oct 12-13 1995 MIT/Brown Symposium on the 50th Anniversary of As We May Think
"As We May Think" - A Celebration of Vannevar Bush’s 1945 Vision, at Brown University
Computing Pages by Francesc Hervada-Sala - “As We May Think” by Vannevar Bush
Timeline of hypertext technology (Wiki)
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks

Sep 4th, Sun

Neal Gabler on The Elusive Big Idea - ‘We are living in a post-idea world where bold ideas are almost passé’

Ideas just aren’t what they used to be. Once upon a time, they could ignite fires of debate, stimulate other thoughts, incite revolutions and fundamentally change the ways we look at and think about the world.

They could penetrate the general culture and make celebrities out of thinkers — notably Albert Einstein, but also Reinhold Niebuhr, Daniel Bell, Betty Friedan, Carl Sagan and Stephen Jay Gould, to name a few. The ideas themselves could even be made famous: for instance, for “the end of ideology,” “the medium is the message,” “the feminine mystique,” “the Big Bang theory,” “the end of history.” A big idea could capture the cover of Time — “Is God Dead?” — and intellectuals like Norman Mailer, William F. Buckley Jr. and Gore Vidal would even occasionally be invited to the couches of late-night talk shows. How long ago that was. (…)

If our ideas seem smaller nowadays, it’s not because we are dumber than our forebears but because we just don’t care as much about ideas as they did. In effect, we are living in an increasingly post-idea world — a world in which big, thought-provoking ideas that can’t instantly be monetized are of so little intrinsic value that fewer people are generating them and fewer outlets are disseminating them, the Internet notwithstanding. Bold ideas are almost passé.

It is no secret, especially here in America, that we live in a post-Enlightenment age in which rationality, science, evidence, logical argument and debate have lost the battle in many sectors, and perhaps even in society generally, to superstition, faith, opinion and orthodoxy. While we continue to make giant technological advances, we may be the first generation to have turned back the epochal clock — to have gone backward intellectually from advanced modes of thinking into old modes of belief. But post-Enlightenment and post-idea, while related, are not exactly the same.

Post-Enlightenment refers to a style of thinking that no longer deploys the techniques of rational thought. Post-idea refers to thinking that is no longer done, regardless of the style. (…)

There is the retreat in universities from the real world, and an encouragement of and reward for the narrowest specialization rather than for daring — for tending potted plants rather than planting forests.

There is the eclipse of the public intellectual in the general media by the pundit who substitutes outrageousness for thoughtfulness, and the concomitant decline of the essay in general-interest magazines. And there is the rise of an increasingly visual culture, especially among the young — a form in which ideas are more difficult to express. (…)

We live in the much vaunted Age of Information. Courtesy of the Internet, we seem to have immediate access to anything that anyone could ever want to know. We are certainly the most informed generation in history, at least quantitatively. There are trillions upon trillions of bytes out there in the ether — so much to gather and to think about.

And that’s just the point. In the past, we collected information not simply to know things. That was only the beginning. We also collected information to convert it into something larger than facts and ultimately more useful — into ideas that made sense of the information. We sought not just to apprehend the world but to truly comprehend it, which is the primary function of ideas. Great ideas explain the world and one another to us.

Marx pointed out the relationship between the means of production and our social and political systems. Freud taught us to explore our minds as a way of understanding our emotions and behaviors. Einstein rewrote physics. More recently, McLuhan theorized about the nature of modern communication and its effect on modern life. These ideas enabled us to get our minds around our existence and attempt to answer the big, daunting questions of our lives.

But if information was once grist for ideas, over the last decade it has become competition for them. We are like the farmer who has too much wheat to make flour. We are inundated with so much information that we wouldn’t have time to process it even if we wanted to, and most of us don’t want to.

The collection itself is exhausting: what each of our friends is doing at that particular moment and then the next moment and the next one; who Jennifer Aniston is dating right now; which video is going viral on YouTube this hour; what Princess Letizia or Kate Middleton is wearing that day. In effect, we are living within the nimbus of an informational Gresham’s law in which trivial information pushes out significant information, but it is also an ideational Gresham’s law in which information, trivial or not, pushes out ideas.

We prefer knowing to thinking because knowing has more immediate value. It keeps us in the loop, keeps us connected to our friends and our cohort. Ideas are too airy, too impractical, too much work for too little reward. Few talk ideas. Everyone talks information, usually personal information. Where are you going? What are you doing? Whom are you seeing? These are today’s big questions.

It is certainly no accident that the post-idea world has sprung up alongside the social networking world. Even though there are sites and blogs dedicated to ideas, Twitter, Facebook, Myspace, Flickr, etc., the most popular sites on the Web, are basically information exchanges, designed to feed the insatiable information hunger, though this is hardly the kind of information that generates ideas. It is largely useless except insofar as it makes the possessor of the information feel, well, informed. Of course, one could argue that these sites are no different than conversation was for previous generations, and that conversation seldom generated big ideas either, and one would be right. (…)

An artist friend of mine recently lamented that he felt the art world was adrift because there were no longer great critics like Harold Rosenberg and Clement Greenberg to provide theories of art that could fructify the art and energize it. Another friend made a similar argument about politics. While the parties debate how much to cut the budget, he wondered where were the John Rawls and Robert Nozick who could elevate our politics.

One could certainly make the same argument about economics, where John Maynard Keynes remains the center of debate nearly 80 years after propounding his theory of government pump priming. This isn’t to say that the successors of Rosenberg, Rawls and Keynes don’t exist, only that if they do, they are not likely to get traction in a culture that has so little use for ideas, especially big, exciting, dangerous ones, and that’s true whether the ideas come from academics or others who are not part of elite organizations and who challenge the conventional wisdom. All thinkers are victims of information glut, and the ideas of today’s thinkers are also victims of that glut.

But it is especially true of big thinkers in the social sciences like the cognitive psychologist Steven Pinker, who has theorized on everything from the source of language to the role of genetics in human nature, or the biologist Richard Dawkins, who has had big and controversial ideas on everything from selfishness to God, or the psychologist Jonathan Haidt, who has been analyzing different moral systems and drawing fascinating conclusions about the relationship of morality to political beliefs. But because they are scientists and empiricists rather than generalists in the humanities, the place from which ideas were customarily popularized, they suffer a double whammy: not only the whammy against ideas generally but the whammy against science, which is typically regarded in the media as mystifying at best, incomprehensible at worst. A generation ago, these men would have made their way into popular magazines and onto television screens. Now they are crowded out by informational effluvium.

No doubt there will be those who say that the big ideas have migrated to the marketplace, but there is a vast difference between profit-making inventions and intellectually challenging thoughts. Entrepreneurs have plenty of ideas, and some, like Steven P. Jobs of Apple, have come up with some brilliant ideas in the “inventional” sense of the word.

Still, while these ideas may change the way we live, they rarely transform the way we think. They are material, not ideational. It is thinkers who are in short supply, and the situation probably isn’t going to change anytime soon.

We have become information narcissists, so uninterested in anything outside ourselves and our friendship circles or in any tidbit we cannot share with those friends that if a Marx or a Nietzsche were suddenly to appear, blasting his ideas, no one would pay the slightest attention, certainly not the general media, which have learned to service our narcissism.

What the future portends is more and more information — Everests of it. There won’t be anything we won’t know. But there will be no one thinking about it.

Think about that.”

Neal Gabler, a professor, journalist, author, film critic and political commentator, The Elusive Big Idea, The New York Times, August 14, 2011.

See also:

☞ The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Mark Pagel, Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers, Edge, Lapidarium, Dec 16, 2011
The Paradox of Contemporary Cultural History. We are clinging as never before to the familiar in matters of style and culture

Jul
30th
Sat
permalink

Stewart Brand: ‘Look At the World Through the Eyes Of A Fool’

                                  

Q: Has society become too eager to discard things and ideas?

(…) I think we have become too shortsighted. Everything is moving faster, everybody is multitasking. Investments are made for short-term returns, democracies run on short-term election cycles. Speedy progress is great, but it is also chancy. When everything is moving fast, the future looks like it is next week. But what really counts is the future ten or hundred years from now. And we should also bear in mind that the history that matters is not only yesterday’s news but events from a decade or a century or a millennium ago. To balance that, we want to look at the long term: the last ten thousand years, the next ten thousand years. (…)

When NASA released the first photographs of the earth from space in the 1960s, people changed their frame of reference. We began to think differently about the earth, about our environment, about humanity. (…)

There had been many drawings of the earth from space, just like people made images of cities from above before we had hot-air balloons. But they were all wrong. Usually, images of the earth did not include any clouds, no weather, no climate. They also tended to neglect the shadow that much of the earth is usually in. From most angles, the earth appears as a crescent. Only when the sun is directly behind you would you see the whole planet brightly illuminated against the blackness of space. (…)

The question of framing

I think there is always the question of framing: How do we look at things? The first photos of the earth changed the frame. We began to talk more about “humans” and less about Germans or Americans. We began to talk about the planet as a whole. That, in a way, gave us the ability to think about global problems like climate change. We did not have the idea of a global solution before. Climate change is a century-sized problem. Never before has humanity tried to tackle something on such a long temporal scale. Both the large scale and the long timeframe have to be taken seriously.

Q: Do you believe in something like a human identity?

In a way, the ideal breakthrough would be to discover alien life. That would give us a clear sense of our humanity. But even without that, we have done pretty well in stepping outside our usual frame of reference and looking at the planet and at the human race from the outside. That’s nice. I would prefer if we didn’t encounter alien intelligence for a while. (…)

Q: So we have to improve the extrapolations and predictions that we make based on present data sets?

We like to think that we are living in a very violent time, that the future looks dark. But the data says that violence has declined every millennium, every century, every decade. The reduction in cruelty is just astounding. So we should not focus too much on the violence that has marked the twentieth century. The interesting question is how we can continue that trend of decreasing violence into the future. What options are open to us to make the world more peaceful? Those are data-based questions. (…)

Q: When you started to publish the Whole Earth Catalogue in 1968, you said that you wanted to create a database so that “anyone on Earth can pick up a telephone and find out the complete information on anything.” Is that the idea of the internet, before the internet?

Right, I had forgotten about that quote. Isn’t it nice that I didn’t have to go through the work of collecting that information, it just happened organically. Some people say to me that I should revive the catalogue and my answer is: The internet is better than any catalogue or encyclopedia could ever be. (…)

I don’t think the form determines the triviality of information or the level of discussion. With many more opportunities and much lower costs of online participation, we are in a position to really expand and improve those discourses. (…)

When Nicholas Negroponte said a few years ago that every child in the world needed a laptop computer, he was right. Many people were skeptical of his idea, but they have been proven wrong. When you give internet access to people in the developing world, they immediately start forming educational networks. They expand their horizons, children teach their parents how to read and write. (…)

Q: On the back cover of the 1974 Whole Earth Catalogue, it said something similar: “Stay hungry, stay foolish”. Why?

It proposes that a beginner’s mind is the way to look at new things. We need a combination of confidence and of curiosity. It is a form of deep-seated opportunism that goes to the core of our nature and is very optimistic. I haven’t been killed by my foolishness yet, so let’s keep going, let’s take chances. The phrase expresses that our knowledge is always incomplete, and that we have to be willing to act on imperfect knowledge. That allows you to open your mind and explore. It means putting aside the explanations provided by social constructs and ideologies.

I really enjoyed your interview with Wade Davis. He makes a persuasive case for allowing native peoples to keep their cultures intact. That’s the idea behind the Rosetta Project as well. Most Americans are limited by the fact that they only speak one language. Being multilingual is a first step to being more aware of different perspectives on the world. We should expand our cognitive reach. I think there are many ways to do that: Embrace the internet. Embrace science. Travel a lot. Learn about people who are unlike yourself. I spent much of my twenties with Native American tribes, for example. You miss a lot of important stuff if you only follow the beaten path. If you look at the world through the eyes of a fool, you will see more. But I probably hadn’t thought about all of this back in 1974. It was a very countercultural move.

Q: In politics, we often talk about policies that supposedly have no rational alternative. Is that a sign of the stifling effects of ideology?

Ideologies are stories we like to tell ourselves. That’s fine, as long as we remember that they are stories and not accurate representations of the world. When the story gets in the way of doing the right thing, there is something wrong with the story. Many ideologies involve the idea of evil: evil people, evil institutions, et cetera. Marvin Minsky once said to me that the only real evil is the idea of evil. Once you let that go, the problems become manageable. The idea of pragmatism is that you go with the things that work and cast aside lovely and lofty theories. No theory can be coherent and comprehensive enough to provide a direct blueprint for practical actions. That’s the idea of foolishness again: You work with imperfect theories, but you don’t base your life on them.

Q: So “good” is defined in terms of a pragmatic assessment of “what works”?

Good is what creates more life and more options. That’s a useful frame. The opposite of that would not be evil, but less life and fewer options.”

Stewart Brand, American writer, best known as editor of the Whole Earth Catalog, "Look At the World Through the Eyes Of A Fool", The European, 30.05.2011

See also: Whole Earth Catalogue

Jun
20th
Mon
permalink

The Argumentative Theory: ‘Reason evolved to win arguments, not seek truth’

                    

"For centuries thinkers have assumed that the uniquely human capacity for reasoning has existed to let people reach beyond mere perception and reflex in the search for truth. Rationality allowed a solitary thinker to blaze a path to philosophical, moral and scientific enlightenment.

Now some researchers are suggesting that reason evolved for a completely different purpose: to win arguments. Rationality, by this yardstick (and irrationality too, but we’ll get to that) is nothing more or less than a servant of the hard-wired compulsion to triumph in the debating arena. According to this view, bias, lack of logic and other supposed flaws that pollute the stream of reason are instead social adaptations that enable one group to persuade (and defeat) another. (…)

The idea, labeled the argumentative theory of reasoning, is the brainchild of French cognitive social scientists, and it has stirred excited discussion (and appalled dissent) among philosophers, political scientists, educators and psychologists, some of whom say it offers profound insight into the way people think and behave. The journal Behavioral and Brain Sciences devoted its April issue to debates over the theory, with participants challenging everything from the definition of reason to the origins of verbal communication.

“Reasoning doesn’t have this function of helping us to get better beliefs and make better decisions,” said Hugo Mercier, who is a co-author of the journal article, with Dan Sperber. “It was a purely social phenomenon. It evolved to help us convince others and to be careful when others try to convince us.” Truth and accuracy were beside the point.

Indeed, Mr. Sperber, a member of the Jean-Nicod research institute in Paris, first developed a version of the theory in 2000 to explain why evolution did not make the manifold flaws in reasoning go the way of the prehensile tail and the four-legged stride. Looking at a large body of psychological research, Mr. Sperber wanted to figure out why people persisted in picking out evidence that supported their views and ignored the rest — what is known as confirmation bias — leading them to hold on to a belief doggedly in the face of overwhelming contrary evidence.

Other scholars have previously argued that reasoning and irrationality are both products of evolution. But they usually assume that the purpose of reasoning is to help an individual arrive at the truth, and that irrationality is a kink in that process, a sort of mental myopia. Gary F. Marcus, for example, a psychology professor at New York University and the author of “Kluge: The Haphazard Construction of the Human Mind,” says distortions in reasoning are unintended side effects of blind evolution. They are a result of the way that the brain, a Rube Goldberg mental contraption, processes memory. People are more likely to remember items they are familiar with, like their own beliefs, rather than those of others.

What is revolutionary about argumentative theory is that it presumes that since reason has a different purpose — to win over an opposing group — flawed reasoning is an adaptation in itself, useful for bolstering debating skills.

Mr. Mercier, a post-doctoral fellow at the University of Pennsylvania, contends that attempts to rid people of biases have failed because reasoning does exactly what it is supposed to do: help win an argument.

“People have been trying to reform something that works perfectly well,” he said, “as if they had decided that hands were made for walking and that everybody should be taught that.”

Think of the American judicial system, in which the prosecutors and defense lawyers each have a mission to construct the strongest possible argument. The belief is that this process will reveal the truth, just as the best idea will triumph in what John Stuart Mill called the “marketplace of ideas.” (…)

Patricia Cohen, writer, journalist, Reason Seen More as Weapon Than Path to Truth, The New York Times, June 14, 2011.

"Imagine, at some point in the past, two of our ancestors who can’t reason. They can’t argue with one another. And basically as soon as they disagree with one another, they’re stuck. They can’t try to convince one another. They are bound to keep not cooperating, for instance, because they can’t find a way to agree with each other. And that’s where reasoning becomes important.
                                 
We know that in the evolutionary history of our species, people collaborated a lot. They collaborated to hunt, they collaborated to gather food, and they collaborated to raise kids. And in order to be able to collaborate effectively, you have to communicate a lot. You have to tell other people what you want them to do, and you have to tell them how you feel about different things.
                                 
But then once people start to communicate, a host of new problems arise. The main problem posed by communication in an evolutionary context is that of deceiving interlocutors. When I am talking to you, if you accept everything I say then it’s going to be fairly easy for me to manipulate you into doing things that you shouldn’t be doing. And as a result, people have a whole suite of mechanisms that are called epistemic vigilance, which they use to evaluate what other people tell them.
                                 
If you tell me something that disagrees with what I already believe, my first reaction is going to be to reject what you’re telling me, because otherwise I could be vulnerable. But then you have a problem. If you tell me something that I disagree with, and I just reject your opinion, then maybe actually you were right and maybe I was wrong, and you have to find a way to convince me. This is where reasoning kicks in. You have an incentive to convince me, so you’re going to start using reasons, and I’m going to have to evaluate these reasons. That’s why we think reasoning evolved. (…)

We predicted that reasoning would work rather poorly when people reason on their own, and that is the case. We predicted that people would reason better when they reason in groups of people who disagree, and that is the case. We predicted that reasoning would have a confirmation bias, and that is the case. (…)

The starting point of our theory was this contrast between all the results showing that reasoning doesn’t work so well and the assumption that reasoning is supposed to help us make better decisions. But this assumption was not based on any evolutionary thinking; it was just an intuition, probably a cultural one: in the West, people think that reasoning is a great thing. (…)

What’s important to keep in mind is that reasoning is used here in a very technical sense. Not only laymen, but philosophers and sometimes psychologists, tend to use “reasoning” in an overly broad way, in which reasoning can mean basically anything you do with your mind.

By contrast, the way we use the term “reasoning” is very specific. And we’re only referring to what reasoning is supposed to mean in the first place, when you’re actually processing reasons. Most of the decisions we make, most of the inferences we make, we make without processing reasons. (…) When you’re shopping for cereal at the supermarket, you just grab a box not because you’ve reasoned through all the alternatives, but because it’s the one you always buy. You’re just doing the same thing as before. There is no reasoning involved in that decision. (…)

It’s only when you’re considering reasons, reasons to do something, reasons to believe, that you’re reasoning. If you’re just coming up with ideas without reasons for these ideas, then you’re using your intuitions.”

The Argumentative Theory. A Conversation with Hugo Mercier, Edge, 4.27.2011

"Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis.

Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. (…) p.1

Some of the evidence reviewed here shows not only that reasoning falls short of delivering rational beliefs and rational decisions reliably, but also that, in a variety of cases, it may even be detrimental to rationality. Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or their actions. The argumentative theory, however, puts such well-known demonstrations of “irrationality” in a novel perspective. Human reasoning is not a profoundly flawed general mechanism; it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels. (…)

People are good at assessing arguments and are quite able to do so in an unbiased way, provided they have no particular axe to grind. In group reasoning experiments where participants share an interest in discovering the right answer, it has been shown that truth wins. (…) p.58

What makes [Sherlock] Holmes such a fascinating character is precisely his preternatural turn of mind operating in a world rigged by Conan Doyle, where what should be inductive problems in fact have deductive solutions. More realistically, individuals may develop some limited ability to distance themselves from their own opinion, to consider alternatives and thereby become more objective. Presumably this is what the 10% or so of people who pass the standard Wason selection task do. But this is an acquired skill and involves exercising some imperfect control over a natural disposition that spontaneously pulls in a different direction. (…)” p. 60

Hugo Mercier, postdoc in the Philosophy, Politics and Economics program at the University of Pennsylvania, and Dan Sperber, French social and cognitive scientist, Why do humans reason? Arguments for an argumentative theory, (pdf) Cambridge University Press 2011, published in Behavioral and Brain Sciences (Illustration source)

See also:

☞ Dan Sperber, Hugo Mercier, Reasoning as a Social Competence (pdf), in Collective Wisdom, Landemore, H. and Elster, J. (Eds.)
☞ Hugo Mercier, On the Universality of Argumentative Reasoning, Journal of Cognition and Culture, Vol. 11, pp. 85–113, 2011

Jun
5th
Sun
permalink

Vlatko Vedral: Decoding Reality: the universe as quantum information

                  

Everything in our reality is made up of information. From the evolution of life to the dynamics of social ordering to the functioning of quantum computers, all of these can be understood in terms of bits of information. We saw that in order to capture all the latest elements of reality we needed to extend Claude Shannon's original notion of information, and upgrade his notion from bits to quantum bits, or qubits. Qubits incorporate the fact that in quantum theory outcomes to our measurements are intrinsically random.

But where do these qubits come from? Quantum theory allows us to answer this question; but the answer is not quite what we expected. It suggests that these qubits come from nowhere! There is no prior information required in order for information to exist. Information can be created from emptiness. In presenting a solution to the sticky question of ‘law without law’ we find that information breaks the infinite chain of regression in which we always seem to need a more fundamental law to explain the current one. This feature of information, ultimately coming from our understanding of quantum theory, is what distinguishes information from any other concept that could potentially unify our view of reality, such as matter or energy. Information is, in fact, unique in this respect. (…) p. 215

This book will argue that information (and not matter or energy or love) is the building block on which everything is constructed. Information is far more fundamental than matter or energy because it can be successfully applied to both macroscopic interactions, such as economic and social phenomena, and, as I will argue, information can also be used to explain the origin and behaviour of microscopic interactions such as energy and matter.

The question of everything from nothing, creation ex nihilo

As pointed out by David Deutsch and John Archibald Wheeler, however, whatever candidate is proposed for the fundamental building block of the Universe, it still needs to explain its ‘own’ ultimate origin too. In other words, the question of everything from nothing, creation ex nihilo, is key. So if, as I claim, information is this common thread, the question of creation ex nihilo reduces to explaining how some information arises out of no information. Not only will I show how this is possible, I will also argue that information, in contrast to matter and energy, is the only concept that we currently have that can explain its own origin. (…) p.10

This desire to compress information and the natural increase of information in the Universe may initially seem like independent processes, but as we will explore in much more detail later there may be a connection. As we compress and find all-encompassing principles describing our reality, it is these principles that then indicate how much more information there is in our Universe to find. In the same way that Feuerbach states that ‘Man first creates God, and then God creates Man’, we can say that we compress information into laws from which we construct our reality, and this reality then tells us how to further compress information. (…)

I believe this view of reality being defined through information compression is closer to the spirit of science as well as its practice. (…) It is also closer to the scientific meaning of information in that information reflects the degree of uncertainty in our knowledge of a system. (…)

Information is the underlying thread that connects all phenomena we see around us as well as explaining their origin. Our reality is ultimately made up of information. (…) p. 12-13

Information is the language Nature uses to convey its messages and this information comes in discrete units. We use these units to construct our reality. (…) p. 23

Do we define information as a quantity which we can use to do something useful or could we still call it information even if it wasn’t of any use to us? Is information objective or is it subjective? For example, would the same message or piece of news carry the same information for two different people? Is information inherently human or can animals also process information? Going even beyond this, is it a good thing to have a lot of information and to be able to process it quickly or can too much information drown you? These questions all add some colour and vigour to the challenge of achieving an agreed and acceptable definition of information.

The second trouble with information is that, once defined in a rigorous manner, it is measured in a way that is not easy to convey without mathematics. You may be very surprised to hear that even scientists balk at the thought of yet another equation. (…) p. 26-27

By stripping away all irrelevant details we can distil the essence of what information means. (…) Unsurprisingly, we find the basis of our modern concept of information in Ancient Greece. The Ancient Greeks laid the groundwork for its definition when they suggested that the information content of an event somehow depends only on how probable this event really is. Philosophers like Aristotle reasoned that the more surprised we are by an event the more information the event carries. By this logic, having a clear sunny autumn day in England would be a very surprising event, whilst experiencing drizzle randomly throughout this period would not shock anyone. This is because it is very likely, that is, the probability is high, that it will rain in England at any given instant of time. From this we can conclude that less likely events, the ones for which the probability of happening is very small, are those that surprise us more and therefore are the ones that carry more information.

Following this logic, we conclude that information has to be inversely proportional to probability, i.e. events with smaller probability carry more information. In this way, information is reduced to only probabilities and in turn probabilities can be given objective meaning independent of human interpretation or anything else (meaning that whilst you may not like the fact that it rains a lot in England, there is simply nothing you can do to change its probability of occurrence). (…) p. 29
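
One way to make this relationship concrete is Shannon's later formula for the information content (or "surprisal") of an event, the logarithmic version of the inverse-probability idea sketched above. A minimal illustration, with made-up probabilities rather than real weather data:

    import math

    def surprisal_bits(p):
        # Information content, in bits, of an event with probability p:
        # the rarer the event, the more information it carries.
        return -math.log2(p)

    print(surprisal_bits(0.9))   # drizzle in England: ~0.15 bits, barely news
    print(surprisal_bits(0.05))  # a clear sunny autumn day: ~4.3 bits, genuinely surprising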

As we saw in the initial chapter on creation ex nihilo, the fundamental question is why there is any information in the first place. For the replication of life we saw that we needed four main components, the protein synthesizer machine [a universal constructing machine], M, the DNA Xerox copier X, the enzymes which act as controllers, C, and the DNA information set [the set of instructions required to construct these three], I. (…) With these it is possible to then create an entity that self-replicates indefinitely.

A macromolecule responsible for storing the instructions, I, in living systems is called DNA. DNA has four bases: A, C, T, and G. When DNA replicates inside our cells, each base has a specific pairing partner. There is huge redundancy in how bases are combined to form amino acid chains. This is a form of error correction. The digital encoding mechanism of DNA ensures that the message gets propagated with high fidelity. Random mutations aided by natural selection necessarily lead to an increase in complexity of life.

The process of creating biological information from no prior biological information is another example of the question of creation ex nihilo. Natural selection does not tell us where biological information comes from – it just gives us a framework of how it propagates. (…) p. 54-55

My argument is that life paradoxically ends not when it underdoses on fuel, but, more fundamentally, when it overdoses on ‘information’ (i.e. when it reaches a saturation point and can no longer process any further information). We have all experienced instances where we feel we cannot absorb any more information. (…)

The Second Law of thermodynamics tells us that in physical terms, a system reaches its death when it reaches its maximum disorder (i.e. it contains as much information as it can handle). This is sometimes (cheerfully) referred to as thermal death, which could really more appropriately be called information overload. This state of maximum disorder is when life effectively becomes a part of the rest of the lifeless Universe. Life no longer has any capacity to evolve and remains entirely at the mercy of the environment. (…) p. 58-59

Physical entropy, which describes how disordered a system is, tends to increase with time. This is known as the Second Law of thermodynamics. The increasing complexity of life is driven by the overall increase in disorder in the Universe. (…) p. 76
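
A small numerical sketch of what "maximum disorder" means in informational terms, using Shannon's entropy formula as a stand-in for the thermodynamic quantity (the distributions below are illustrative, not taken from the book):

    import math

    def entropy_bits(probs):
        # H = -sum p*log2(p): the average information content, a measure of disorder.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy_bits([1.0, 0.0, 0.0, 0.0]))      # 0.0 bits: perfectly ordered
    print(entropy_bits([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits: partly ordered
    print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: maximum disorder for four states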

Mutual information

This concept is very important in understanding a diverse number of phenomena in Nature and will be the key when we explain the origin of structure in any society.

Mutual information is the formal word used to describe the situation when two (or more) events share information about one another. Having mutual information between events means that they are no longer independent; one event has something to tell you about the other. For example, when someone asks if you’d like a drink in a bar, how many times have you replied ‘I’ll have one if you have one’? This statement means that you are immediately correlating your actions with the actions of the person offering you a drink. If they have a drink, so will you; if they don’t, neither will you. Your choice to drink-or-not-to-drink is completely tied to theirs and hence, in information theory parlance, you both have maximum mutual information.

A little more formally, the whole presence of mutual information can be phrased as an inference indicator. Two things have mutual information if by looking at just one of them you can infer something about one of the properties of the other one. So, in the above example, if I see that you have a drink in front of you that means logically that the person offering you a drink also has a glass in front of them (given that you only drink when the person next to you drinks). (…)
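
The drink-for-a-drink example can be put into numbers. A minimal sketch, assuming the standard definition of mutual information over a joint probability table (the probabilities themselves are illustrative):

    import math

    def mutual_information_bits(joint):
        # I(X;Y) from a joint distribution given as {(x, y): probability}.
        px, py = {}, {}
        for (x, y), p in joint.items():
            px[x] = px.get(x, 0.0) + p
            py[y] = py.get(y, 0.0) + p
        return sum(p * math.log2(p / (px[x] * py[y]))
                   for (x, y), p in joint.items() if p > 0)

    # 'I'll have one if you have one': the two choices always match,
    # so observing one tells you everything about the other.
    locked_together = {("drink", "drink"): 0.5, ("none", "none"): 0.5}
    independent = {(a, b): 0.25 for a in ("drink", "none") for b in ("drink", "none")}

    print(mutual_information_bits(locked_together))  # 1.0 bit: maximum mutual information
    print(mutual_information_bits(independent))      # 0.0 bits: knowing one tells you nothing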

Whenever we discuss mutual information we are really asking how much information an object/person/idea has about another object/person/idea. (…)

When it comes to DNA, its molecules share information about the protein they encode. Different strands of DNA share information about each other as well (we know that A only binds to T and C only binds to G). Furthermore the DNA molecules of different people also share information about one another (a father and a son, for example, share half of their DNA genetic material) and the DNA is itself sharing information with the environment – in that the environment determines through natural selection how the DNA evolves. (…)

One of the phenomena we will try to understand here, using mutual information, is what we call ‘globalization’, or the increasing interconnectedness of disparate societies. (…)

Before we delve further into social phenomena, I need to explain an important concept in physics called a phase transition. Stated somewhat loosely, phase transitions occur in a system when the information shared between the individual constituents becomes large (so for a gas in a box, for an iron rod in a magnetic field, and for a copper wire connected into an electric circuit, all their constituents share some degree of mutual information).

A high degree of mutual information often leads to a fundamentally different behaviour, although the individual constituents are still the same. To elaborate this point, the individual constituents are not affected on an individual basis, but as a group they exhibit entirely different behaviour. The key is how the individual constituents relate to one another and create a group dynamic. This is captured by the phrase ‘more is different’, coined by the physicist Philip Anderson, who contributed a great deal to the subject, culminating in his Nobel Prize in 1977.

A common example of a group dynamic is the effect we observe when boiling or freezing water (i.e. conversion of a liquid to a gas or conversion of a liquid to a solid). These extreme and visible changes of structures and behaviour are known as phase transitions. When water freezes, the phase transition occurs as the water molecules become more tightly correlated and these correlations manifest themselves in stronger molecular bonds and a more solid structure. The formation of societies and significant changes in every society – such as a revolution or a civil war or the attainment of democracy – can, in fact, be better understood using the language of phase transitions.

I now present one particular example that will explain phase transitions in more detail. This example will then act as our model to explain various social phenomena that we will tackle later in the chapter. Let us imagine a simple solid, made up of a myriad of atoms (billions and billions of them). Atoms usually interact with each other, although these interactions hardly ever stretch beyond their nearest neighbours. So, atoms next to each other will feel each other’s presence only, while the ones that are further apart from each other will typically never directly exchange any information.

It would now be expected that as a result of the ‘nearest neighbour’ interaction, only the atoms next to each other share information while this is not possible where there is no interaction. Though this may sound logical, it is in fact entirely incorrect. Think of a whip: you shake one end and this directly influences the speed and range at which the other end moves. You are transferring movement using the interconnectedness of atoms in the whip. Information can be shared between distant atoms because one atom interacts with its neighbours, but the neighbours also interact with their neighbours, and so on. This concept can be explained more elegantly through the concept of ‘six degrees of separation’. You often see it claimed that each person on this planet is at most six people away from any other person. (…) p. 94-97

Why is this networking between people important? You might argue that decisions made by society are to a high degree controlled by individuals – who ultimately think for themselves. It is clear, however, that this thinking is based on the common information shared between individuals. It is this interaction between individuals that is responsible for the different structures within society as well as society itself. (…) In this case, the information shared between individuals becomes much more important. So how do all people agree to make a decision, if they only interact locally, i.e. with a very limited number of neighbours?

In order to understand how local correlations can lead to the establishment of structures within society, let us return to the example of a solid. Solids are regular arrays of atoms. This time, however, rather than talking about how water becomes ice, let’s consider how a solid becomes a magnet. Every atom in a solid can be thought of as a little magnet on its own. Initially these magnets are completely independent of one another and there is no common north/south alignment – meaning that they are all pointing in random directions. The whole solid – the whole collection of atoms – would then be a random collection of magnets and would not be magnetized as a whole (this is known as a paramagnet). All the random little atomic magnets would simply cancel each other out in effect and there would be no net magnetic field.

However, if the atoms interact, then they can affect each other’s state, i.e. they can cause their neighbours to line up with them. Now through the same principle as six degrees of separation, each atom affects the other atoms it is connected to, and in turn these affect their own neighbours, eventually correlating all the atoms in the solid. If the interaction is stronger than the noise due to the external temperature, then all magnets will eventually align in the same direction and the solid as a whole generates a net magnetic field and hence becomes magnetic! All atoms now behave coherently in tune, just like one big magnet. The point at which all atoms ‘spontaneously’ align is known as the point of phase transition, i.e. the point at which a solid becomes a magnet. (…)
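
A toy version of this magnet can be simulated directly. The sketch below uses the standard Metropolis update on a small two-dimensional lattice of spins; the lattice size, temperatures and units are illustrative choices, not taken from the book:

    import math
    import random

    def magnetisation(L=20, T=1.5, sweeps=500):
        # Each site is a little magnet (+1 or -1) interacting only with its
        # four nearest neighbours (J = 1, k_B = 1, periodic edges).
        spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
        for _ in range(sweeps * L * L):
            i, j = random.randrange(L), random.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb  # energy cost of flipping this one spin
            if dE <= 0 or random.random() < math.exp(-dE / T):
                spins[i][j] *= -1      # accept the flip (Metropolis rule)
        return abs(sum(map(sum, spins))) / (L * L)

    print(magnetisation(T=1.5))  # below the transition: typically close to 1, the solid is a magnet
    print(magnetisation(T=3.5))  # above it: close to 0, thermal noise keeps the spins random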

You may object that atoms are simple systems compared to humans. After all humans can think, feel, get angry, while atoms are not alive and their range of behaviour is far simpler. But this is not the point! The point is that we are only focusing on one relevant property of humans (or atoms) here. Atoms are not all that simple either, but we are choosing to make them so by looking only at their magnetic properties. Humans are much more complicated still, but now we only want to know about their political preferences, and these can be quite simple in practice. (…)

Small-world network

This unevenness in the number of contacts leads to a very important model where there is a great deal of interaction with people close by and then, every once in a while, there is a long-distance interaction with someone far away. This is called a ‘small world network’ and is an excellent model for how and why disease propagates rapidly in our world. When we get ill, disease usually spreads quickly to our closest neighbours. Then it is enough that only one of the neighbours takes a long-distance flight and this can then make the virus spread in distant places. And this is why we are very worried about swine flu and all sorts of other potential viruses that can kill humans.
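
The same contrast between purely local contact and occasional long-range links can be seen in a few lines with the networkx library (assumed here for illustration; the population size and rewiring probability are arbitrary choices):

    import networkx as nx

    # A ring of 1,000 people, each in contact only with their 6 nearest neighbours...
    local_only = nx.connected_watts_strogatz_graph(1000, 6, p=0.0, seed=1)
    # ...versus the same ring with 5% of contacts rewired to someone far away.
    small_world = nx.connected_watts_strogatz_graph(1000, 6, p=0.05, seed=1)

    print(nx.average_shortest_path_length(local_only))   # roughly 84 steps between typical pairs
    print(nx.average_shortest_path_length(small_world))  # collapses to roughly 10 steps or fewer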

Let us now consider why some people believe – rightly or wrongly – that the information revolution has and will transform our society more than any other revolution in the past – such as the industrial revolution discussed in earlier chapters. Some sociologists, such as Manuel Castells, believe that the Internet will effect far more profound transformations in our society than anything previously seen in history. His logic is based on the above idea of phase transitions, though, being a sociologist, he may not be interpreting them in quite the same way as a physicist does mathematically.

To explain, we can think of early societies as very ‘local’ in nature. One tribe exists here, another over there, but with very little communication between them. Even towards the end of the nineteenth century, transfer of ideas and communication in general were still very slow. So for a long time humans have lived in societies where communication was very short range. And, in physics, this would mean that abrupt changes are impossible. Societies have other complexities, so I would say that ‘fundamental change is unlikely’ rather than ‘impossible’. Very recently, through the increasing availability of technology we can travel far and wide, and through the Internet we can learn from and communicate with virtually anyone in the world.

Early societies were like the Ising model, while later ones are more like the small world networks. Increasingly, however, we are approaching the stage where everyone can and does interact with everyone else. And this is exactly when phase transitions become increasingly more likely. Money (and even labour) can travel from one end of the globe to another in a matter of seconds or even faster. This, of course, has an effect on all elements of our society.

Analysing social structures in terms of information theory can frequently reveal very counterintuitive features. This is why it is important to be familiar with the language of information theory: without a formalized framework, some of the most startling and beautiful effects are much harder to understand in terms of root causes. (…) p. 98

Universe as a quantum computer

Konrad Zuse, the German engineer who built some of the first programmable computers around the time of World War II, was the first to view the Universe as a computer. (…) The problem, however, is that all these models assume that the Universe is a classical computer. By now we know that the Universe should instead be understood as a quantum computer.

Our reality evolves because every once in a while we find that we need to edit part of the program that describes reality. We may find that this piece of the program, based on a certain model, is refuted (the underlying model is found to be inaccurate), and hence the program needs to be updated. Refuting a model and changing a part of the program is, as we saw, crucial to changing reality itself because refutations carry much more information than simply confirming a model. (…) p. 192

We can construct our whole reality in this way by looking at it in terms of two distinct but inter-related arrows of knowledge. We have the spontaneous creation of mutual information in the Universe as events unfold, without any prior cause. This kicks off the interplay between the two arrows. On the one hand, through our observations and a series of conjectures and refutations, we compress the information in the Universe into a set of natural laws. These laws are the shortest programs to represent all our observations. On the other hand, we run these programs to generate our picture of reality. It is this picture that then tells us what is, and isn’t, possible to accomplish, in other words, what our limitations are.

The Universe starts empty but potentially with a huge amount of information. The key event that gives the Universe some direction is the first act of ‘symmetry breaking’, the first cut of the sculptor. This act, which we consider as completely random, i.e. without any prior cause, just decides on why one tiny aspect in the Universe is one way rather than another. This first event sets in motion a chain reaction in which, once one rule has been decided, the rest of the Universe needs to proceed in a consistent manner. (…)

This is where the first arrow of knowledge begins. We compress the spontaneous, yet consistent information in the Universe, into a set of natural laws that continuously evolve as we test and discard the erroneous ones. Just as man evolved through a compression of biological information (a series of optimizations for the changing environment), our understanding of the Universe (our reality) has also evolved as we better synthesize and compress the information that we are presented with into more and more accurate laws of Nature. This is how the laws of Nature emerge, and these are the physical, biological, and social principles that our knowledge is based on.

The second arrow of knowledge is the flip-side to the first arrow. Once we have the laws of Nature, we explore their meaning in order to define our reality, in terms of what is and isn’t possible within it. It is a necessary truth that whatever our reality, it is based exclusively on our understanding of these laws. For example, if we have no knowledge of natural selection, all of the species look independently created and without any obvious connection. Of course this is all dynamic in that when we find an event that doesn’t fit our description of reality, then we go back and change the laws, so that the subsequently generated reality also explains this event.

The basis for these two arrows is the darkness of reality, a void from which they were created and within which they operate. Following the first arrow, we ultimately arrive at nothing (ultimately there is no reality, no law without law). The second arrow then lifts us from this nothingness and generates a picture of reality as an interconnected whole.

So our two arrows seem to point in opposite directions to one another. The first compresses the information available into succinct knowledge and the second decompresses the resulting laws into a colourful picture of reality. In this sense our whole reality is encoded into the set of natural laws. We already said that there was an overall direction for information flow in the Universe, i.e. that entropy (disorder) in the Universe can only increase. This gives us a well defined directionality to the Universe, commonly known as the ‘arrow of time’. (…)

The first arrow of knowledge clearly acts like a Maxwell’s demon. It constantly combats the arrow of time and tirelessly compresses disorder into something more meaningful. It connects seemingly random and causeless events into a string of mutually inter-related facts. The second arrow of knowledge, however, acts in the opposite direction of increasing the disorder. By changing our view of reality it instructs us that there are more actions we can take within the new reality than we could with the previous, more limited view.

Within us, within all objects in the Universe, lie these two opposing tendencies. So, is this a constant struggle between new information and hence disorder being created in the Universe, and our efforts to order this into a small set of rules? If so, is this a losing battle? (…)

Scientific knowledge proceeds via a dialogue with Nature. We ask ‘yes-no’ questions through our observations of various phenomena.

Information in this way is created out of no information. By taking a stab in the dark we set a marker which we can then use to refine our understanding by asking such ‘yes-no’ questions. (…)

The whole of our reality emerges by first using the conjectures and refutations to compress observations and then from this compression we deduce what is and isn’t possible. (…) p. 211-214

Viewing reality as information leads us to recognize two competing trends in its evolution. These trends, or let’s call them arrows, work hand in hand, but point in opposite directions. The first arrow orders the world against the Second Law of thermodynamics and compresses all the spontaneously generated information in the Universe into a set of well-defined principles. The second arrow then generates our view of reality from these principles.

It is clear that the more efficient we are in compressing all the spontaneously generated information, the faster we can expand our reality of what is and isn’t possible. But without the second arrow, without an elementary view of our reality, we cannot even begin to describe the Universe. We cannot access parts of the Universe that have no corresponding basis in our reality. After all, whatever is outside our reality is unknown to us. (…)

By exploring our reality we better understand how to look for and compress the information that the Universe produces. This in turn then affects our reality. Everything that we have understood, every piece of knowledge, has been acquired by feeding these two arrows into one another. Whether it is biological propagation of life, astrophysics, economics, or quantum mechanics, these are all a consequence of our constant re-evaluation of reality. So it’s clear that not only does the second arrow depend on the first, it is natural that the first arrow also depends on the second. (…)

We compress information to generate our laws of Nature, and then use these laws of Nature to generate more information, which then gets compressed back into upgraded laws of Nature.

The dynamics of the two arrows is driven by our desire to understand the Universe. As we drill deeper and deeper into our reality we expect to find a better understanding of the Universe. We believe that the Universe to some degree behaves independently of us and the Second Law tells us that the amount of information in the Universe is increasing. But what if with the second arrow, which generates our view of reality, we can affect parts of the Universe and create new information? In other words, through our existence could we affect the Universe within which we exist? This would make the information generated by us a part of the new information the Second Law talks about.

A scenario like this presents no conceptual problem within our picture. This new information can also be captured by the first arrow, as it fights, through conjectures and refutations, to incorporate any new information into the basic laws of Nature. However, could it be that there is no other information in the Universe than that generated by us as we create our own reality?

This leads us to a startling possibility. If indeed the randomness in the Universe, as demonstrated by quantum mechanics, is a consequence of our generation of reality then it is as if we create our own destiny. It is as if we exist within a simulation, where there is a program that is generating us and everything that we see around us. Think back to the movie The Matrix, where Keanu Reeves lives in a simulation until he is offered a way out, a way back into reality. If the randomness in the Universe is due to our own creation of reality, then there is no way out for us. This is because, in the end, we are creators of our own simulation. In such a scenario, Reeves would wake up in his reality only to find himself sitting at the desk programming his own simulation. This closed loop was echoed by John Wheeler, who said: ‘physics gives rise to observer-participancy; observer-participancy gives rise to information; information gives rise to physics.’

But whether reality is self-simulating (and hence there is no Universe required outside of it) is, by definition, something that we will never know. What we can say, following the logic presented in this book, is that outside of our reality there is no additional description of the Universe that we can understand, there is just emptiness. This means that there is no scope for the ultimate law or supernatural being – given that both of these would exist outside of our reality and in the darkness. Within our reality everything exists through an interconnected web of relationships and the building blocks of this web are bits of information. We process, synthesize, and observe this information in order to construct the reality around us. As information spontaneously emerges from the emptiness we take this into account to update our view of reality. The laws of Nature are information about information and outside of it there is just darkness. This is the gateway to understanding reality.

And I finish with a quote from the Tao Te Ching, which, some 2,500 years earlier, seems to have beaten me to the punch-line:

The Tao that can be told is not the eternal Tao.
The name that can be named is not the eternal name.
The nameless is the beginning of heaven and earth.
The named is the mother of the ten thousand things.
Ever desireless, one can see the mystery.
Ever desiring, one sees the manifestations.
These two spring from the same source but differ in name; this appears as darkness.
Darkness within darkness.
The gate to all mystery.

pp. 215-218

Vlatko Vedral, Professor of Physics at the University of Oxford and CQT (Centre for Quantum Technologies) at the National University of Singapore, Decoding Reality: The Universe as Quantum Information, Oxford University Press, 2010 (Illustration source)

Vlatko Vedral: Everything is information

Physicist Vlatko Vedral explains to Aleks Krotoski why he believes the fundamental stuff of the universe is information and how he hopes that one day everything will be explained in this way.

"In Decoding Reality, Vedral argues that we should regard the entire universe as a gigantic quantum computer. Wacky as that may sound, it is backed up by hard science. The laws of physics show that it is not only possible for electrons to store and flip bits: it is mandatory. For more than a decade, quantum-information scientists have been working to determine just how the universe processes information at the most microscopic scale." — The universe is a quantum computer, New Scientist, 22 March 2010

See also:

☞ Vlatko Vedral, Living in a Quantum World (pdf), Scientific American, 2011
☞ Mark Buchanan, Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, New Scientist, 05 Sep 2011
The Concept of Laws. The special status of the laws of mathematics and physics
David Deutsch: A new way to explain explanation, TED
Stephen Hawking on the universe’s origin
The Relativity of Truth - a brief résumé
☞ Vlatko Vedral, Information and Physics, University of Oxford, National University of Singapore (2012)

Jun
2nd
Thu
permalink

David Deutsch: A new way to explain explanation

For tens of thousands of years our ancestors understood the world through myths, and the pace of change was glacial. The rise of scientific understanding transformed the world within a few centuries. Why?

"Before the scientific revolution, they believed that everything important, knowable, was already known, enshrined in ancient writings, institutions, and in some genuinely useful rules of thumb — which were, however, entrenched as dogmas, along with many falsehoods. So they believed that knowledge came from authorities that actually knew very little. And therefore progress depended on learning how to reject the authority of learned men, priests, traditions and rulers. Which is why the scientific revolution had to have a wider context: the Enlightenment, a revolution in how people sought knowledge, trying not to rely on authority. "Take no one’s word for it." (…)

What creationists and empiricists both ignore is that, in that sense, no one has ever seen a bible either, that the eye only detects light, which we don’t perceive. Brains only detect nerve impulses. And they don’t perceive even those as what they really are, namely electrical crackles. So we perceive nothing as what it really is.

Our connection to reality is never just perception. It’s always, as Karl Popper put it, theory-laden. Scientific knowledge isn’t derived from anything. It’s like all knowledge. It’s conjectural, guesswork, tested by observation, not derived from it. So, were testable conjectures the great innovation that opened the intellectual prison gates? No. Contrary to what’s usually said, testability is common, in myths and all sorts of other irrational modes of thinking. Any crank claiming the sun will go out next Tuesday has got a testable prediction. (…)

This easy variability is the sign of a bad explanation. Because, without a functional reason to prefer one of countless variants, advocating one of them, in preference to the others, is irrational. So, for the essence of what makes the difference to enable progress, seek good explanations, the ones that can’t be easily varied, while still explaining the phenomena.

Now, our current explanation of seasons is that the Earth’s axis is tilted like that, so each hemisphere tilts toward the sun for half the year, and away for the other half. Better put that up. (Laughter) That’s a good explanation: hard to vary, because every detail plays a functional role. For instance, we know, independently of seasons, that surfaces tilted away from radiant heat are heated less, and that a spinning sphere, in space, points in a constant direction. And the tilt also explains the sun’s angle of elevation at different times of year, and predicts that the seasons will be out of phase in the two hemispheres. If they’d been observed in phase, the theory would have been refuted. But now, the fact that it’s also a good explanation, hard to vary, makes the crucial difference.
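
The quantitative step behind “surfaces tilted away from radiant heat are heated less” is the standard cosine law for the flux a tilted surface intercepts; a minimal sketch, added here for reference:

\[
I = I_0 \cos\theta ,
\]

where \(I_0\) is the flux arriving face-on and \(\theta\) is the angle between the surface normal and the incoming sunlight. A hemisphere tilted away from the sun sees a larger \(\theta\), hence less heating per unit area, which is the winter half of the explanation; the fixed direction of the spinning Earth’s axis supplies the other half.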

If the ancient Greeks had found out about seasons in Australia, they could have easily varied their myth to predict that. For instance, when Demeter is upset, she banishes heat from her vicinity, into the other hemisphere, where it makes summer. So, being proved wrong by observation, and changing their theory accordingly, still wouldn’t have got the ancient Greeks one jot closer to understanding seasons, because their explanation was bad: easy to vary. And it’s only when an explanation is good that it even matters whether it’s testable. If the axis-tilt theory had been refuted, its defenders would have had nowhere to go. No easily implemented change could make that tilt cause the same seasons in both hemispheres.

The search for hard-to-vary explanations is the origin of all progress. It’s the basic regulating principle of the Enlightenment. So, in science, two false approaches blight progress. One is well known: untestable theories. But the more important one is explanationless theories. Whenever you’re told that some existing statistical trend will continue, but you aren’t given a hard-to-vary account of what causes that trend, you’re being told a wizard did it.

When you are told that carrots have human rights because they share half our genes — but not how gene percentages confer rights — wizard. When someone announces that the nature-nurture debate has been settled because there is evidence that a given percentage of our political opinions are genetically inherited, but they don’t explain how genes cause opinions, they’ve settled nothing. They are saying that our opinions are caused by wizards, and presumably so are their own. That the truth consists of hard to vary assertions about reality is the most important fact about the physical world. It’s a fact that is, itself, unseen, yet impossible to vary.

David Deutsch, Israeli-British physicist at the University of Oxford, David Deutsch: A new way to explain explanation, TED.com, July 2009 (tnx WildCat) (transcript)

See also:

David Deutsch on our place in the cosmos, (transcript), TED video

[14:23] “We can survive, and we can fail to survive. But it depends not on chance, but on whether we create the relevant knowledge in time. The danger is not at all unprecedented. Species go extinct all the time. Civilizations end. The overwhelming majority of all species and all civilizations that have ever existed are now history. And if we want to be the exception to that, then logically our only hope is to make use of the one feature that distinguishes our species, and our civilization, from all the others. Namely, our special relationship with the laws of physics. Our ability to create new explanations, new knowledge — to be a hub of existence. (…)

I’m a physicist, but I’m not the right kind of physicist. In regard to global warming, I’m just a layman. And the rational thing for a layman to do is to take seriously the prevailing scientific theory. And according to that theory, it’s already too late to avoid a disaster. Because if it’s true that our best option at the moment is to prevent CO2 emissions with something like the Kyoto Protocol, with its constraints on economic activity and its enormous cost of hundreds of billions of dollars or whatever it is, then that is already a disaster by any reasonable measure. (…)”

Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense
Why It’s Good To Be Wrong. David Deutsch on Fallibilism, Lapidarium notes

Apr
13th
Wed
permalink

When is it meaningful to say that one culture is more advanced than another?

                   
The Fractal Pattern of an African Village

"Is there a way to say that one culture is more advanced than another in a way that is not racist, ethnocentric, or uselessly broad? Some basic ideas in biology and thermodynamics may help.

For example, consider Kleiber’s Law. When it was first developed by Max Kleiber in the 1930s, it was used to describe animal metabolism. It says that the bigger the animal, the more metabolically efficient it is. An elephant that is 10,000 times the mass of a guinea pig will not consume 10,000 times as much energy. Rather, it will consume only 1,000 times as much energy. Pound for pound, it is a more efficient energy user.

This isn’t surprising. But the law also applies to cities, which is very surprising. When a city doubles in size, it consumes only about 85% as much energy per capita. It becomes more efficient. The fact that the law works across completely different entities - plants, guinea pigs, elephants, cities - makes one wonder if it applies to all organized entities. To skip a number of qualifiers and exceptions, here’s the question I’m asking: Could it be said that the more metabolically efficient a society is, the more advanced it is?
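
A minimal sketch of the arithmetic behind Kleiber’s Law, assuming the usual power-law form (the 3/4 exponent is implied by the 10,000-to-1,000 figures above):

\[
E \propto M^{3/4}
\quad\Longrightarrow\quad
\frac{E_{\text{elephant}}}{E_{\text{guinea pig}}}
= \left(\frac{M_{\text{elephant}}}{M_{\text{guinea pig}}}\right)^{3/4}
= 10{,}000^{3/4} = 1{,}000 .
\]

Pound for pound, the elephant therefore needs only \(10{,}000^{-1/4} = 1/10\) as much energy as the guinea pig. The city version is the same idea with a different, empirically fitted exponent below 1: sublinear scaling means total consumption grows more slowly than population, so per-capita consumption falls as a city grows.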

To be sure, you will puzzle over the way I am using the word “advanced.” Set that aside for a moment and consider another example. Geoffrey West of the Santa Fe Institute has shown that with each doubling of a city’s population, the inhabitants become 15% wealthier, more productive, and more innovative. If one regards a city as a distinct culture, then one could say that larger cultures are inherently more advanced than smaller ones - assuming that by “advanced” one means greater productivity per capita. (The advancement is not always to the comfort of its inhabitants; West shows that crime also goes up by 15%. That is, the criminals become more productive, too.)
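
Read as a power law, West’s figure implies superlinear scaling; a rough sketch, taking the quoted 15%-per-doubling gain at face value (the exponent is inferred here, not stated in the text):

\[
Y \propto N^{\beta}, \qquad
\frac{(Y/N)\big|_{2N}}{(Y/N)\big|_{N}} = 2^{\beta-1} \approx 1.15
\;\Longrightarrow\;
\beta \approx 1 + \log_2 1.15 \approx 1.2 ,
\]

so outputs such as wealth, productivity, and innovation (and crime) grow faster than the population itself, while energy and infrastructure, with exponents below 1, grow more slowly. That asymmetry is what makes the larger city look more “advanced” per capita on both counts.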

(For more on Kleiber’s Law and West’s Law, see Jonah Lehrer’s A Physicist Solves the City and Steven Johnson’s Where Good Ideas Come From, pp. 7-10.) (…)

As Ong explains, preliterate thought is aggregative rather than analytic, situational rather than abstract. It is very different from literate thought - but literate people still have preliterate modes of aggregation and situational thinking available to them. We can understand preliterate minds with a little bit of imaginative effort.

In short, modern minds include all of the fundamental elements of medieval minds. (Again, I don’t mean that they include medieval skills; medieval people knew far more about herbs than just about anyone today. But we retain the idea of herbology and could easily resurrect it.)

As Kevin Kelly says in What Technology Wants, nothing invented is ever abandoned. It is always carried forward. You may have to dust off some old memories and study up a bit, but you’re rediscovering rather than learning from scratch. Our medieval visitor has no such advantage.

So I want to suggest that a culture can be said to be more advanced than another if it includes most or all of the other culture’s basic elements. That gives its individuals a superior edge in understanding and communication.

I got this idea of inclusiveness, by the way, from Ken Wilber's book Sex, Ecology, Spirituality, in which he writes, “Each emergent holon transcends but includes its predecessors” (p. 59). Wilber’s point is that evolution always builds on top of what went before, incorporating preceding elements while also transcending them.

To be sure, this doesn’t mean that a more advanced culture will treat a less advanced one with decency. The Europeans used their technological superiority over Native Americans to wipe most of them out. But they at least had a mental framework with which to categorize the peoples they met, however unjustly. Europeans had spent many thousands of years living in preliterate, tribal cultures and had that experience to draw upon. (…)

It focuses on communication, the key aspect of living in a society that makes it worthwhile (or not). It’s a more humanly meaningful measure than variables like population, information content, and the number of available products.

Incidentally, it also makes diverse societies almost automatically more advanced than monocultural ones, all else equal. They simply contain more.

Non-ethnocentric, communication-oriented; that sounds pretty good, doesn’t it? But there is a hidden assumption in this reasoning that I will now make explicit. I am assuming that cultures follow a universal trajectory of development. Here’s one possible trajectory, for example: nomadic clans —> agricultural villages —> feudal towns —> city-states —> nation-states —> world-states. In this schema, each element includes all of the elements of the previous ones.

This particular trajectory seems pretty reasonable, but that’s because it’s both highly general and limited to one dimension of progress. The question is, is there a single, cross-cultural trajectory that specifies scientific, technological, moral, artistic, informational, and economic progress?

I don’t know. To be sure, counterexamples would seem to abound. Contingency is everywhere one looks. Arabs and Jews focused on abstract art while Europeans focused on representational art. Chinese medicine is holistic while Western medicine is reductive. These are generalizations, of course, but they are cases where neither includes the other.

But perhaps one just has to look at history with a broader focus. Do all cultures go through similar moral stages? Do they develop essentially similar social institutions for art? Will all cultures, given enough time, develop the computer? Is there a logic to history? (…)

We could argue, based on thermodynamics and the “inclusion” hypothesis, that there is a broad logic to history independent of biochemistry and accidents of culture. If so, then we could expect aliens to understand us, much as we’d understand our 13th-century Englishman. We probably couldn’t understand them, given our newness as a technological culture. But if they had a past that included tribalism, feudalism, nation-states, and world-states, then they’d have some idea of what we’re about. And maybe, just maybe, we’d learn of worthy futures to which we might aspire.”

Michael Chorost, American writer and teacher, Ph.D., Is There a Logic to History?, Psychology Today, April 11, 2011. (Illustration source)