Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Archive

Dec 27th, Tue

'To understand is to perceive patterns'


"Everything we care about lies somewhere in the middle, where pattern and randomness interlace."

James Gleick, The Information: A History, a Theory, a Flood, Pantheon, 2011

"Humans are pattern-seeking story-telling animals, and we are quite adept at telling stories about patterns, whether they exist or not."

Michael Shermer

"The pattern, and it alone, brings into being and causes to pass away and confers purpose, that is to say, value and meaning, on all there is. To understand is to perceive patterns. (…) To make intelligible is to reveal the basic pattern.”

Isaiah Berlin (1909-1997), British social and political theorist, philosopher and historian, The Proper Study of Mankind: An Anthology of Essays, Chatto & Windus, 1997, p. 129.

"One of the most wonderful things about the emerging global superbrain is that information is overflowing on a scale beyond what we can wrap our heads around. The electronic, collective, hive mind that we know as the Internet produces so much information that organizing this data — and extracting meaning from it — has become the conversation of our time.

Sanford Kwinter’s Far From Equilibrium tackles everything from technology to society to architecture under the thesis that creativity, catharsis, transformation and progressive breakthroughs occur far from equilibrium. So even while we may feel overwhelmed and intimidated by the informational overload and radical transformations of our times, we should, perhaps, take refuge in knowing that only good can come from this. He writes:

“(…) We accurately think of ourselves today not only as citizens of an information society, but literally as clusters of matter within an unbroken informational continuum: "We are all," as the great composer Karlheinz Stockhausen once said, "transistors, in the literal sense. We send, receive and organize [and] so long as we are vital, our principal work is to capture and artfully incorporate the signals that surround us.” (…)

Clay Shirky often refers to the “Cognitive Surplus,” the overflowing output of the billions of minds participating in the electronic infosphere. A lot of this output is silly, but a lot of it is meaningful and wonderful. The key lies in curation, which is the result of pattern-recognition put into practice. (…)

Matt Ridley’s TED Talk, “When Ideas Have Sex,” points to this intercourse of information and how it births new thought-patterns. Ideas, freed from the confines of space and time by the invisible, wireless metabrain we call The Internet, collide with one another and explode into new ideas, accelerating the collective intelligence of the species. Creativity thrives when minds come together. The last great industrial strength creative catalyst was the city: it is no coincidence that when people migrate to cities in large numbers, creativity and innovation thrive.

Now take this very idea and apply it to the web:  the web  essentially is a planetary-scale nervous system where individual minds take on the role of synapses, firing electrical pattern-signals to one another at light speed — the net effect being an astonishing increase in creative output. (…)

Ray Kurzweil, too, expounds on this idea of the power of patterns:

“I describe myself as a patternist, and believe that if you put matter and energy in just the right pattern you create something that transcends it. Technology is a good example of that: you put together lenses and mechanical parts and some computers and some software in just the right combination and you create a reading machine for the blind. It’s something that transcends the semblance of parts you’ve put together. That is the nature of technology, and it’s the nature of the human brain.

Biological molecules put in a certain combination create the transcending properties of human intelligence; you put notes and sounds together in just the right combination, and you create a Beethoven symphony or a Beatles song. So patterns have a power that transcends the parts of that pattern.”

R. Buckminster Fuller refers to us as “pattern integrities.” “Understanding order begins with understanding patterns,” he was known to say. E.J. White, who worked with Fuller, says that:

“For Fuller, the thinking process is not a matter of putting anything into the brain or taking anything out; he defines thinking as the dismissal of irrelevancies, as the definition of relationships” — in other words, thinking is simultaneously a form of filtering out the data that doesn’t fit while highlighting the things that do fit together… We dismiss whatever is an “irrelevancy” and retain only what fits, we form knowledge by ‘connecting the dots’… we understand things by perceiving patterns — we arrive at conclusions when we successfully reveal these patterns. (…)

Fuller’s primary vocation is as a poet. All his disciplines and talents — architect, engineer, philosopher, inventor, artist, cartographer, teacher — are just so many aspects of his chief function as integrator… the word “poet" is a very general term for a person who puts things together in an era of great specialization when most people are differentiating or taking things apart… For Fuller, the stuff of poetry is the patterns of human behavior and the environment, and the interacting hierarchies of physics and design and industry. This is why he can describe Einstein and Henry Ford as the greatest poets of the 20th century.” (…)

In a recent article in Reality Sandwich, Simon G. Powell proposed that patterned self-organization is a default condition of the universe:

“When you think about it, Nature is replete with instances of self-organization. Look at how, over time, various exquisitely ordered patterns crystallise out of the Universe. On a macroscopic scale you have stable and enduring spherical stars, solar systems, and spiral galaxies. On a microscopic scale you have atomic and molecular forms of organization. And on a psychological level, fed by all this ambient order and pattern, you have consciousness which also seems to organise itself into being (by way of the brain). Thus, patterned organisation of one form or another is what nature is proficient at doing over time.

This being the case, is it possible that the amazing synchronicities and serendipities we experience when we’re doing what we love, or following our passions — the signs we pick up on when we follow our bliss — represent an emerging ‘higher level’ manifestation of self-organization? To make use of an alluring metaphor, are certain events and cultural processes akin to iron filings coming under the organising influence of a powerful magnet? Is serendipity just the playing out on the human level of the same emerging, patterned self-organization that drives evolution?”

Barry Ptolemy's film Transcendent Man reminds us that the universe has been unfolding in patterns of greater complexity since the beginning of time. Says Ptolemy:

“First of all we are all patterns of information. Second, the universe has been revealing itself as patterns of information of increasing order since the big bang. From atoms, to molecules, to DNA, to brains, to technology, to us now merging with that technology. So the fact that this is happening isn’t particularly strange to a universe which continues to evolve and unfold at ever accelerating rates.”

Jason Silva, Connecting All The Dots - Jason Silva on Big Think, Imaginary Foundation, Dec 2010

"Networks are everywhere. The brain is a network of nerve cells connected by axons, and cells themselves are networks of molecules connected by biochemical reactions. Societies, too, are networks of people linked by friendships, familial relationships and professional ties. On a larger scale, food webs and ecosystems can be represented as networks of species. And networks pervade technology: the Internet, power grids and transportation systems are but a few examples. Even the language we are using to convey these thoughts to you is a network, made up of words connected by syntactic relationships.”

“For decades, we assumed that the components of such complex systems as the cell, the society, or the Internet are randomly wired together. In the past decade, an avalanche of research has shown that many real networks, independent of their age, function, and scope, converge to similar architectures, a universality that allowed researchers from different disciplines to embrace network theory as a common paradigm.”

Albert-László Barabási, physicist, best known for his research on network theory, and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
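
The “similar architectures” Barabási refers to are scale-free networks: a few heavily connected hubs amid many weakly connected nodes, grown by preferential attachment (“the rich get richer”). As a purely illustrative sketch, not anything from the article itself, the model that bears Barabási’s name can be generated and inspected in a few lines with the networkx library; the parameters (1,000 nodes, 2 links per new node) are arbitrary choices.

```python
# Illustrative sketch: preferential attachment produces a scale-free network.
# Parameters are arbitrary; nothing here is taken from the Scientific American article.
from collections import Counter

import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

# Tally how many nodes have each degree: expect many low-degree nodes, few large hubs.
degree_counts = Counter(d for _, d in G.degree())
for degree in sorted(degree_counts)[:8]:
    print(f"degree {degree}: {degree_counts[degree]} nodes")
print("largest hub degree:", max(d for _, d in G.degree()))
```

Printing the tally shows the heavy-tailed signature the authors describe: most nodes keep only their initial couple of links, while a handful of hubs accumulate dozens.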

“Coral reefs are sometimes called “the cities of the sea”, and part of the argument is that we need to take the metaphor seriously: the reef ecosystem is so innovative because it shares some defining characteristics with actual cities. These patterns of innovation and creativity are fractal: they reappear in recognizable form as you zoom in and out, from molecule to neuron to pixel to sidewalk. Whether you’re looking at original innovations of carbon-based life, or the explosion of new tools on the web, the same shapes keep turning up. (…) When life gets creative, it has a tendency to gravitate toward certain recurring patterns, whether those patterns are self-organizing, or whether they are deliberately crafted by human agents.”

— Steven Johnson, author of Where Good Ideas Come From, cited by Jason Silva

"Network systems can sustain life at all scales, whether intracellularly or within you and me or in ecosystems or within a city. (…) If you have a million citizens in a city or if you have 10¹⁴ cells in your body, they have to be networked together in some optimal way for that system to function, to adapt, to grow, to mitigate, and to be long term resilient."

Geoffrey West, British theoretical physicist, The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.

“Recognizing this super-connectivity and conductivity is often accompanied by blissful mindbody states and the cognitive ecstasy of multiple “aha’s!” when the patterns in the mycelium are revealed. The Googling that has become a prime noetic technology (How can we recognize a pattern and connect more and more, faster and faster?: superconnectivity and superconductivity) mirrors the increased speed of connection of thought-forms from cannabis highs on up. The whole process is driven by desire not only for these blissful states in and of themselves, but also as the cognitive resource they represent. The devices of desire are those that connect,” because, as Johnson says, “chance favors the connected mind”.

Google and the Myceliation of Consciousness, Reality Sandwich, Nov 10, 2007

Jason Silva, Venezuelan-American television personality, filmmaker, gonzo journalist and founding producer/host for Current TV, To understand is to perceive patterns, Dec 25, 2011 (Illustration: Color Blind Test)

[This note will be gradually expanded]

See also:

The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.
☞ Albert-László Barabási and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
Google and the Myceliation of Consciousness, Reality Sandwich, Nov 10, 2007
The Story of Networks, Lapidarium notes
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
☞ Manuel Lima, visualcomplexity.com, A visual exploration on mapping complex networks
Constructal theory, Wiki
☞ A. Bejan, Constructal theory of pattern formation (pdf), Duke University
Pattern recognition, Wiki
Patterns tag on Lapidarium
Patterns tag on Lapidarium notes

Nov 12th, Sat

Non-Western Philosophy: The Ladder, the Museum, and the Web


"In philosophy today, (…) though everyone officially abjures the ladder model of human cultures, it continues to determine much of our reasoning about what counts as philosophy and what does not.

It is worth pointing out that all societies that have produced anything that we are able to easily recognize as philosophy are ladder societies. We might in fact argue, if not here, that philosophy as a discrete domain of activity in a society is itself a side-effect of inequality. The overwhelming authority of the church in medieval Europe, the caste system in ancient India, the control of intellectual life by the mandarin class in ancient China (meritocratically produced by the Confucian examination system, but still elite) present themselves as three compelling examples of the sort of social nexus that has left us with significant philosophical works. (…)

Imagine, for comparison, an archaeologist who has spent a career working on Bronze Age Scandinavia, and then switches to the Mayan or the Indus Valley civilization. Would anyone think to suggest that this scientist is moving from a myopic Eurocentrism to an appreciation for minority cultures and their achievements? Of course not! The archaeologist studies human material culture on the presumption that, within certain parameters, human beings may be found to do more or less the same sorts of thing wherever they reside and whatever phenotype they may have, and moreover that wherever they are found, human cultures have always been linked in complicated, constitutive ways to other cultures, so that in fact the process of ‘globalization’ is coeval with the earliest out-of-Africa migrations. (…)

It seems to me that the progress of the study of the history of material culture might serve as a model for the study of the history of intellectual culture, which in certain times and places has been written down and distilled into what we are able to recognize as ‘philosophy’.

And here we come to the third possible model for thinking about non-Western philosophy: beyond the ladder and the museum, there is the web. This is the same web that has always linked the material cultures of at least Eurasia to one another, whatever distinctive regional flavors might also be discerned. The possibility of approaching the history of intellectual culture in the same way seems particularly auspicious right now, given the recent, very promising results of the so-called cognitive turn in the study of material culture, that is, the turn to the study of cultural artifacts as traces of distributed or exosomatic cognition, as material and intentional at once. So material-cultural history already is intellectual history of a sort, even if it is not the kind that interests philosophers: there is a great gap between stone tools and, say, medieval logic treatises, and different skills are required for studying the one than for the other. But both are material traces of human intention, and both emerge out of particular kinds of societies only. To know them fully is to know what kind of societies are able to produce them.

When we accept this final point – surely the most heterodox, from the point of view of most philosophers– we are for the first time in a position to study and to teach Indian, Chinese, European, and Arabic philosophy alongside one another in a serious and adequate way. When we accept, for example, that all of the great Axial Age civilizations, to use Karl Jaspers’s helpful label, are the product of a single suite of broad historical changes that swept the Eurasian continent, and thus that Chinese, Indian, and Greek thought-worlds are not aboriginal in any meaningful sense (neither are Cree or Huron or Inuit, for that matter, but this can be dealt with another time), then all of a sudden it becomes possible to study, say, the Buddha and his followers not as an expression of some absolutely other Eastern ‘wisdom’, but instead as a local expression of global developments, or as a node in a web. (…)

What makes it so hard to see that this might be the proper approach to the study of the history of philosophy as a global phenomenon is that philosophy is not supposed to work in the same way as folk beliefs. It is supposed to be a pursuit of culture-independent truth. Yet this article of faith has had the awkward and unintended consequence of making the available defenses of the de-Eurocentrization of philosophy –something most in the field hold to be desirable for political reasons– quaint at best and incoherent at worst. If philosophy is independent of culture, then we cannot go, so to speak, underneath the philosophy and examine the broader social dynamics that sustain it. But we need to look at these dynamics in order to see the connections between one tradition and another.

There are, so to speak, tunnels in the basement between India and Greece, but we’re afraid to go down there. And so the result is that we are not so much liberating philosophy from culture, as we are making each culture’s philosophy irreducibly and incomparably its own, just as if it were a matter of displaying folk costumes in some Soviet ethnographic museum, or in the opening ceremonies of the Olympics. This is unscientific, unrigorous, and unacceptable in any other academic discipline.”

Justin E. H. Smith, Ph.D. in Philosophy from Columbia University, teaches philosophy at Concordia University in Montreal and is currently a member of the School of Historical Studies at the Institute for Advanced Study in Princeton.

To read the full essay, see What Is ‘Non-Western’ Philosophy?, Berfrois, Nov 10, 2011.

Oct 26th, Wed

Researchers find a country’s wealth correlates with its collective knowledge

"What causes the large gap between rich and poor countries has been a long-debated question. Previous research has found some correlation between a nation’s economic prosperity and factors such as how the country is governed, the average amount of formal education each individual receives, and the country’s overall competitiveness. But now a team of researchers from Harvard and MIT has discovered that a new measure based on a country’s collective knowledge can account for the enormous income differences between the nations of the world better than any other factor. (…)

A country’s economy can be measured by a factor they call “economic complexity.” From this perspective, the more diverse and specialized jobs a country’s citizens have, the greater the country’s ability to produce complex products that few other countries can produce, making the country more prosperous.

“The total amount of knowledge embedded in a hunter-gatherer society is not very different from that which is embedded in each one of its members,” the researchers write in their book. “The secret of modern societies is not that each person holds much more productive knowledge than those in a more traditional society. The secret to modernity is that we collectively use large volumes of knowledge, while each one of us holds only a few bits of it. Society functions because its members form webs that allow them to specialize and share their knowledge with others.” (…)

Getting poorer countries to begin producing more complex products is not as simple as offering individuals a formal education in which they learn facts and figures - what the authors refer to as “explicit” knowledge. Instead, the most productive knowledge is the “tacit” kind (for example, how to run a business), which is much harder to teach. For this reason, countries tend to expand their production capabilities by moving from the products they already produce to others that require a similar set of embedded knowledge capabilities.”

— Lisa Zyga, Researchers find a country’s wealth correlates with its collective knowledge, Physorg, Oct 26, 2011 (Illustration: This network shows the product space of the US. Image credit: The Atlas of Economic Complexity)

“The essential theory … is that countries grow based on the knowledge of making things,” Mr. Hausmann said in a phone interview. “It’s not years of schooling. It’s what are the products that you know how to make. And what drives growth is the difference between how much knowledge you have and how rich you are.”

Thus, nations with extensive productive knowledge but relatively little wealth haven’t met their potential, and will eventually catch up, Mr. Hausmann said. Those countries will experience the most growth through 2020, according to the report.

That bodes well for China, which tops the list of expected growth in per-capita gross domestic product. According to the method outlined in the report, China’s growth in GDP per capita will be 4.32% through 2020. India and Thailand are second and third, respectively.

The U.S., however, is ranked 91, with expected growth in per-capita GDP at 2.01%. “The U.S. is very rich already and has a lot of productive knowledge, but it doesn’t have an excess of productive knowledge relative to its income,” Mr. Hausmann said.

The method, when applied to the years 1999-2009, proved to be much more accurate at predicting future growth than any other existing methods, including the World Economic Forum’s Global Competitiveness Index, according to the report.”

— Josh Mitchell, ‘Complexity’ Predicts Nations’ Future Growth, The Wall Street Journal, Oct 26, 2011
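
The mechanism Hausmann describes lends itself to a toy calculation: rank countries by how far their productive knowledge exceeds what their current income would predict, and read that excess as a growth signal. The sketch below only illustrates that logic; the country values and coefficients are invented, not taken from the Atlas or the report.

```python
# Toy illustration of "growth follows the gap between knowledge and income".
# All numbers and coefficients below are invented for illustration; they are
# NOT values from The Atlas of Economic Complexity.

countries = {
    # name: (complexity_index, log_income)  -- hypothetical values
    "Country A": (1.2, 8.2),   # lots of productive knowledge, modest income
    "Country B": (1.4, 10.6),  # lots of knowledge, already rich
    "Country C": (-0.5, 7.8),  # little embedded knowledge
}

def growth_signal(complexity, log_income, a=1.0, b=0.4):
    """Excess of productive knowledge over what current income implies
    (a and b are arbitrary illustrative weights)."""
    return a * complexity - b * log_income

for name, (eci, log_inc) in sorted(countries.items(),
                                   key=lambda kv: growth_signal(*kv[1]),
                                   reverse=True):
    print(f"{name}: growth signal {growth_signal(eci, log_inc):+.2f}")
```

In the report’s terms, Country A plays the role of China here: not yet rich, but holding more productive knowledge than its income reflects, so it is expected to catch up fastest.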

See also:

"The Atlas of Economic Complexity” (pdf). The 364-page report, a study led by Harvard’s Ricardo Hausmann and MIT’s Cesar A. Hidalgo, is the culmination of nearly five years of research by a team of economists at Harvard’s Center for International Development.
Economic inequality, Wiki
☞ Heiner Rindermann and James Thompson, Cognitive Capitalism: The Effect of Cognitive Ability on Wealth, as Mediated Through Scientific Achievement and Economic Freedom (pdf), Chemnitz University of Technology, University College London, 2011.

"Traditional economic theories stress the relevance of political, institutional, geographic, and historical factors for economic growth. In contrast, human-capital theories suggest that peoples’ competences, mediated by technological progress, are the deciding factor in a nation’s wealth. Using three large-scale assessments, we calculated cognitive-competence sums for the mean and for upper- and lower-level groups for 90 countries and compared the influence of each group’s intellectual ability on gross domestic product. In our cross-national analyses, we applied different statistical methods (path analyses, bootstrapping) and measures developed by different research groups to various country samples and historical periods.

Our results underscore the decisive relevance of cognitive ability—particularly of an intellectual class with high cognitive ability and accomplishments in science, technology, engineering, and math—for national wealth. Furthermore, this group’s cognitive ability predicts the quality of economic and political institutions, which further determines the economic affluence of the nation. Cognitive resources enable the evolution of capitalism and the rise of wealth.”

Sep 29th, Thu

Vannevar Bush on the new relationship between thinking man and the sum of our knowledge (1945)


Tim O’Reilly on the Birth of the global mind

“Computer scientist Danny Hillis once remarked, “Global consciousness is that thing responsible for deciding that pots containing decaffeinated coffee should be orange.” (…)

The web is a perfect example of what engineer and early computer scientist Vannevar Bush called “intelligence augmentation” by computers, in his 1945 article “As We May Think” in The Atlantic. He described a future in which human ability to follow an associative knowledge trail would be enabled by a device he called “the memex”. This would improve on human memory in the precision of its recall. Google is today’s ultimate memex. (…)

This is man-computer symbiosis at its best, where the computer program learns from the activity of human teachers, and its sensors notice and remember things the humans themselves would not. This is the future: massive amounts of data created by people, stored in cloud applications that use smart algorithms to extract meaning from it, feeding back results to those people on mobile devices, gradually giving way to applications that emulate what they have learned from the feedback loops between those people and their devices.”

Tim O’Reilly, the founder of O’Reilly Media, a supporter of the free software and open source movements, Birth of the global mind, Financial Times, Sept 23, 2011

"In this significant article he [Vannevar Bush] holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man’s physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson’s famous address of 1837 on “The American Scholar,” this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.” — The Atlantic’s editor

"Assume a linear ratio of 100 for future use. Consider film of the same thickness as paper, although thinner film will certainly be usable. Even under these conditions there would be a total factor of 10,000 between the bulk of the ordinary record on books, and its microfilm replica. The Encyclopoedia Britannica could be reduced to the volume of a matchbox. A library of a million volumes could be compressed into one end of a desk. If the human race has produced since the invention of movable type a total record, in the form of magazines, newspapers, books, tracts, advertising blurbs, correspondence, having a volume corresponding to a billion books, the whole affair, assembled and compressed, could be lugged off in a moving van. Mere compression, of course, is not enough; one needs not only to make and store a record but also be able to consult it, and this aspect of the matter comes later. Even the modern great library is not generally consulted; it is nibbled at by a few. (…)

We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

So much for the manipulation of ideas and their insertion into the record. Thus far we seem to be worse off than before—for we can enormously extend the record; yet even in its present bulk we can hardly consult it. This is a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge. The prime action of use is selection, and here we are halting indeed. There may be millions of fine thoughts, and the account of the experience on which they are based, all encased within stone walls of acceptable architectural form; but if the scholar can get at only one a week by diligent search, his syntheses are not likely to keep up with the current scene. (…)

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, “memex" will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

In one end is the stored material. The matter of bulk is well taken care of by improved microfilm. Only a small part of the interior of the memex is devoted to storage, the rest to mechanism. Yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so he can be profligate and enter material freely.

Most of the memex contents are purchased on microfilm ready for insertion. Books of all sorts, pictures, current periodicals, newspapers, are thus obtained and dropped into place. Business correspondence takes the same path. And there is provision for direct entry. On the top of the memex is a transparent platen. On this are placed longhand notes, photographs, memoranda, all sorts of things. When one is in place, the depression of a lever causes it to be photographed onto the next blank space in a section of the memex film, dry photography being employed.

There is, of course, provision for consultation of the record by the usual scheme of indexing. If the user wishes to consult a certain book, he taps its code on the keyboard, and the title page of the book promptly appears before him, projected onto one of his viewing positions. Frequently-used codes are mnemonic, so that he seldom consults his code book; but when he does, a single tap of a key projects it for his use. Moreover, he has supplemental levers. On deflecting one of these levers to the right he runs through the book before him, each page in turn being projected at a speed which just allows a recognizing glance at each. If he deflects it further to the right, he steps through the book 10 pages at a time; still further at 100 pages at a time. Deflection to the left gives him the same control backwards.

A special button transfers him immediately to the first page of the index. Any given book of his library can thus be called up and consulted with far greater facility than if it were taken from a shelf. As he has several projection positions, he can leave one item in position while he calls up another. He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him.

All this is conventional, except for the projection forward of present-day mechanisms and gadgetry. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the  memex. The process of tying two items together is the important thing. (…)

The owner of the memex, let us say, is interested in the origin and properties of the bow and arrow. Specifically he is studying why the short Turkish bow was apparently superior to the English long bow in the skirmishes of the Crusades. He has dozens of possibly pertinent books and articles in his memex. First he runs through an encyclopedia, finds an interesting but sketchy article, leaves it projected. Next, in a history, he finds another pertinent item, and ties the two together. Thus he goes, building a trail of many items. Occasionally he inserts a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. When it becomes evident that the elastic properties of available materials had a great deal to do with the bow, he branches off on a side trail which takes him through textbooks on elasticity and tables of physical constants. He inserts a page of longhand analysis of his own. Thus he builds a trail of his interest through the maze of materials available to him.

And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client’s interest. The physician, puzzled by a patient’s reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds, and side trails to their physical and chemical behavior.

The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world’s record, but for his disciples the entire scaffolding by which they were erected.

Thus science may implement the ways in which man produces, stores, and consults the record of the race. It might be striking to outline the instrumentalities of the future more spectacularly, rather than to stick closely to methods and elements now known and undergoing rapid development, as has been done here. Technical difficulties of all sorts have been ignored, certainly, but also ignored are means as yet unknown which may come any day to accelerate technical progress as violently as did the advent of the thermionic tube. In order that the picture may not be too commonplace, by reason of sticking to present-day patterns, it may be well to mention one such possibility, not to prophesy but merely to suggest, for prophecy based on extension of the known has substance, while prophecy founded on the unknown is only a doubly involved guess. (…)

In the outside world, all forms of intelligence whether of sound or sight, have been reduced to the form of varying currents in an electric circuit in order that they may be transmitted. Inside the human frame exactly the same sort of process occurs. Must we always transform to mechanical movements in order to proceed from one electrical phenomenon to another? It is a suggestive thought, but it hardly warrants prediction without losing touch with reality and immediateness.

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

The applications of science have built man a well-supplied house, and are teaching him to live healthily therein. They have enabled him to throw masses of people against one another with cruel weapons. They may yet allow him truly to encompass the great record and to grow in the wisdom of race experience. He may perish in conflict before he learns to wield that record for his true good. Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome.”

Vannevar Bush (1890-1974), American engineer and science administrator known for his work on analog computing, his political role in the development of the atomic bomb as a primary organizer of the Manhattan Project, the founding of Raytheon, and the idea of the memex, an adjustable microfilm viewer which is somewhat analogous to the structure of the World Wide Web, As We May Think, The Atlantic, July 1945 (Illustration: James Ferguson, FT)
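
Bush’s associative indexing translates almost directly into a modern linked data structure. The sketch below is only an analogy in today’s terms (the memex was a mechanical microfilm device, and every name here is invented), re-creating his bow-and-arrow example of a main trail with side trails:

```python
# A rough modern analogy (not Bush's design): a memex "trail" as a named
# sequence of items, where any item can branch into a side trail and a whole
# trail can be copied and handed to a friend's memex.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Item:
    title: str                            # an article, a book page, a longhand note
    side_trail: Optional["Trail"] = None  # optional branch off the main trail


@dataclass
class Trail:
    name: str
    items: List[Item] = field(default_factory=list)

    def tie(self, title: str) -> Item:
        """Tie the next item onto the trail (Bush's joining of two items)."""
        item = Item(title)
        self.items.append(item)
        return item


# Bush's example, reduced to code:
bows = Trail("Why the Turkish bow outperformed the English long bow")
bows.tie("Encyclopedia article on the bow and arrow")
history = bows.tie("History of the Crusades: pertinent passage")
history.side_trail = Trail("Elastic properties of materials")
history.side_trail.tie("Textbook chapter on elasticity")
history.side_trail.tie("Table of physical constants")
bows.tie("A page of my own longhand analysis")
```

Years later, the whole `bows` object, side trail and all, is what Bush imagines being photographed out and dropped into a friend’s memex.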

See also:

Video archive of Oct 12-13 1995 MIT/Brown Symposium on the 50th Anniversary of As We May Think
"As We May Think" - A Celebration of Vannevar Bush’s 1945 Vision, at Brown University
Computing Pages by Francesc Hervada-Sala - “As We May Think” by Vannevar Bush
Timeline of hypertext technology (Wiki)
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks

Sep 8th, Thu

Google and the Myceliation of Consciousness

"Is this the largest organism in the world? This 2,400-acre (9.7 km2) site in eastern Oregon had a contiguous growth of mycelium before logging roads cut through it. Estimated at 1,665 football fields in size and 2,200 years old, this one fungus has killed the forest above it several times over, and in so doing has built deeper soil layers that allow the growth of ever-larger stands of trees. Mushroom-forming forest fungi are unique in that their mycelial mats can achieve such massive proportions.”

Paul Stamets, American mycologist, author, Mycelium Running

"What Stamets calls the mycelial archetype [Mycelial nets are designed the same as brain cells: centers with branches reaching out, whole worlds. 96% of dark matter threads]. He compares the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe. All share this densely intertwingled filamental structure. Stamets says in Mycelium Running,

“I believe that the mycelium operates at a level of complexity that exceeds the computational powers of our most advanced supercomputers. I see the mycelium as the Earth’s natural Internet, a consciousness with which we might be able to communicate.” (…)

This super-connectivity and conductivity is often accompanied by blissful mindbody states and the cognitive ecstasy of multiple “aha’s!” when the patterns in the mycelium are revealed. The Googling that has become a prime noetic technology (How can we recognize a pattern and connect more and more, faster and faster?: superconnectivity and superconductivity) mirrors the increased speed of connection of thought-forms from cannabis highs on up. The whole process is driven by desire not only for these blissful states in and of themselves, but also as the cognitive resource they represent. (…) The devices of desire are those that connect. The Crackberry is just the latest super-connectivity and conductivity device-of-desire.

The psilocybin mushroom embeds the form of its own life-cycle into consciousness when consciousness is altered by the mushroom, and this template, brought home to Google Earth, made into tools of connectivity, potentiates the mycelium of knowledge, connecting all cultural production. The traditional repositories—the books and print and CD and DVD materials—swarm online, along with intimate glimpses of Everyblogger’s Life in multimediated detail.

Here on Google watch, I’m tracking the form of this whole wildly interconnecting activity that this desire to connect inscribes, the millions of simultaneous individual expressions of desire: searches, adclicks, where am I?, what’s near me?, who’s connected to whom? The desire extends the filaments, and energizes the constant linking and unlinking of the vast signaling system that lights up the mycelium. Periodic visits to the psychedelic sphere reveal the progress of this mycelial growth, as well as its back-history, future, origins, inhabitants, and purpose. Google is growing the cultural mycelial mat, advancing this process exponentially. Google is the first psychedelically informed super-power to shape the noosphere and NASDAQ. Google is part of virtually everybody’s online day. The implications are staggering. (…)

In the domain of consciousness, super-connectivity and super-conductivity also reign. Superconductivity: speed is of the essence. Speed of conductivity of meaning. How fast can consciousness make meaning out of the flux of perceptions? (…)

When Google breaks through the natural language barrier and catches a glimpse, at least, of what it’s like to operate cognition entirely outside the veil of natural language, they will truly be Masters of Meaning. (…) Meaning manifests independently of language, though often finds itself entombed therein. But from this bootstrap move outside language, new insights arise regarding the structures and functions of natural language from a perspective that handles cognition with different tools, perceptions, sensory modalities—and produces new forms of language with new feature sets. (…)

This is the download Terence McKenna kept cycling through, and represents the key noetic technology for the stabilization of the transformation of consciousness in a sharable conceptual architecture. In Terence’s words,

It’s almost as though the project of communication becomes high-speed sculpture in a conceptual dimension made of light and intentionality. This would remain a kind of esoteric performance on the part of shamans at the height of intoxication if it were not for the fact that electronics and electronic cultural media, computers, make it possible for us to actually create records of these higher linguistic modalities.”

RoseRose, “paleoanthropologist from a distant timeframe”, in deep cover on Google Earth as a video performance artist, Google and the Myceliation of Consciousness, Reality Sandwich, Nov 10, 2007 (Illustration source)

See also:

☞ Paul Stamets, Six Ways Mushrooms Can Save the World, TED.com, 2008 (video)

Sep 2nd, Fri

Kevin Kelly on Why the Impossible Happens More Often

(Illustration: Noosphere by Tatiana Plakhova)

"Everyone "knew" that people don’t work for free, and if they did, they could not make something useful without a boss. But today entire sections of our economy run on software instruments created by volunteers working without pay or bosses. Everyone knew humans were innately private beings, yet the impossibility of total open round-the-clock sharing still occurred. Everyone knew that humans are basically lazy, and they would rather watch than create, and they would never get off their sofas to create their own TV. It would be impossible that millions of amateurs would produce billions of hours of video, or that anyone would watch any of it. Like Wikipedia, or Linux, YouTube is theoretically impossible. But here this impossibility is real in practice. (…)

As far as I can tell the impossible things that happen now are in every case manifestations of a new, bigger level of organization. They are the result of large-scale collaboration, or immense collections of information, or global structures, or gigantic real-time social interactions. Just as a tissue is a new, bigger level of organization for a bunch of individual cells, these new social structures are a new bigger level for individual humans. And in both cases the new level breeds emergence. New behaviors emerge from the new level that were impossible at the lower level. Tissue can do things that cells can’t. The collectivist organizations of wikipedia, Linux, the web can do things that industrialized humans could not. (…)

The cooperation and coordination bred by irrigation and agriculture produced yet more impossible behaviors of anticipation and preparation, and sensitivity to the future. Human society unleashed all kinds of previously impossible human behaviors into the biosphere.

The technium is accelerating the creation of new impossibilities by continuing to invent new social organizations. (…)

When we are woven together into a global real-time society, the impossibilities will really start to erupt. It is not necessary that we invent some kind of autonomous global consciousness. It is only necessary that we connect everyone to everyone else. Hundreds of miracles that seem impossible today will be possible with this shared human awareness. (…)

In large groups the laws of statistics take over and our brains have not evolved to do statistics. The amount of data tracked is inhuman; the magnitudes of giga, peta, and exa don’t really mean anything to us; it’s the vocabulary of machines. Collectively we behave differently than individuals. Much more importantly, as individuals we behave differently in collectives. (…)

We are swept up in a tectonic shift toward large, fast, social organizations connecting us in novel ways. There may be a million different ways to connect a billion people, and each way will reveal something new about us. Something hidden previously. Others have named this emergence the Noosphere, or MetaMan, or Hive Mind. We don’t have a good name for it yet. (…)

I’ve used the example of the bee before. One could exhaustively study a honey bee for centuries and never see in the lone individual any of the behavior of a bee hive. It is just not there, and can not emerge until there are a mass of bees. A single bee lives 6 weeks, so a memory of several years is impossible, but that’s how long a hive of individual bees can remember. Humanity is migrating towards its hive mind. Most of what “everybody knows” about us is based on the human individual. Collectively, connected humans will be capable of things we cannot imagine right now. These future phenomena will rightly seem impossible. What’s coming is so unimaginable that the impossibility of Wikipedia will recede into outright obviousness.

Connected, in real time, in multiple dimensions, at an increasingly global scale, in matters large and small, with our permission, we will operate at a new level, and we won’t cease surprising ourselves with impossible achievements.”

Kevin Kelly, writer, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, Why the Impossible Happens More Often, The Technium, 26 August 2011

Jun 24th, Fri

Collective intelligence and the “genetic” structure of groups

“The average intelligence of the people in the group and the maximum intelligence of the people in the group doesn’t predict group intelligence.” (…)

“Just getting a lot of smart people in a group does not necessarily make a smart group.” Furthermore, the researchers found, group intelligence is also only moderately correlated with qualities you’d think would be pretty crucial when it comes to group dynamics — things like group cohesion, satisfaction, “psychological safety,” and motivation. It’s not just that a happy group or a close-knit group or an enthusiastic group doesn’t necessarily equal a smart group; it’s also that those psychological elements have only some effect on groups’ ability to solve problems together.

Group intelligence is correlated, Malone and his colleagues found, with the average social sensitivity — the openness, and receptiveness, to others — of a group’s constituents. The emotional intelligence of group members, in other words, serves the cognitive intelligence of the group overall. And this means that — wait for it — groups with more women tend to be smarter than groups with more men. (As Malone put it: “More females, more intelligence.”) That’s largely mediated by the researchers’ social sensitivity findings: Women tend to be more socially sensitive than men — per Science — which means that, overall, more women = more emotional intelligence = more group intelligence. (…)

Just as understanding humans’ genetic code can lead us to a molecular understanding of ourselves as individuals, mapping the genome of groups may help us understand ourselves as we behave within a broader collective. And that knowledge, just as with the human genome, might help us gain an ability to manipulate group structures. (…)

If you understand what makes groups smart, you can adjust their factors to make them even smarter.”

Anita Woolley and Thomas Malone, Defend Your Research: What Makes a Team Smarter? More Women, Harvard Business Review, June 2011

Jun 16th, Thu

The Physics of Intelligence


The laws of physics may well prevent the human brain from evolving into an ever more powerful thinking machine.

  • Human intelligence may be close to its evolutionary limit. Various lines of research suggest that most of the tweaks that could make us smarter would hit limits set by the laws of physics.
  • Brain size, for instance, helps up to a point but carries diminishing returns: brains become energy-hungry and slow. Better “wiring” across the brain also would consume energy and take up a disproportionate amount of space.
  • Making wires thinner would hit thermodynamic limitations similar to those that affect transistors in computer chips: communication would get noisy.
  • Humans, however, might still achieve higher intelligence collectively. And technology, from writing to the Internet, enables us to expand our mind outside the confines of our body.

"What few people realize is that the laws of physics place tough constraints on our mental faculties as well. Anthropologists have speculated about anatomic roadblocks to brain expansion—for instance, whether a larger brain could fit through the birth canal of a bipedal human. If we assume, though, that evolution can solve the birth canal problem, then we are led to the cusp of some even more profound questions.

One might think, for example, that evolutionary processes could increase the number of neurons in our brain or boost the rate at which those neurons exchange information and that such changes would make us smarter. But several recent trends of investigation, if taken together and followed to their logical conclusion, seem to suggest that such tweaks would soon run into physical limits. Ultimately those limits trace back to the very nature of neurons and the statistically noisy chemical exchanges by which they communicate. “Information, noise and energy are inextricably linked,” says Simon Laughlin, a theoretical neuroscientist at the University of Cambridge. “That connection exists at the thermodynamic level.”

Do the laws of thermodynamics, then, impose a limit on neuron-based intelligence, one that applies universally, whether in birds, primates, porpoises or praying mantises? This question apparently has never been asked in such broad terms, but the scientists interviewed for this article generally agree that it is a question worth contemplating. “It’s a very interesting point,” says Vijay Balasubramanian, a physicist who studies neural coding of information at the University of Pennsylvania. “I’ve never even seen this point discussed in science fiction.”

Intelligence is of course a loaded word: it is hard to measure and even to define. Still, it seems fair to say that by most metrics, humans are the most intelligent animals on earth. But as our brain has evolved, has it approached a hard limit to its ability to process information? Could there be some physical limit to the evolution of neuron-based intelligence—and not just for humans but for all of life as we know it? (…)

Staying in Touch

Much of the energetic burden of brain size comes from the organ’s communication networks: in the human cortex, communications account for 80 percent of energy consumption. But it appears that as size increases, neuronal connectivity also becomes more challenging for subtler, structural reasons. (…)

A typical neuron has an elongated tail called the axon. At its end, the axon branches out, with the tips of the branches forming synapses, or contact points, with other cells. Axons, like telegraph wires, may connect different parts of the brain or may bundle up into nerves that extend from the central nervous system to the various parts of the body.

In their pioneering efforts, biologists measured the diameter of axons under microscopes and counted the size and density of nerve cells and the number of synapses per cell. They surveyed hundreds, sometimes thousands, of cells per brain in dozens of species. Eager to refine their mathematical curves by extending them to ever larger beasts, they even found ways to extract intact brains from whale carcasses. The five-hour process, meticulously described in the 1880s by biologist Gustav Adolf Guldberg, involved the use of a two-man lumberjack saw, an ax, a chisel and plenty of strength to open the top of the skull like a can of beans.

These studies revealed that as brains expand in size from species to species, several subtle but probably unsustainable changes happen. First, the average size of nerve cells increases. This phenomenon allows the neurons to connect to more and more of their compatriots as the overall number of neurons in the brain increases. But larger cells pack into the cerebral cortex less densely, so the distance between cells increases, as does the length of axons required to connect them. And because longer axons mean longer times for signals to travel between cells, these projections need to become thicker to maintain speed (thicker axons carry signals faster).

Researchers have also found that as brains get bigger from species to species, they are divided into a larger and larger number of distinct areas. You can see those areas if you stain brain tissue and view it under a microscope: patches of the cortex turn different colors. These areas often correspond with specialized functions, say, speech comprehension or face recognition. And as brains get larger, the specialization unfolds in another dimension: equivalent areas in the left and right hemispheres take on separate functions—for example, spatial versus verbal reasoning.

For decades this dividing of the brain into more work cubicles was viewed as a hallmark of intelligence. But it may also reflect a more mundane truth, says Mark Changizi, a theoretical neurobiologist at 2AI Labs in Boise, Idaho: specialization compensates for the connectivity problem that arises as brains get bigger. As you go from a mouse brain to a cow brain with 100 times as many neurons, it is impossible for neurons to expand quickly enough to stay just as well connected. Brains solve this problem by segregating like-functioned neurons into highly interconnected modules, with far fewer long-distance connections between modules. The specialization between right and left hemispheres solves a similar problem; it reduces the amount of information that must flow between the hemispheres, which minimizes the number of long, interhemispheric axons that the brain needs to maintain. “All of these seemingly complex things about bigger brains are just the backbends that the brain has to do to satisfy the connectivity problem” as it gets larger, Changizi argues. “It doesn’t tell us that the brain is smarter.”

Jan Karbowski, a computational neuroscientist at the Polish Academy of Sciences in Warsaw, agrees. “Somehow brains have to optimize several parameters simultaneously, and there must be trade-offs,” he says. “If you want to improve one thing, you screw up something else.” What happens, for example, if you expand the corpus callosum (the bundle of axons connecting right and left hemispheres) quickly enough to maintain constant connectivity as brains expand? And what if you thicken those axons, so the transit delay for signals traveling between hemispheres does not increase as brains expand? The results would not be pretty. The corpus callosum would expand—and push the hemispheres apart—so quickly that any performance improvements would be neutralized.

These trade-offs have been thrown into stark relief by experiments showing the relation between axon width and conduction speed. At the end of the day, Karbowski says, neurons do get larger as brain size increases, but not quite quickly enough to stay equally well connected. And axons do get thicker as brains expand, but not quickly enough to make up for the longer conduction delays.

Keeping axons from thickening too quickly saves not only space but energy as well, Balasubramanian says. Doubling the width of an axon doubles energy expenditure, while increasing the velocity of pulses by just 40 percent or so. Even with all of this corner cutting, the volume of white matter (the axons) still grows more quickly than the volume of gray matter (the main body of neurons containing the cell nucleus) as brains increase in size. To put it another way, as brains get bigger, more of their volume is devoted to wiring rather than to the parts of individual cells that do the actual computing, which again suggests that scaling size up is ultimately unsustainable.
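
The "double the energy for 40 percent more speed" arithmetic follows from a square-root rule; the sketch below simply restates it, assuming (as is roughly the case for unmyelinated axons) that conduction velocity grows with the square root of diameter while energy cost grows in direct proportion to it.

```python
import math

# A minimal restatement of the trade-off above, assuming velocity ~ sqrt(diameter)
# and energy ~ diameter (both assumptions, roughly true for unmyelinated axons).

def scale_axon(width_factor: float) -> tuple[float, float]:
    """Return (energy factor, velocity factor) when axon width is scaled by width_factor."""
    return width_factor, math.sqrt(width_factor)

energy, velocity = scale_axon(2.0)
print(f"Doubling width: {energy:.1f}x the energy for only {100 * (velocity - 1):.0f}% more speed")
# -> Doubling width: 2.0x the energy for only 41% more speed
```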

The Primacy of Primates

It is easy, with this dire state of affairs, to see why a cow fails to squeeze any more smarts out of its grapefruit-size brain than a mouse does from its blueberry-size brain. But evolution has also achieved impressive workarounds at the level of the brain’s building blocks. When Jon H. Kaas, a neuroscientist at Vanderbilt University, and his colleagues compared the morphology of brain cells across a spectrum of primates in 2007, they stumbled onto a game changer—one that has probably given humans an edge. (…)

Humans pack 100 billion neurons into 1.4 kilograms of brain, but a rodent that had followed its usual neuron-size scaling law to reach that number of neurons would now have to drag around a brain weighing 45 kilograms. And metabolically speaking, all that brain matter would eat the varmint out of house and home. “That may be one of the factors in why the large rodents don’t seem to be [smarter] at all than the small rodents,” Kaas says.
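
The 45-kilogram figure is easy to reproduce with a back-of-the-envelope power law. The constants in the sketch below are assumptions for illustration, loosely based on published rodent scaling estimates rather than taken from Kaas's paper: a mouse with about 70 million neurons in a 0.4-gram brain, and a rodent brain-mass exponent of roughly 1.6.

```python
# Back-of-the-envelope check of the "45 kg rodent brain" claim. All three
# constants below are assumptions for illustration, not figures from the article.

MOUSE_NEURONS = 7.1e7     # assumed neuron count for a mouse brain
MOUSE_BRAIN_G = 0.4       # assumed mouse brain mass in grams
RODENT_EXPONENT = 1.6     # assumed rodent brain-mass scaling exponent

def rodent_brain_mass_kg(neurons: float) -> float:
    """Brain mass (kg) for a hypothetical rodent with the given neuron count."""
    return MOUSE_BRAIN_G * (neurons / MOUSE_NEURONS) ** RODENT_EXPONENT / 1000.0

print(f"{rodent_brain_mass_kg(1e11):.0f} kg")
# -> ~44 kg with these assumed constants: the same ballpark as the article's ~45 kg
```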

Having smaller, more densely packed neurons does seem to have a real impact on intelligence. In 2005 neurobiologists Gerhard Roth and Ursula Dicke, both at the University of Bremen in Germany, reviewed several traits that predict intelligence across species (as measured, roughly, by behavioral complexity) even more effectively than the encephalization quotient does. “The only tight correlation with intelligence,” Roth says, “is in the number of neurons in the cortex, plus the speed of neuronal activity,” which decreases with the distance between neurons and increases with the degree of myelination of axons. Myelin is fatty insulation that lets axons transmit signals more quickly.

If Roth is right, then primates’ small neurons have a double effect: first, they allow a greater increase in cortical cell number as brains enlarge; and second, they allow faster communication, because the cells pack more closely. Elephants and whales are reasonably smart, but their larger neurons and bigger brains lead to inefficiencies. “The packing density of neurons is much lower,” Roth says, “which means that the distance between neurons is larger and the velocity of nerve impulses is much lower.”

In fact, neuroscientists have recently seen a similar pattern in variations within humans: people with the quickest lines of communication between their brain areas also seem to be the brightest. One study, led in 2009 by Martijn P. van den Heuvel of the University Medical Center Utrecht in the Netherlands, used functional magnetic resonance imaging to measure how directly different brain areas talk to one another—that is, whether they talk via a large or a small number of intermediary areas. Van den Heuvel found that shorter paths between brain areas correlated with higher IQ. Edward Bullmore, an imaging neuroscientist at the University of Cambridge, and his collaborators obtained similar results the same year using a different approach. They compared working memory (the ability to hold several numbers in one’s memory at once) among 29 healthy people. They then used magnetoencephalographic recordings from their subjects’ scalp to estimate how quickly communication flowed between brain areas. People with the most direct communication and the fastest neural chatter had the best working memory.

It is a momentous insight. We know that as brains get larger, they save space and energy by limiting the number of direct connections between regions. The large human brain has relatively few of these long-distance connections. But Bullmore and van den Heuvel showed that these rare, nonstop connections have a disproportionate influence on smarts: brains that scrimp on resources by cutting just a few of them do noticeably worse. “You pay a price for intelligence,” Bullmore concludes, “and the price is that you can’t simply minimize wiring.”
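
The "how directly brain areas talk to one another" measure is essentially the average shortest path length of a network. The toy sketch below uses a small-world graph as a stand-in for a brain-area network; the parameters are arbitrary and this is not the analysis pipeline of either study, just an illustration of the quantity involved.

```python
# Toy illustration of the path-length measure described above, using a
# small-world graph as a stand-in for a network of brain areas. Parameters
# are arbitrary; this is not the analysis pipeline of either study.

import networkx as nx

few_shortcuts  = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.05, seed=1)
more_shortcuts = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.30, seed=1)

for label, graph in [("few long-range links ", few_shortcuts),
                     ("more long-range links", more_shortcuts)]:
    path_len = nx.average_shortest_path_length(graph)
    print(f"{label}: average shortest path = {path_len:.2f}")
# A handful of long-range connections shortens the average path between areas,
# the property that tracked IQ and working memory in the studies above.
```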

Intelligence Design

If communication between neurons, and between brain areas, is really a major bottleneck that limits intelligence, then evolving neurons that are even smaller (and closer together, with faster communication) should yield smarter brains. Similarly, brains might become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses, called ion channels, are inherently unreliable.

Ion channels are tiny valves that open and close through changes in their molecular folding. When they open, they allow ions of sodium, potassium or calcium to flow across cell membranes, producing the electrical signals by which neurons communicate. But being so minuscule, ion channels can get flipped open or closed by mere thermal vibrations. A simple biology experiment lays the defect bare. Isolate a single ion channel on the surface of a nerve cell using a microscopic glass tube, sort of like slipping a glass cup over a single ant on a sidewalk. When you adjust the voltage on the ion channel—a maneuver that causes it to open or close—the ion channel does not flip on and off reliably like your kitchen light does. Instead it flutters on and off randomly. Sometimes it does not open at all; other times it opens when it should not. By changing the voltage, all you do is change the likelihood that it opens.

It sounds like a horrible evolutionary design flaw—but in fact, it is a compromise. “If you make the spring on the channel too loose, then the noise keeps on switching it,” Laughlin says—as happens in the biology experiment described earlier. “If you make the spring on the channel stronger, then you get less noise,” he says, “but now it’s more work to switch it,” which forces neurons to spend more energy to control the ion channel. In other words, neurons save energy by using hair-trigger ion channels, but as a side effect the channels can flip open or close accidentally. The trade-off means that ion channels are reliable only if you use large numbers of them to “vote” on whether or not a neuron will generate an impulse. But voting becomes problematic as neurons get smaller. “When you reduce the size of neurons, you reduce the number of channels that are available to carry the signal,” Laughlin says. “And that increases the noise.”
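
The voting argument can be made concrete with a small simulation: if each channel flips open at random with some small probability, the relative spread in the "vote count" shrinks roughly as one over the square root of the number of channels. The opening probability and trial counts below are invented for illustration.

```python
import random

# Illustrative simulation of the channel "voting" argument: relative noise in
# the number of open channels falls roughly as 1/sqrt(N). Probabilities invented.

def relative_noise(n_channels: int, p_open: float = 0.1, trials: int = 2000) -> float:
    """Coefficient of variation of the number of channels that happen to open."""
    counts = [sum(random.random() < p_open for _ in range(n_channels))
              for _ in range(trials)]
    mean = sum(counts) / trials
    variance = sum((c - mean) ** 2 for c in counts) / trials
    return variance ** 0.5 / mean

random.seed(0)
for n in (10, 100, 1000):
    print(f"{n:>4} channels -> relative noise {relative_noise(n):.2f}")
# Fewer channels per axon means proportionally bigger random swings,
# which is why very thin axons start firing spurious spikes.
```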

In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire. The brain’s smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 per second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes.

This fundamental compromise between information, energy and noise is not unique to biology. It applies to everything from optical-fiber communications to ham radios and computer chips. Transistors act as gatekeepers of electrical signals, just like ion channels do. For five decades engineers have shrunk transistors steadily, cramming more and more onto chips to produce ever faster computers. Transistors in the latest chips are just 22 nanometers across. At those sizes, it becomes very challenging to “dope” silicon uniformly (doping is the addition of small quantities of other elements to adjust a semiconductor’s properties). By the time they reach about 10 nanometers, transistors will be so small that the random presence or absence of a single atom of boron will cause them to behave unpredictably.

Engineers might circumvent the limitations of current transistors by going back to the drawing board and redesigning chips to use entirely new technologies. But evolution cannot start from scratch: it has to work within the scheme and with the parts that have existed for half a billion years, explains Heinrich Reichert, a developmental neurobiologist at the University of Basel in Switzerland—like building a battleship with modified airplane parts.

Moreover, there is another reason to doubt that a major evolutionary leap could lead to smarter brains. Biology may have had a wide range of options when neurons first evolved, but 600 million years later a peculiar thing has happened. The brains of the honeybee, the octopus, the crow and intelligent mammals, Roth points out, look nothing alike at first glance. But if you look at the circuits that underlie tasks such as vision, smell, navigation and episodic memory of event sequences, “very astonishingly they all have absolutely the same basic arrangement.” Such evolutionary convergence usually suggests that a certain anatomical or physiological solution has reached maturity so that there may be little room left for improvement.

Perhaps, then, life has arrived at an optimal neural blueprint. That blueprint is wired up through a step-by-step choreography in which cells in the growing embryo interact through signaling molecules and physical nudging, and it is evolutionarily entrenched.

Bees Do It

So have humans reached the physical limits of how complex our brain can be, given the building blocks that are available to us? Laughlin doubts that there is any hard limit on brain function the way there is one on the speed of light. “It’s more likely you just have a law of diminishing returns,” he says. “It becomes less and less worthwhile the more you invest in it.” Our brain can pack in only so many neurons; our neurons can establish only so many connections among themselves; and those connections can carry only so many electrical impulses per second. Moreover, if our body and brain got much bigger, there would be costs in terms of energy consumption, dissipation of heat and the sheer time it takes for neural impulses to travel from one part of the brain to another.

The human mind, however, may have better ways of expanding without the need for further biological evolution. After all, honeybees and other social insects do it: acting in concert with their hive sisters, they form a collective entity that is smarter than the sum of its parts. Through social interaction we, too, have learned to pool our intelligence with others.

And then there is technology. For millennia written language has enabled us to store information outside our body, beyond the capacity of our brain to memorize. One could argue that the Internet is the ultimate consequence of this trend toward outward expansion of intelligence beyond our body. In a sense, it could be true, as some say, that the Internet makes you stupid: collective human intelligence—culture and computers—may have reduced the impetus for evolving greater individual smarts.

Douglas Fox, a freelance writer living in San Francisco and a frequent contributor to New Scientist, The Limits of Intelligence, Scientific American, July 2011.

See also:
Quantum Approaches to Consciousness, Stanford Encyclopedia of Philosophy
Dunbar’s Number: Why We Can’t Have More Than 150 Friends

Sep
18th
Sat
permalink

Two is the magic number: Joshua Wolf Shenk on the role of collaboration in creativity

                           

Paul McCartney & John Lennon

"This pervasive belief in individualism can be traced to the idea most forcefully articulated by René Descartes. “Each self inhabits its own subjective realm,” he declared, “and its mental life has an integrity prior to and independent of its interaction with other people.” Though Descartes had his challengers, his idea became a core assumption of the Enlightenment, as did Thomas Hobbes' assertion that the natural state of man was “solitary” (as well as “poor, nasty, brutish, and short.”) (…)

Beyond illness, the fundamentals of healthy life took root from the idea of the atomized person. Jean Piaget, who created modern development theory—the system of thought about how children’s minds work and grow—emphasized relationships to objects, not people. Even the most basic relational tool—the way we speak—was shaped by individualism, following Noam Chomsky's notion of language as an expression of inborn, internal capacities. (…)

The triumphant Western position in the Cold War established individual liberty and individual choice as the root unit of society—in opposition to the Marxist emphasis on collective achievement.

The ultimate triumph of the idea of individualism is that it’s not really seen as an idea at all. It has seeped into our mental groundwater. Basic descriptions of inter-relatedness—enabling, co-dependency—are headlines for dysfunction. The Oxford American Dictionary defines individualism as, first, “the habit or principle of being independent and self-reliant” and, second, as “a social theory favoring freedom of action for individuals over collective or state control.” This lopsided contrast of “freedom” vs. “state control” is telling. Even our primary reference on meaning, the dictionary, tilts in favor of the self.

But a new body of research has begun to show how growth and achievement emerge from relationships. (…) But a burgeoning field has shown that, from the very first days of life, relationships shape our experience, our character, even our biology. This research, which has flowered in the last ten years, took root in the 1970s. One reason, explains the psychologist and philosopher Alison Gopnik, was the advent of the simple video camera. It allowed researchers to easily capture and analyze the exchanges between babies and their caregivers. (…)

Emotions, Susan Vaughan asserts, are “peopled” from the start. This dynamic turns out to play a critical role in the development of neural circuits that shape not only interaction, but autonomy too. In other words, the way we experience ourselves is inextricably linked to the way we experience others—so much so that, on close view, it’s hard to draw a concrete distinction between the other and the self. (This in turn raises questions about what the “self” actually is.)

The sensation of “mirror neurons” helped further dissolve the distinction. About 10 years ago, a team of Italian researchers showed that certain neurons that fire during actions by macaque monkeys—when they pick up a peanut, for example—also fire when they watch someone else pick up the peanut. It’s probably overblown to say—as many have—that this phenomenon can explain everything from empathy and altruism to the evolution of human culture. But the point is that our brains register individual and social experience in tandem. (…)

What we consider the “self” is in its essence social. “It sounds like an oxymoron,”  John T. Cacioppo says. “But it’s not. In fact, the idea that the center of our psychological universe, and even our physiological experience, is ‘me’—this just fundamentally misrepresents us as a species.”

It’s not an accident that this new work is ascendant at a time when the Western world no longer identifies itself in opposition to collectivism, and where the Internet and social media have offered an obvious metaphor for webs of connections. “We’re ready for a Copernican revolution in psychology,” Cacioppo says. If it comes, the era of the self will yield to something that may be much more interesting. (…)

The Myth of the Lone Genius

If relationships shape us so fundamentally, how—in the study of creativity—could they also be so obscure? Why are we preoccupied with the lone genius, with great men (and, more now than in the past, great women)? Evolutionary psychologists might point to how our ancestors focused on the alpha male of a pack or the headman of a tribe. But there are contemporary explanations.

For one thing, male-female acts have often kept one partner behind the curtain. The eminent psychoanalyst and social theorist Erik Erikson acknowledged that his wife of 66 years, Joan Erikson, worked with him so closely that it was hard to tell where her work left off and his began. But he drew the salary; his name went on the cover of Young Man Luther. He is among history’s most famous social scientists (…)

The custom of hidden partners is often industry standard: Tenure committees insist on judging individual work, even though collaborations are core to academic culture. CEOs have become like synecdoches for their companies, though their effectiveness depends on partners and teams. (Could Steve Jobs have reinvented Apple without his design guru Jonathan Ive?) (…)

Even when a creative partnership is inescapable, principals may resist acknowledging its influence. Maxwell Perkins, the great editor who discovered and shaped the works of F. Scott Fitzgerald and Ernest Hemingway, also made magic with Thomas Wolfe. Their collaboration made Wolfe’s sprawling manuscripts into the epic novels Look Homeward, Angel and Of Time and the River. (…)

The other reason the lone genius myth persists is that “collaboration” gets defined so narrowly, as though the only relationships that matter are between peers of roughly equal power. In fact, it is often the most independent virtuosos who need relationships the most. Take golf, for example. By PGA tour rules, professional golfers play the links without coaches or managers. So the role of psychologist, strategist, and counselor falls to the caddie. (…)

But ignorance alone can’t be entirely responsible for the myth of the individual creator. At times, the myth is so pervasive, and so wrong, that it points to a basic problem in our thinking. Consider Emily Dickinson, who in the popular mind personifies the lone genius, composing poetry in the stillness of her room, clad in monastic white, only occasionally lowering her basket from her bedroom window.

But Dickinson was actually deeply engaged with a number of contemporaries, who were vital to her work. Some, like Thomas Wentworth Higginson, are at least acknowledged (more so in the wake of Brenda Wineapple’s book White Heat).

But popular history has lost, and literary history has only lately recovered, the essential, decades-long bond between Dickinson and her sister-in-law Susan Huntington Dickinson. As Martha Nell Smith and Ellen Louise Hart show in Open Me Carefully, the poet was on fire from within but her connection to Susan—whom she called “Imagination” itself, and a source of knowledge second only to Shakespeare—helped fuel the flames. (…)

Even historically, it’s not as though the idea of the individual hero has lacked for dissidents. Critics from Herbert Spencer to Howard Zinn have challenged the “great man” theory of history, emphasizing cultures instead (in Zinn’s case, the culture of ordinary people). A more extreme challenge to the self-centered worldview comes from process philosophy, a school inspired by Alfred North Whitehead, which emphasizes exchange and inter-dependency—not only among people but between species; and not only moment to moment but across time.

In the best and most problematic ways, process thought makes the head spin. It feels right to many people, when they consider it, that truth is dynamic, not static; that we create one another; that our “selves” converge in the present from a relationship to the past and future. That human creativity stems from culture is hard to deny. Just look at the transcendentalists or the Bloomsbury circle—or the famous, peculiar cultures at Apple or Google—or the intense ferment in the scientists who split the atom.

But myths take hold for a reason. It’s easy and satisfying to reduce a big, complex cast to a single character—giving Edison sole credit for the light bulb, or Freud for psychoanalysis. (…)

The human mind depends on narrative, characters, and concrete action, while the idea of interdependence easily dissolves into abstraction. Say, for example, we trace the influences on Einstein, and draw concentric circles around him, first with his immediate peers (including Michele Besso, with whom Einstein worked out the theory of relativity in conversation), then to the scientific circle of his era, then to the influences of the previous generation. Where do we stop—with the ancient Greeks? Even if you acknowledge the depth and breadth of Einstein’s connections, it’s near irresistible to call him a genius and go on your way. Give an audience a big enough ensemble cast, and their eyes will naturally seek a star.

1 + 1 = Infinity

To take on the myth of the lone genius, we need not only to draw on the best science and history, we also need to focus on the fundamental social unit: the pair. As Tony Kushner writes in his notes to Angels in America, “the smallest indivisible unit is two people, not one; one is a fiction.” Buckminster Fuller got at the same idea when he wrote that “[u]nity is plural and, at minimum, is two.”

In the sphere of romantic love, most of us already accept the primacy of pairs. And much of the new relationship science is focused on romantic and personal intimacy. (…)

Hidden partners need scrutiny, as do the frontman and his sidekick, mentors and mentees, masters and muses. Let’s define collaboration broadly, as a mutuality that shapes a body of work.”

Joshua Wolf Shenk, Two is the magic number: a new science of creativity, Slate Magazine, Sept. 14, 2010

Jul
19th
Mon
permalink
Blair Bolles on the development of human speech

“The most important breakthrough came when individuals trusted one another enough to share their thoughts and believe what somebody else told them. That milestone was passed about 2.5 million years ago. Attempts to teach sign language have been successful enough to show that apes are smart enough to learn a few hundred words and even put a couple of them together, but in the wild they never actually do it. And even when trained to use sign language, they use it only to manipulate their trainers or in response to a question. So it cannot be intelligence that keeps apes from using any language at all.

The problem appears to be that there is no benefit from sharing information. If I tell you what I know and you only give me hogwash in return, I lose, you win. Somehow our ancestors came to trust one another and reap the great benefits that come from sharing knowledge honestly. More than intelligence, more than syntax, that social change made language possible.”
Blair Bolles in interview with T. DeLene Beeland, Why humans speak: It’s a matter of trust, The Charlotte Observer, Jul. 12, 2010 (via xixidu)
Mar
21st
Sun
permalink
Collective Intelligence. Interdependence of Networks for Human Development by Pierre Lévy

The hexagrams of the figure represent the six poles of collective intelligence and should be read in the following manner. Starting from the top, each line of a hexagram symbolizes a “semantic primitive”: Emptiness, Virtual, Actual, Sign, Being, Thing. Full lines mean that the corresponding primitives are “ON” and broken lines that the corresponding primitives are “OFF”. The diagram highlights the symmetry between two dialectics. Vertically, the virtual/actual binary dialectic juxtaposes and joins the two complementary triples: know-want-can / documents-persons-bodies. Horizontally, the ternary dialectic sign/being/thing juxtaposes and joins the three complementary pairs: know-documents / want-persons / can-bodies.

"I have adopted here the network or actor-network theory that is broadly used in human and social sciences, leading to the integration of mathematical tools of graph theory. This diagram shows essentially that a sustainable collective intelligence implies a continuous exchange of resources between the six kinds of human capitals. How can we develop methods to measure the value of the six kinds of human capitals and their flow of exchange? How can we extract these measurements from the analysis (preferably automatic) of data and online transactions of a given community or of a network of communities?"

— from The networks of collective intelligence, Random notes in French & English from Pierre Lévy
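
As a small aside, the hexagram notation in the caption is easy to mirror in code. The sketch below is just one illustrative encoding: the six primitive names come from the caption, while the function, its argument format and the example pattern are made up for illustration and do not appear in Lévy's text.

```python
# Illustrative encoding of Lévy's hexagram notation: six primitives read top
# to bottom, each ON (full line) or OFF (broken line). The example is made up.

PRIMITIVES = ("Emptiness", "Virtual", "Actual", "Sign", "Being", "Thing")

def hexagram(pattern: str) -> dict:
    """pattern: six characters, '1' for a full (ON) line, '0' for a broken (OFF) line."""
    if len(pattern) != 6 or set(pattern) - {"0", "1"}:
        raise ValueError("pattern must be six characters of '0' and '1'")
    return {name: bit == "1" for name, bit in zip(PRIMITIVES, pattern)}

print(hexagram("010101"))   # a made-up pole with Virtual, Sign and Thing ON
```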