Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso




Archive

Sep 29th, Sun

Kevin Kelly: The Improbable is the New Normal

"The improbable consists of more than just accidents. The internets are also brimming with improbable feats of performance — someone who can run up a side of a building, or slide down suburban roof tops, or stack up cups faster than you can blink. Not just humans, but pets open doors, ride scooters, and paint pictures. The improbable also includes extraordinary levels of super human achievements: people doing astonishing memory tasks, or imitating all the accents of the world. In these extreme feats we see the super in humans.

Every minute a new impossible thing is uploaded to the internet and that improbable event becomes just one of hundreds of extraordinary events that we’ll see or hear about today. The internet is like a lens which focuses the extraordinary into a beam, and that beam has become our illumination. It compresses the unlikely into a small viewable band of everyday-ness. As long as we are online - which is almost all day many days — we are illuminated by this compressed extraordinariness. It is the new normal.

That light of super-ness changes us. We no longer want mere presentations, we want the best, greatest, the most extraordinary presenters alive, as in TED. We don’t want to watch people playing games, we want to watch the highlights of the highlights, the most amazing moves, catches, runs, shots, and kicks, each one more remarkable and improbable than the other.

We are also exposed to the greatest range of human experience, the heaviest person, shortest midgets, longest mustache — the entire universe of superlatives! Superlatives were once rare — by definition — but now we see multiple videos of superlatives all day long, and they seem normal. Humans have always treasured drawings and photos of the weird extremes of humanity (early National Geographics), but there is an intimacy about watching these extremities on video on our phones while we wait at the dentist. They are now much realer, and they fill our heads.

I see no end to this dynamic. Cameras are becoming ubiquitous, so as our collective recorded life expands, we’ll accumulate thousands of videos showing people being struck by lightning. When we all wear tiny cameras all the time, then the most improbable accident, the most superlative achievement, the most extreme actions of anyone alive will be recorded and shared around the world in real time. Soon only the most extraordinary moments of our 6 billion citizens will fill our streams. So henceforth rather than be surrounded by ordinariness we’ll float in extraordinariness. (…)

When the improbable dominates the archive to the point that it seems as if the library contains ONLY the impossible, then these improbabilities don’t feel as improbable. (…)

To the uninformed, the increased prevalence of improbable events will make it easier to believe in impossible things. A steady diet of coincidences makes it easy to believe they are more than just coincidences, right? But to the informed, a slew of improbable events make it clear that the unlikely sequence, the outlier, the black swan event, must be part of the story. After all, in 100 flips of the penny you are just as likely to get 100 heads in a row as any other sequence. But in both cases, when improbable events dominate our view — when we see an internet river streaming nothing but 100 heads in a row — it makes the improbable more intimate, nearer.

I am unsure of what this intimacy with the improbable does to us. What happens if we spend all day exposed to the extremes of life, to a steady stream of the most improbable events, and try to run ordinary lives in a background hum of superlatives? What happens when the extraordinary becomes ordinary?

The good news may be that it cultivates in us an expanded sense of what is possible for humans, and for human life, and so expand us. The bad news may be that this insatiable appetite for super-superlatives leads to dissatisfaction with anything ordinary.”

Kevin Kelly, founding executive editor of Wired magazine and former editor/publisher of the Whole Earth Catalog, The Improbable is the New Normal, The Technium, Jan 7, 2013.
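Kelly’s coin-flip aside is easy to verify: any one fully specified sequence of 100 fair flips, the all-heads run included, has probability 2^-100. A minimal Python sketch (an editorial illustration, not part of Kelly’s essay):

```python
# Editorial sketch (not from Kelly's essay): any one fully specified
# sequence of 100 fair coin flips, the all-heads run included, has the
# same tiny probability, 1 / 2**100.
from fractions import Fraction

def probability_of_specific_sequence(n_flips: int) -> Fraction:
    """Probability of one particular, fully specified sequence of fair flips."""
    return Fraction(1, 2) ** n_flips

p = probability_of_specific_sequence(100)
print(p)          # 1/1267650600228229401496703205376
print(float(p))   # roughly 7.9e-31
```

Both a jumbled-looking sequence and the all-heads run get the same vanishing probability; what differs is only how many jumbled-looking sequences there are.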

Feb 3rd, Sun

'Elegance,' 'Symmetry,' and 'Unity': Is Scientific Truth Always Beautiful?


"Today the grandest quest of physics is to render compatible the laws of quantum physics—how particles in the subatomic world behave—with the rules that govern stars and planets. That’s because, at present, the formulas that work on one level implode into meaninglessness at the other level. This is deeply ungainly, and significant when the two worlds collide, as occurs in black holes. The quest to unify quantum physics (micro) and general relativity (macro) has spawned heroic efforts, the best-known candidate for a grand unifying concept presently being string theory. String theory proposes that subatomic particles are not particles at all but closed or open vibrating strings, so tiny, a hundred billion billion times shorter than an atomic nucleus’s diameter, that no human instrument can detect them. It’s the “music of the spheres”—think vibrating harp strings—made literal.

A concept related to string theory is “supersymmetry.” Physicists have shown that at extremely high energy levels, similar to those that existed a micro-blink after the big bang, the strength of the electromagnetic force, and strong and weak nuclear forces (which work only on subatomic levels), come tantalizingly close to converging. Physicists have conceived of scenarios in which the three come together precisely, an immensely intellectually and aesthetically pleasing accomplishment. But those scenarios imply the existence of as-yet-undiscovered “partners” for existing particles: The electron would be joined by a “selectron,” quarks by “squarks,” and so on. There was great hope that the $8-billion Large Hadron Collider would provide indirect evidence for these theories, but so far it hasn’t. (…)

[Marcelo Gleiser]: “We look out in the world and we see a very complicated pattern of stuff, and the notion of symmetry is an important way to make sense of the mess. The sun and moon are not perfect spheres, but that kind of approximation works incredibly well to simulate the behavior of these bodies.”

But the idea that what’s beautiful is true and that “symmetry rules,” as Gleiser puts it, “has been catapulted to an almost religious notion in the sciences,” he says. In his own book A Tear at the Edge of Creation (Free Press), Gleiser made a case for the beauty inherent in asymmetry—in the fact that neutrinos, the most common particles in the universe, spin only in one direction, for example, or that amino acids can be produced in laboratories in “left-handed” or “right-handed” forms, but only the “left-handed” form appears in nature. These are nature’s equivalent of Marilyn Monroe’s mole, attractive because of their lopsidedness, and Orrell also makes use of those examples.

But Weinberg, the Nobel-winning physicist at the University of Texas at Austin, counters: “Betting on beauty works remarkably well.” The Large Hadron Collider’s failure to produce evidence of supersymmetry is “disappointing,” he concedes, but he notes that plenty of elegant theories have waited years, even decades, for confirmation. Copernicus’s theory of a Sun-centered universe was developed entirely without experiment—he relied on Ptolemy’s data—and it was eventually embraced precisely because his description of planetary motion was simply more economical and elegant than those of his predecessors; it turned out to be true.

Closer to home, Weinberg says his own work on the weak nuclear force and electromagnetism had its roots in remarkably elegant, purely abstract theories of researchers who came before him, theories that, at first, seemed to be disproved by evidence but were too elegant to stop thinking about. (…)

To Orrell, it’s not just that many scientists are too enamored of beauty; it’s that their notion of beauty is ossified. It is “kind of clichéd,” Orrell says. “I find things like perfect symmetry uninspiring.” (In fairness, the Harvard theoretical physicist Lisa Randall has used the early unbalanced sculptures of Richard Serra as an example of how the asymmetrical can be as fascinating as the symmetrical, in art as in physics. She finds this yin-yang tension perfectly compatible with modern theorizing.)

Orrell also thinks it is more useful to study the behavior of complex systems rather than their constituent elements. (…)

Outside of physics, Orrell reframes complaints about “perfect-model syndrome” in aesthetic terms. Classical economists, for instance, treat humans as symmetrical in terms of what motivates decision-making. In contrast, behavioral economists are introducing asymmetry into that field by replacing Homo economicus with a quirkier, more idiosyncratic and human figure—an aesthetic revision, if you like. (…)

The broader issue, though, is whether science’s search for beautiful, enlightening patterns has reached a point of diminishing returns. If science hasn’t yet hit that point, might it be approaching it? The search for symmetry in nature has had so many successes, observes Stephon Alexander, a Dartmouth physicist, that “there is a danger of forgetting that nature is the one that decides where that game ends.”

Christopher Shea, American writer and editor, Is Scientific Truth Always Beautiful?, The Chronicle of Higher Education, Jan 28, 2013.

The Asymmetry of Life

Image courtesy of Ben Lansky

"Look into a mirror and you’ll simultaneously see the familiar and the alien: an image of you, but with left and right reversed.

Left-right inequality has significance far beyond that of mirror images, touching on the heart of existence itself. From subatomic physics to life, nature prefers asymmetry to symmetry. There are no equal liberties when neutrinos and proteins are concerned. In the case of neutrinos, particles that spill out of the sun’s nuclear furnace and pass through you by the trillions every second, only leftward-spinning ones exist. Why? No one really knows.

Proteins are long chains of amino acids that can be either left- or right-handed. Here, handedness has to do with how these molecules interact with polarized light, rotating it either to the left or to the right. When synthesized in the lab, amino acids come out fifty-fifty. In living beings, however, all proteins are made of left-handed amino acids. And all sugars in RNA and DNA are right-handed. Life is fundamentally asymmetric.

Is the handedness of life, its chirality (think chiromancer, which means “palm reader”), linked to its origins some 3.5 billion years ago, or did it develop after life was well on its way? If one traces life’s origins from its earliest stages, it’s hard to see how life began without molecular building blocks that were “chirally pure,” consisting solely of left- or right-handed molecules. Indeed, many models show how chirally pure amino acids may link to form precursors of the first protein-like chains. But what could have selected left-handed over right-handed amino acids?

My group’s research suggests that early Earth’s violent environmental upheavals caused many episodes of chiral flip-flopping. The observed left-handedness of terrestrial amino acids is probably a local fluke. Elsewhere in the universe, perhaps even on other planets and moons of our solar system, amino acids may be right-handed. But only sampling such material from many different planetary platforms will determine whether, on balance, biology is left-handed, right-handed, or ambidextrous.”

Marcelo Gleiser, The Asymmetry of Life, § SEEDMAGAZINE, Sep 7, 2010.

"One of the deepest consequences of symmetries of any kind is their relationship with conservation laws. Every symmetry in a physical system, be it balls rolling down planes, cars moving on roads, planets orbiting the Sun, a photon hitting an electron, or the expanding Universe, is related to a conserved quantity, a quantity that remains unchanged in the course of time. In particular, external (spatial and temporal) symmetries are related to the conservation of momentum and energy, respectively: the total energy and momentum of a system that is temporally and spatially symmetric remains unchanged.

The elementary particles of matter live in a reality very different from ours. The signature property of their world is change: particles can morph into one another, changing their identities. […] One of the greatest triumphs of twentieth-century particle physics was the discovery of the rules dictating the many metamorphoses of matter particles and the symmetry principles behind them. One of its greatest surprises was the realization that some of the symmetries are violated and that these violations have very deep consequences. (…) p.27

Even though matter and antimatter appear on an equal footing in the equations describing relativistic particles, antimatter occurs only rarely. […] Somehow, during its infancy, the cosmos selected matter over antimatter. This imperfection is the single most important factor dictating our existence. (…)

Back to the early cosmos: had there been an equal quantity of antimatter particles around, they would have annihilated the corresponding particles of matter and all that would be left would be lots of gamma-ray radiation and some leftover protons and antiprotons in equal amounts. Definitely not our Universe. The tiny initial excess of matter particles is enough to explain the overwhelming excess of matter over antimatter in today’s Universe. The existence of matter, the stuff we and everything else are made of, depends on a primordial imperfection, the matter-antimatter asymmetry. (…) p.29.

We have seen how the weak interactions violate a series of internal symmetries: charge conjugation, parity, and even the combination of the two. The consequences of these violations are deeply related to our existence: they set the arrow of time at the microscopic level, providing a viable mechanism to generate the excess of matter over antimatter. […] The message from modern particle physics and cosmology is clear: we are the products of imperfections in Nature. (…)

It is not symmetry and perfection that should be our guiding principle, as it has been for millennia. We don’t have to look for the mind of God in Nature and try to express it through our equations. The science we create is just that, our creation. Wonderful as it is, it is always limited, it is always constrained by what we know of the world. […] The notion that there is a well-defined hypermathematical structure that determines all there is in the cosmos is a Platonic delusion with no relationship to physical reality. (…) p. 35.

The critics of this idea miss the fact that a meaningless cosmos that produced humans (and possibly other intelligences) will never be meaningless to them (or to the other intelligences). To exist in a purposeless Universe is even more meaningful than to exist as the result of some kind of mysterious cosmic plan. Why? Because it elevates the emergence of life and mind to a rare event, as opposed to a ubiquitous and premeditated one. For millennia, we believed that God (or gods) protected us from extinction, that we were chosen to be here and thus safe from ultimate destruction. […]

When science proposes that the cosmos has a sense of purpose wherein life is a premeditated outcome of natural events, a similar safety blanket mechanism is at play: if life fails here, it will succeed elsewhere. We don’t really need to preserve it. To the contrary, I will argue that unless we accept our fragility and cosmic loneliness, we will never act to protect what we have. (…)

The laws of physics and the laws of chemistry as presently understood have nothing to say about the emergence of life. As Paul Davies remarked in Cosmic Jackpot, notions of a life principle suffer from being teleologic, explaining life as the end goal, a purposeful cosmic strategy. The human mind, of course, would be the crown jewel of such creative drive. Once again we are “chosen” ones, a dangerous proposal. […] Arguments shifting the “mind of God” to the “mind of the cosmos” perpetuate our obsession with the notion of Oneness. Our existence need not be planned to be meaningful.” (…) p.49.

Unified theories, life principles, and self-aware universes are all expressions of our need to find a connection between who we are and the world we live in. I do not question the extreme importance of understanding the connection between man and the cosmos. But I do question that it has to derive from unifying principles. (…) p.50.

My point is that there is no Final Truth to be discovered, no grand plan behind creation. Science advances as new theories engulf or displace old ones. The growth is largely incremental, punctuated by unexpected, worldview-shattering discoveries about the workings of Nature. […]

Once we understand that science is the creation of human minds and not the pursuit of some divine plan (even if metaphorically) we shift the focus of our search for knowledge from the metaphysical to the concrete. (…) p.51.

For a clever fish, water is “just right” for it to swim in. Had it been too cold, it would freeze; too hot, it would boil. Surely the water temperature had to be just right for the fish to exist. “I’m very important. My existence cannot be an accident,” the proud fish would conclude. Well, he is not very important. He is just a clever fish. The ocean temperature is not being controlled with the purpose of making it possible for it to exist. Quite the opposite: the fish is fragile. A sudden or gradual temperature swing would kill it, as any trout fisherman knows. We so crave for meaningful connections that we see them even when they are not there.

We are soulful creatures in a harsh cosmos. This, to me, is the essence of the human predicament. The gravest mistake we can make is to think that the cosmos has plans for us, that we are somehow special from a cosmic perspective. (…) p.52

We are witnessing the greatest mass extinction since the demise of the dinosaurs 65 million years ago. The difference is that for the first time in history, humans, and not physical causes, are the perpetrators. […] Life recovered from the previous five mass extinctions because the physical causes eventually ceased to act. Unless we understand what is happening and start acting together as a species we may end up carving the path toward our own destruction. (…)” p.56

Marcelo Gleiser is the Appleton Professor of Natural Philosophy at Dartmouth College, A Tear at the Edge of Creation, Free Press, 2010.
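The link Gleiser draws between symmetries and conserved quantities is Noether’s theorem. A minimal statement of the standard result, added here as an editorial aside rather than a quotation from the book:

```latex
% Noether's theorem in its standard textbook form (an editorial aside,
% not a quotation from Gleiser's book). For a Lagrangian L(q, \dot q, t):
\[
  \text{if } L \text{ is unchanged by the continuous shift } q \to q + \epsilon\,\delta q,
  \qquad\text{then}\qquad
  Q = \frac{\partial L}{\partial \dot q}\,\delta q
  \qquad\text{is conserved:}\qquad \frac{dQ}{dt} = 0 .
\]
% Invariance under spatial translations conserves momentum; invariance
% under time translation conserves energy, the pairing quoted above.
```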

See also:

Symmetry in Physics - Bibliography - PhilPapers
The Concept of Laws. The special status of the laws of mathematics and physics, Lapidarium notes
Universe tag on Lapidarium notes

Apr 26th, Thu

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

   

"That’s what we do with all of our art. A beautiful cathedral, a beautiful painting, a beautiful song—-all of those are ecstatic visions held in stasis; in some sense the artist is saying “here is a glimpse I had of something ephemeral and fleeting and magical, and I’m doing my best to instantiate that into stone, into paint, into stasis.” And that’s what human beings have always done, we try to capture these experiences before they go dim, we try to make sure that what we glimpse doesn’t fade away before we get hungry or sleepy later. (…)

We want to transcend our biological limitations. We don’t want biology or entropy to interrupt the ecstasy of consciousness. Consciousness, when it’s unburdened by the body, is something that’s ecstatic; we use the mind to watch the mind, and that’s the meta-nature of our consciousness, we know that we know that we know, and that’s such a delicious feeling, but when it’s unburdened by biology and entropy it becomes more than delicious; it becomes magical. I mean, think of the unburdening of the ego that takes place when we watch a film; we sit in a dark room, it’s sort of a modern church, we turn out the lights and an illumination beams out from behind us creating these ecstatic visions. We lose ourselves in the story, we experience a genuine catharsis, the virtual becomes real—-it’s total transcendence, right? (…)

This haunting idea of the passing of time, of the slipping away of the treasured moments of our lives, became a catalyst for my thinking a lot about mortality. This sense that the moment is going to end, the night will be over, and that we’re all on this moving walkway headed towards death; I wanted a diversion from that reality. In Ernest Becker's book The Denial of Death, he talks about how the neurotic human condition is not a product of our sexual repression, but rather our repression in the face of death anxiety. We have this urgent knot in our stomach because we’re keenly aware that we’re mortal, and so we try to find these diversions so that we don’t think about it—-and these have manifested into the religious impulse, the romantic impulse, and the creative impulse.

As we increasingly become sophisticated, cosmopolitan people, the religious impulse is less relevant. The romantic impulse has served us well, particularly in popular culture, because that’s the impulse that allows us to turn our lovers into deities; we say things like “she’s like salvation, she’s like the wind,” and we end up worshipping our lovers. We invest in this notion that to be loved by someone is to be saved by someone. But ultimately no relationship can bear the burden of godhood; our lovers reveal their clay feet and their frailties and they come back down to the world of biology and entropy. 

So then we look for salvation in the creative impulse, this drive to create transcendent art, or to participate in aesthetic arrest. We make beautiful architecture, or beautiful films that transport us to this lair where we’re like gods outside of time. But it’s still temporal. The arts do achieve that effect, I think, and so do technologies to the extent that they’re extensions of the human mind, extensions of our human longing. In a way, that is the first pathway to being immortal gods. Particularly with technologies like the space shuttle, which make us into gods in the sense that they let us hover over the earth looking down on it. But then we’re not gods, because we still age and we die.

But even if you see the singularity only as a metaphor, you have to admit it’s a pretty wonderful metaphor, because human nature, if nothing else, consists of this desire to transcend our boundaries—-the entire history of man from hunter gatherer to technologist to astronaut is this story of expanding and transcending our boundaries using our tools. And so whether the metaphor works for you or not, that’s a wonderful way to live your life, to wake up every day and say, “even if I am going to die I am going to transcend my human limitations.” And then if you make it literal, if you drop this pretense that it’s a metaphor, you notice that we actually have doubled our lifespan, we really have improved the quality of life across the world, we really have created magical devices that allow us to send our thoughts across space at nearly the speed of light. We really are on the cusp of reprogramming our biology like we program computers. 

All of the sudden this metaphor of the singularity spills over into the realm of the possible, and it makes it that much more intoxicating; it’s like going from two dimensions to three dimensions, or black and white to color. It just keeps going and going, and it never seems to hit the wall that other ideas hit, where you have to stop and say to yourself “stop dreaming.” Here you can just kind of keep dreaming, you can keep making these extrapolations of Moore’s Law, and say “yeah, we went from building-sized supercomputers to the iPhone, and in forty-five years it will be the size of a blood cell.” That’s happening, and there’s no reason to think it’s going to stop.

Q: Going through your videos, I noticed that one vision of the singularity that you keep returning to is this idea of “substrate-independent minds.” Can you explain what a substrate independent mind is, and why it makes for such a compelling vision of the future?

Jason Silva: That has to do with what’s called STEM compression, which is this notion that all technologies become compressed in terms of space, time, energy and matter (STEM) as they evolve. Our brain is a great example of this; it’s got this dizzying level of complexity for such a small space, but the brain isn’t optimal. The optimal scenario would be to have brain-level complexity, or even higher-level complexity in something that’s the size of cell. If we radically upgrade our bodies with biotech, we might find that in addition to augmenting our biological capabilities, we’re also going to be replacing more of our biology with non-biological components, so that things are backed up and decentralized and not subject to entropy. More and more of the data processing that makes up our consciousness is going to be non-biological, and eventually we might be able to discard biology altogether, because we’ll have finally invented a computational substrate that supports the human mind. 

At that point, if we’re doing computing at the nano scale, or the femto scale, which is even smaller, you could see extraordinary things. What if we could store all of the computing capacity of the world’s computer networks in something that operates at the femto scale? What if we could have thinking, dreaming, conscious minds operating at the femto scale? That would be a substrate independent mind.

You can even go beyond that. John Smart has this really interesting idea he calls the Transcension Hypothesis. It’s this idea that that all civilizations hit a technological singularity, after which they stop expanding outwards, and instead become subject to STEM compression that pushes them inward into denser and denser computational states until eventually we disappear out of the visible universe, and we enter into a black-hole-like condition. So you’ve got digital minds exponentially more powerful than the ones we use today, operating in the computational substrate, at the femto scale, and they’re compressing further and further into a black hole state, because a black hole is the most efficient computational substrate that physics has ever described. I’m not a physicist, but I have read physicists who say that black holes are the ultimate computers, and that’s why the whole STEM compression idea is so interesting, especially with substrate independent minds; minds that can hop back and forth between different organizational structures of matter.  (…)

With technology, we’ve been doing the same thing we used to with religion, which is to dream of a better way to exist, but technology actually gives you real ways to extend your thoughts and your vision. (…)



The mind is always participating in these feedback loops with the spaces it resides in; whatever is around us is a mirror that we’re holding up to ourselves, because everything we’re thinking about we’re creating a model of in our heads. So when you’re in constrained spaces you’re having constrained thoughts, and when you’re in vast spaces you have vast thoughts. So when you get to sit and contemplate actual outer space, solar systems, and galaxies, and super clusters—-think about how much that expands your inner world. That’s why we get off on space. 

I also get off on synthetic biology, because I love the metaphors that exist between technology and biology: the idea that we may be able to reprogram the operating system, or upgrade the software of our biology. It’s a great way to help people understand what’s possible with biology, because people already understand the power we have over the digital world—-we’re like gods in cyberspace, we can make anything come into being. When the software of biology is subject to that very same power, we’re going to be able to do those same things in the realm of living things. There’s this Freeman Dyson line that I have quoted a million times in my videos, to the point where people are actually calling me out about it, but the reason I keep coming back to it is that it’s so emblematic of my awe in thinking about this stuff—-he says that "in the future, a new generation of artists will be writing genomes as fluently as Blake and Byron wrote verses." It’s a really well placed analogy, because the alphabet is a technology; you can use it to engender alphabetic rapture with literature and poetry. Guys like Shakespeare and Blake and Byron were technologists who used the alphabet to engineer wonderful things in the world. With biology, new generations of artists will be able to perform the same miracles that Shakespeare and those guys did with words, only they’ll be doing it with genes.

Q: You romanticize technology in some really interesting ways; in one of your videos you say that if you could watch the last century in time lapse you would see ideas spilling out of the human mind and into the physical universe. Do you expect that interface between the mind and the physical to become even more lubricated as time passes? Or are there limits, physical or otherwise, that we’re eventually going to run up against?

Jason Silva: It’s hard to say, because as our tools become more powerful they shrink the buffer time between our dreams and our creations. Today we still have this huge lag time between thinking and creation. We think of something, and then we have to go get the stuff for it, and then we have to build it—-it’s not like we can render it at the speed of thought. But eventually it will get to the point where it will be like that scene in Inception where he says that we can create and perceive our world at the same time. Because, again, if you look at human progress in time lapse, it is like that scene in Inception. People thought “airplane, aviation, jet engine” and then those things were in the world. If you look at the assembly line of an airplane in time lapse it actually looks self-organizing; you don’t see all of these agencies building it, instead it’s just being formed. And when you see the earth as the biosphere, as this huge integrated system, then you see this stuff just forming over time, just popping into existence. There’s this process of intention, imagination and instantiation, and the buffer time between each of those steps is getting smaller and smaller. (…)”

Jason Silva, Venezuelan-American television personality, filmmaker, gonzo journalist and founding producer/host for Current TV, A Timothy Leary for the Viral Video Age, The Atlantic, Apr 12, 2012.

Turning Into Gods - ‘Concept Teaser’ by Jason Silva

"Turning Into Gods is a new feature length documentary exploring mankind’s journey to ‘play jazz with the universe’… it is a story of our ultimate potential, the reach of our intelligence, the scope of our scientific and engineering abilities and the transcendent quality of our heroic and noble calling.

Thinking, feeling, striving, man is what Pierre Teilhard de Chardin called “the ascending arrow of the great biological synthesis.”… today we walk a tight-rope between ape and Nietzsche’s Overman… how will we make it through, and what is the texture and color of our next refined and designed evolutionary leap? (…)

"We’re on the cusp of a bio-tech/nanotech/artificial-intelligence revolution that will open up new worlds of exploration. And we should open our minds to the limitless, mind-boggling possibilities.”

Why We Could All Use a Heavy Dose of Techno-optimism, Vanity Fair, May 7, 2010.

See also:

‘To understand is to perceive patterns’, Lapidarium notes
Wildcat and Jason Silva on immortality
☞ Jason Silva, The beginning of infinity (video)
Kevin Kelly on information, evolution and technology: ‘The essence of life is not energy but ideas’, Lapidarium notes
Kevin Kelly on Why the Impossible Happens More Often
Waking Life ☞ animated film focuses on the nature of dreams, consciousness, and existentialism. Eamonn Healy speaks about telescopic evolution and the future of humanity
Mark Changizi on Humans, Version 3.0.
Science historian George Dyson: Unravelling the digital code
Technology tag on Lapidarium notes

Jan 17th, Tue

The Rise of Complexity. Scientists replicate key evolutionary step in life on earth

                        
Green cells are undergoing cell death, a cellular division-of-labor—fostering new life.

More than 500 million years ago, single-celled organisms on Earth’s surface began forming multi-cellular clusters that ultimately became plants and animals. (…)

The yeast “evolved” into multi-cellular clusters that work together cooperatively, reproduce and adapt to their environment—in essence, they became precursors to life on Earth as it is today. (…)

The finding that the division-of-labor evolves so quickly and repeatedly in these ‘snowflake’ clusters is a big surprise. (…) The first step toward multi-cellular complexity seems to be less of an evolutionary hurdle than theory would suggest.” (…)

"To understand why the world is full of , including humans, we need to know how one-celled organisms made the switch to living as a group, as multi-celled organisms.” (…)

"This study is the first to experimentally observe that transition," says Scheiner, "providing a look at an event that took place hundreds of millions of years ago." (…)

The scientists chose Brewer’s yeast, or Saccharomyces cerevisiae, a species of yeast used since ancient times to make bread and beer, because it is abundant in nature and grows easily.

They added it to nutrient-rich culture media and allowed the cells to grow for a day in test tubes.

Then they used a centrifuge to stratify the contents by weight.

As the mixture settled, cell clusters landed on the bottom of the tubes faster because they are heavier. The biologists removed the clusters, transferred them to fresh media, and agitated them again.

                   
First steps in the transition to multi-cellularity: ‘snowflake’ yeast with dead cells stained red.

Sixty cycles later, the clusters—now hundreds of cells—looked like spherical snowflakes.

Analysis showed that the clusters were not just groups of random cells that adhered to each other, but related cells that remained attached following cell division.

That was significant because it meant that they were genetically similar, which promotes cooperation. When the clusters reached a critical size, some cells died off in a process known as apoptosis to allow offspring to separate.

The offspring reproduced only after they attained the size of their parents. (…)

                       
Multi-cellular yeast individuals containing central dead cells, which promote reproduction.

"A cluster alone isn’t multi-cellular," William Ratcliff says. "But when cells in a cluster cooperate, make sacrifices for the common good, and adapt to change, that’s an evolutionary transition to multi-cellularity."

In order for multi-cellular organisms to form, most cells need to sacrifice their ability to reproduce, an altruistic action that favors the whole but not the individual. (…)

For example, all cells in the human body are essentially a support system that allows sperm and eggs to pass DNA along to the next generation.

Thus multi-cellularity is by its nature very cooperative.

"Some of the best competitors in nature are those that engage in cooperation, and our experiment bears that out. (…)

Evolutionary biologists have estimated that multi-cellularity evolved independently in about 25 groups.”

Scientists replicate key evolutionary step in life on earth, Physorg, Jan 16, 2012.
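The settle-and-transfer cycles described above amount to artificial selection on settling speed. A toy simulation of that logic (an editorial sketch, not the authors’ protocol or code; the population size, transfer fraction, and mutation spread are hypothetical):

```python
# Minimal sketch of settling selection (editorial illustration only):
# lineages forming larger clusters settle faster, so each "transfer"
# keeps the fastest-settling fraction; mean cluster size drifts upward.
import random

random.seed(1)

population = [1.0] * 1000   # starting cluster sizes (single cells)
KEEP_FRACTION = 0.1         # bottom-of-the-tube fraction transferred each cycle
CYCLES = 60                 # the experiment ran about 60 settle-and-transfer cycles

for _ in range(CYCLES):
    # settling speed grows with cluster size, so keep the largest clusters
    survivors = sorted(population, reverse=True)[: int(len(population) * KEEP_FRACTION)]
    # regrow the population from survivors, with small heritable variation
    # in how large a cluster grows before offspring separate
    population = [
        max(1.0, random.choice(survivors) * random.gauss(1.0, 0.05))
        for _ in range(1000)
    ]

print(round(sum(population) / len(population), 1))  # mean cluster size after 60 cycles
```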

Evolution: The Rise of Complexity

"Let’s rewind time back about 3.5 billion years. Our beloved planet looks nothing like the lush home we know today – it is a turbulent place, still undergoing the process of formation. Land is a fluid concept, consisting of molten lava flows being created and destroyed by massive volcanoes. The air is thick with toxic gasses like methane and ammonia which spew from the eruptions. Over time, water vapor collects, creating our first weather events, though on this early Earth there is no such thing as a light drizzle. Boiling hot acid rain pours down on the barren land for millions of years, slowly forming bubbling oceans and seas. Yet in this unwelcoming, violent landscape, life begins.

The creatures which dared to arise are called cyanobacteria, or blue-green algae. They were the pioneers of photosynthesis, transforming the toxic atmosphere by producing oxygen and eventually paving the way for the plants and animals of today. But what is even more incredible is that they were the first to do something extraordinary – they were the first cells to join forces and create multicellular life. (…)

In a PNAS paper published online this week, William Ratcliff and his colleagues at the University of Minnesota show how multicellular yeast can arise in less than two months in the lab. (…)

All of their cultures went from single cells to snowflake-like clumps in less than 60 days. “Although known transitions to complex multicellularity, with clearly differentiated cell types, occurred over millions of years, we have shown that the first crucial steps in the transition from unicellularity to multicellularity can evolve remarkably quickly under appropriate selective conditions,” write the authors. These clumps weren’t just independent cells sticking together for the sake of it – they acted as rudimentary multicellular creatures. They were formed not by random cells attaching but by genetically identical cells not fully separating after division. Furthermore, there was division of labor between cells. As the groups reached a certain size, some cells underwent programmed cell death, providing places for daughter clumps to break from. Since individual cells acting as autonomous organisms would value their own survival, this intentional culling suggests that the cells acted instead in the interest of the group as a whole organism.

Given how easily multicellular creatures can arise in test tubes, it might then come as no surprise that multicellularity has arisen at least a dozen times in the history of life, independently in bacteria, plants and of course, animals, beginning the evolutionary tree that we sit atop today. Our evolutionary history is littered with leaps of complexity. While such intricacies might seem impossible, study after study has shown that even the most complex structures can arise through the meandering path of evolution. In Evolution’s Witness, Ivan Schwab explains how one of the most complex organs in our body, our eyes, evolved. (…)

Eyes are highly intricate machines that require a number of parts working together to function. But not even the labyrinthine structures in the eye present an insurmountable barrier to evolution.

Our ability to see began to evolve long before animals radiated. Visual pigments, like retinal, are found in all animal lineages, and were first harnessed by prokaryotes to respond to changes in light more than 2.5 billion years ago. But the first complex eyes can be found about 540 million years ago, during a time of rapid diversification colloquially referred to as the Cambrian Explosion. It all began when comb jellies, sponges and jellyfish, along with clonal bacteria, were the first to group photoreceptive cells and create light-sensitive ‘eyespots’. These primitive visual centers could detect light intensity, but lacked the ability to define objects. That’s not to say, though, that eyespots aren’t important – eyespots are such an asset that they arose independently in at least 40 different lineages. But it was the other invertebrate lineages that would take the simple eyespot and turn it into something incredible.

According to Schwab, the transition from eyespot to eye is quite small. “Once an eyespot is established, the ability to recognize spatial characteristics – our eye definition – takes one of two mechanisms: invagination (a pit) or evagination (a bulge).” Those pits or bulges can then be focused with any clear material forming a lens (different lineages use a wide variety of molecules for their lenses). Add more pigments or more cells, and the vision becomes sharper. Each alteration is just a slight change from the one before, a minor improvement well within bounds of evolution’s toolkit, but over time these small adjustments led to intricate complexity.

In the Cambrian, eyes were all the rage. Arthropods were visual trendsetters, creating compound eyes by using the latter approach, that of bulging, then combining many little bulges together. One of the era’s top predators, Anomalocaris, had over 16,000 lenses! So many creatures arose with eyes during the Cambrian that Andrew Parker, a visiting member of the Zoology Department at the University of Oxford, believes that the development of vision was the driver behind the evolutionary explosion. His ‘Light-Switch’ hypothesis postulates that vision opened the doors for animal innovation, allowing rapid diversification in modes and mechanisms for a wide set of ecological traits. Even if eyes didn’t spur the Cambrian explosion, their development certainly irrevocably altered the course of evolution.

                          
Fossilized compound eyes from Cambrian arthropods (Lee et al. 2011)

Our eyes, as well as those of octopuses and fish, took a different approach than those of the arthropods, putting photo receptors into a pit, thus creating what is referred to as a camera-style eye. In the fossil record, eyes seem to emerge from eyeless predecessors rapidly, in less than 5 million years. But is it really possible that an eye like ours arose so suddenly? Yes, say biologists Dan-E. Nilsson and Susanne Pelger. They calculated a pessimistic guess as to how long it would take for small changes – just 1% improvements in length, depth, etc per generation – to turn a flat eyespot into an eye like our own. Their conclusion? It would only take about 400,000 years – a geological instant.
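The arithmetic behind Nilsson and Pelger’s estimate is simple compounding: 1% changes multiply, so even a huge total change needs only a few thousand steps. A generic sketch of that compounding (an editorial illustration; the 80-million-fold factor is an example figure, and the published generation and year estimates come from their paper, not from this code):

```python
# Editorial illustration of compounding 1% changes (not Nilsson & Pelger's
# actual model, which tracks specific optical parameters of the eye).
import math

def steps_of_one_percent(total_factor: float, step: float = 0.01) -> int:
    """How many changes of `step` (1% by default) multiply a trait by total_factor."""
    return math.ceil(math.log(total_factor) / math.log(1.0 + step))

# example figure only: an 80-million-fold total change in some measure of shape
print(steps_of_one_percent(80_000_000))   # on the order of 1,800 steps
```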

How does complexity arise in the first place?

But how does complexity arise in the first place? How did cells get photoreceptors, or any of the first steps towards innovations such as vision? Well, complexity can arise a number of ways.

Each and every one of our cells is a testament to the simplest way that complexity can arise: have one simple thing combine with a different one. The powerhouses of our cells, called mitochondria, are complex organelles that are thought to have arisen in a very simple way. Some time around 3 billion years ago, certain bacteria had figured out how to create energy using electrons from oxygen, thus becoming aerobic. Our ancient ancestors thought this was quite a neat trick, and, as single cells tend to do, they ate these much smaller energy-producing bacteria. But instead of digesting their meal, our ancestors allowed the bacteria to live inside them as an endosymbiont, and so the deal was struck: our ancestor provides the fuel for the chemical reactions that the bacteria perform, and the bacteria, in turn, produces ATP for both of them. Even today we can see evidence of this early agreement – mitochondria, unlike other organelles, have their own DNA, reproduce independently of the cell’s reproduction, and are enclosed in a double membrane (the bacterium’s original membrane and the membrane capsule used by our ancestor to engulf it).

Over time the mitochondria lost other parts of their biology they didn’t need, like the ability to move around, blending into their new home as if they never lived on their own. The end result of all of this, of course, was a much more complex cell, with specialized intracellular compartments devoted to different functions: what we now refer to as a eukaryote.

Complexity can arise within a cell, too, because our molecular machinery makes mistakes. On occasion, it duplicates sections of DNA, entire genes, and even whole chromosomes, and these small changes to our genetic material can have dramatic effects. We saw how mutations can lead to a wide variety of phenotypic traits when we looked at how artificial selection has shaped dogs. These molecular accidents can even lead to complete innovation, like the various adaptations of flowering plants that I talked about in my last Evolution post. And as these innovations accumulate, species diverge, losing the ability to reproduce with each other and filling new roles in the ecosystem. While the creatures we know now might seem unfathomably intricate, they are the product of billions of years of slight variations accumulating.

Of course, while I focused this post on how complexity arose, it’s important to note that more complex doesn’t necessarily mean better. While we might notice the eye and marvel at its detail, success, from the viewpoint of an evolutionary lineage, isn’t about being the most elaborate. Evolution only leads to increases in complexity when complexity is beneficial to survival and reproduction.

Indeed, simplicity has its perks: the more simple you are, the faster you can reproduce, and thus the more offspring you can have. Many bacteria live happy simple lives, produce billions of offspring, and continue to thrive, representatives of lineages that have survived billions of years. Even complex organisms may favor less complexity – parasites, for example, are known for their loss of unnecessary traits and even whole organ systems, keeping only what they need to get inside and survive in their host. Darwin referred to them as regressive for seemingly violating the unspoken rule that more complex arises from less complex, not the other way around. But by not making body parts they don’t need, parasites conserve energy, which they can invest in other efforts like reproduction.

When we look back in an attempt to grasp evolution, it may instead be the lack of complexity, not the rise of it, that is most intriguing.”

See also:

Scientists recreate evolution of complexity using ‘molecular time travel’
Nature Has A Tendency To Reduce Complexity
Emergence and Complexity - prof. Robert Sapolsky’s lecture, Stanford University (video)

Jan 13th, Fri

Can A Scientist Define “Life”?

"Defining life poses a challenge that’s downright philosophical. (…) When Portland State University biologist Radu Popa was working on a book about defining life, he decided to count up all the definitions that scientists have published in books and scientific journals. Some scientists define life as something capable of metabolism. Others make the capacity to evolve the key distinction. Popa gave up counting after about 300 definitions.

Things haven’t gotten much better in the years since Popa published Between Necessity and Probability: Searching for the Definition and Origin of Life in 2004. Scientists have unveiled even more definitions, yet none of them have been widely embraced. But now Edward Trifonov, a biologist at the University of Haifa in Israel (…) analyzed the linguistic structure of 150 definitions of life, grouping similar words into categories. He found that he could sum up what they all have in common in three words. Life, Trifonov declares, is simply self-reproduction with variations.

Trifonov argues that this minimal definition is useful because it encompasses both life as we know it and life as we may discover it to be. And as scientists tinker with self-replicating molecules, they may be able to put his definition to the test. It may be possible for them to create a system of molecules that meets the requirements. If it fails to come “alive,” it will show that the definition was missing something crucial about life. (…)

A number of the scientists who responded to Trifonov felt that his definition was missing one key feature or another, such as metabolism, a cell, or information. Eugene Koonin, a biologist at the National Center for Biotechnology Information, thinks that Trifonov’s definition is missing error correction. He argues that “self-reproduction with variation” is redundant, since the laws of thermodynamics ensure that error-free replication is impossible. “The problem is the exact opposite,” Koonin observes: if life replicates with too many errors, it stops replicating. He offers up an alternative: life requires “replications with an error rate below the sustainability threshold.”

Jack Szostak, a Nobel-prize winning Harvard biologist, simply rejects the search for any definition of life. “Attempts to define life are irrelevant to scientific efforts to understand the origin of life,” he writes (article PDF).

Szostak himself has spent two decades tinkering with biological molecules to create simple artificial life. Instead of using DNA to store genetic information and proteins to carry out chemical reactions, Szostak hopes to create cells that only contain single-stranded RNA molecules. Like many researchers, Szostak suspects that RNA-based life preceded DNA-based life. It may have even been the first kind of life on Earth, even if it cannot be found on the planet today.

Life, Szostak suspects, arose through a long series of steps, as small molecules began interacting with each other, replicating, getting enveloped into cells, and so on. Once there were full-blown cells that could grow, divide, and evolve, no one would deny that life had come to exist on Earth. But it’s pointless to try to find the precise point along the path where life suddenly sprang into being and met an arbitrary definition. “None of this matters, however, in terms of the fundamental scientific questions concerning the transitions leading from chemistry to biology,” says Szostak.

It’s conceivable that Mars has Earth-like life, either because one planet infected the other, or because chemistry became biology along the same path on both of them. In either case, Curiosity [rover] may be able to do some good science when it arrives at Mars this summer. But if it’s something fundamentally different, even the most sophisticated machines may not be able to help us until we come to a decision about what we’re looking for in the first place.”

Carl Zimmer, popular science writer and blogger, Can A Scientist Define “Life”?, Txchnologist, Jan 10, 2012. (Illustration: Russell Kightley)
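Koonin’s “error rate below the sustainability threshold” has a classic quantitative counterpart, Eigen’s error threshold for replicating sequences. A back-of-the-envelope sketch of that standard result (an editorial aside, not from Zimmer’s article; the numbers are hypothetical):

```python
# Editorial sketch of Eigen's error-threshold argument, one standard way to
# make a "sustainability threshold" for copying errors quantitative.
# A replicator with genome length L, per-site error rate u and selective
# advantage sigma over its mutants keeps its "master" sequence only while
# whole-genome fidelity (1 - u)**L stays above 1/sigma, i.e. roughly
# L < ln(sigma) / u.
import math

def max_sustainable_length(error_rate: float, sigma: float) -> int:
    """Approximate longest genome maintainable at a given per-site error rate."""
    return int(math.log(sigma) / error_rate)

print(max_sustainable_length(error_rate=1e-4, sigma=10.0))  # about 23,000 sites
print(max_sustainable_length(error_rate=1e-2, sigma=10.0))  # about 230 sites
```

The point is only directional: longer genomes demand lower error rates, so replication with too many errors stops being sustainable, which is the threshold Koonin describes.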

Dec 17th, Sat

Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers


A review of some big events

"Obviously one of the big events in our history was the origin of our planet, about 4.5 billion years ago. And what’s fascinating is that about 3.8 billion years ago, only about seven or eight hundred million years after the origin of our planet, life arose. That life was simple replicators, things that could make copies of themselves. And we think that life was a little bit like the bacteria we see on earth today. It would be the ancestors of the bacteria we see on earth today.

That life ruled the world for 2 billion years, and then about 1.5 billion years ago, a new kind of life emerged. These were the eukaryotic cells. They were a little bit different kind of cell from bacteria. And actually the kind of cells we are made of. And again, these organisms that were eukaryotes were single-celled, so even 1.5 billion years ago, we still just had single-celled organisms on earth. But it was a new kind of life.

It was another 500 million years before we had anything like a multicellular organism, and it was another 500 million years after that before we had anything really very interesting. So, about 500 million years ago, the plants and the animals started to evolve. And I think everybody would agree that this was a major event in the history of the world, because, for the first time, we had complex organisms.

After about 500 million years ago, things like the plants evolved, the fish evolved, lizards and snakes, dinosaurs, birds, and eventually mammals. And then it was really just six or seven million years ago, within the mammals, that the lineage that we now call the hominins arose. And they would be direct ancestors of us. And then, within that lineage that arose about six or seven million years ago, it was only about 200,000 years ago that humans finally evolved.

Idea of idea evolution

And so, this is really just 99.99 percent of the way through the history of this planet, humans finally arose. But in that 0.01 percent of life on earth, we’ve utterly changed the planet. And the reason is that, with the arrival of humans 200,000 years ago, a new kind of evolution was created. The old genetical evolution that had ruled for 3.8 billion years now had a competitor, and that new kind of evolution was ideas.

It was a true form of evolution, because now ideas could arise, and they could jump from mind to mind, without genes having to change. So, populations of humans could adapt at the level of ideas. Ideas could accumulate. We call this cumulative cultural adaptation. And so, cultural complexity could emerge and arise orders and orders of magnitude faster than genetic evolution.

Now, I think most of us take that utterly for granted, but it has completely rewritten the way life evolves on this planet because, with the arrival of our species, everything changed. Now, a single species, using its idea evolution, that could proceed apace independently of genes, was able to adapt to nearly every environment on earth, and spread around the world where no other species had done that. All other species are limited to places on earth that their genes adapt them to. But we were able to adapt at the level of our cultures to every place on earth. (…)

If we go back in our lineage 2 million years or so, there was a species known as Homo erectus. Homo erectus is an upright ape that lived on the African savannah. It could make tools, but they were very limited tools, and those tools, the archaeological record tells us, didn’t change for about 1.5 million years. That is, until about the time they went extinct. That is, they made the same tools over and over and over again, without any real changes to them.

If we move forward in time a little bit, it’s not even clear that our very close cousins that we know are related to us 99.5 or 99.6 percent in the sequences of their genes, the Neanderthals, it’s not even clear that they had what we call idea evolution. Sure enough, the tools that they made were more complex than our tools. But over the 300,000 or so years that they spent in Europe, their toolkit barely changed. So there’s very little evolution going on.

So there’s something really very special about this new species, humans, that arose and invented this new kind of evolution, based on ideas. And so it’s useful for us to ask, what is it about humans that distinguishes them? It must have been a tiny genetic difference between us and the Neanderthals because, as I said, we’re so closely related to them genetically, a tiny genetic difference that had a vast cultural potential.

That difference is something that anthropologists and archaeologists call social learning. It’s a very difficult concept to define, but when we talk about it, all of us humans know what it means. And it seems to be the case that only humans have the capacity to learn complex new or novel behaviors, simply by watching and imitating others. And there seems to be a second component to it, which is that we seem to be able to get inside the minds of other people who are doing things in front of us, and understand why it is they’re doing those things. These two things together, we call social learning.

Many people respond that, oh, of course the other animals can do social learning, because we know that the chimpanzees can imitate each other, and we see all sorts of learning in animals like dolphins and the other monkeys, and so on. But the key point about social learning is that this minor difference between us and the other species forms an unbridgeable gap between us and them. Because, whereas all of the other animals can pick up the odd behavior by having their attention called to something, only humans seem to be able to select, among a range of alternatives, the best one, and then to build on that alternative, and to adapt it, and to improve upon it. And so, our cultures cumulatively adapt, whereas all other animals seem to do the same thing over and over and over again.

Even though other animals can learn, and they can even learn in social situations, only humans seem to be able to put these things together and do real social learning. And that has led to this idea evolution. What’s a tiny difference between us genetically has opened up an unbridgeable gap, because only humans have been able to achieve this cumulative cultural adaptation. (…)

I’m interested in this because I think this capacity for social learning, which we associate with our intelligence, has actually sculpted us in ways that we would have never anticipated. And I want to talk about two of those ways that I think it has sculpted us. One of the ways has to do with our creativity, and the other has to do with the nature of our intelligence as social animals.

One of the first things to be aware of when talking about social learning is that it plays the same role within our societies, acting on ideas, as natural selection plays within populations of genes. Natural selection is a way of sorting among a range of genetic alternatives, and finding the best one. Social learning is a way of sifting among a range of alternative options or ideas, and choosing the best one of those. And so, we see a direct comparison between social learning driving idea evolution, by selecting the best ideas —we copy people that we think are successful, we copy good ideas, and we try to improve upon them — and natural selection, driving genetic evolution within societies, or within populations.

I think this analogy needs to be taken very seriously, because just as natural selection has acted on genetic populations, and sculpted them, we’ll see how social learning has acted on human populations and sculpted them.

What do I mean by “sculpted them”? Well, I mean that it’s changed the way we are. And here’s one reason why. If we think that humans have evolved as social learners, we might be surprised to find out that being social learners has made us less intelligent than we might like to think we are. And here’s the reason why.

If I’m living in a population of people, and I can observe those people, and see what they’re doing, seeing what innovations they’re coming up with, I can choose among the best of those ideas, without having to go through the process of innovation myself. So, for example, if I’m trying to make a better spear, I really have no idea how to make that better spear. But if I notice that somebody else in my society has made a very good spear, I can simply copy him without having to understand why.

What this means is that social learning may have set up a situation in humans where, over the last 200,000 years or so, we have been selected to be very, very good at copying other people, rather than innovating on our own. We like to think we’re a highly inventive, innovative species. But social learning means that most of us can make use of what other people do, and not have to invest the time and energy in innovation ourselves.

Now, why wouldn’t we want to do that? Why wouldn’t we want to innovate on our own? Well, innovation is difficult. It takes time. It takes energy. Most of the things we try to do, we get wrong. And so, if we can survey, if we can sift among a range of alternatives of people in our population, and choose the best one that’s going at any particular moment, we don’t have to pay the costs of innovation, the time and energy ourselves. And so, we may have had strong selection in our past to be followers, to be copiers, rather than innovators.

This gives us a whole new slant on what it means to be human, and I think, in many ways, it might fit with some things that we realize are true about ourselves when we really look inside ourselves. We can all think of things that have made a difference in the history of life. The first hand axe, the first spear, the first bow and arrow, and so on. And we can ask ourselves, how many of us have had an idea that would have changed humanity? And I think most of us would say, well, that sets the bar rather high. I haven’t had an idea that would change humanity. So let’s lower the bar a little bit and say, how many of us have had an idea that maybe just influenced others around us, something that others would want to copy? And I think even then, very few of us can say there have been very many things we’ve invented that others would want to copy.

This says to us that social evolution may have sculpted us not to be innovators and creators as much as to be copiers, because this extremely efficient process that social learning allows us to do, of sifting among a range of alternatives, means that most of us can get by drawing on the inventions of others.

The formation of social groups

Now, why do I talk about this? It sounds like it could be a somewhat dry subject, that maybe most of us are copiers or followers rather than innovators. And what we want to do is imagine that our history over the last 200,000 years has been a history of slowly and slowly and slowly living in larger and larger and larger groups.

Early on in our history, it’s thought that most of us lived in bands of maybe five to 25 people, and that bands formed bands of bands that we might call tribes. And maybe tribes were 150 people or so. And then tribes gave way to chiefdoms that might have been thousands of people. And chiefdoms eventually gave way to nation-states that might have been tens of thousands or even hundreds of thousands, or millions, of people. And so, our evolutionary history has been one of living in larger and larger and larger social groups.

What I want to suggest is that that evolutionary history will have selected for less and less and less innovation in individuals, because a little bit of innovation goes a long way. If we imagine that there’s some small probability that someone is a creator or an innovator, and the rest of us are followers, we can see that one or two people in a band is enough for the rest of us to copy, and so we can get on fine. And, because social learning is so efficient and so rapid, we don’t need all to be innovators. We can copy the best innovations, and all of us benefit from those.

But now let’s move to a slightly larger social group. Do we need more innovators in a larger social group? Well, no. The answer is, we probably don’t. We probably don’t need as many as we need in a band. Because in a small band, we need a few innovators to get by. We have to have enough new ideas coming along. But in a larger group, a small number of people will do. We don’t have to scale it up. We don’t have to have 50 innovators where we had five in the band, if we move up to a tribe. We can still get by with those three or four or five innovators, because all of us in that larger social group can take advantage of their innovations.

Language is the way we exchange ideas

And here we can see a very prominent role for language. Language is the way we exchange ideas. And our eyes allow us to see innovations and language allows us to exchange ideas. And language can operate in a larger society, just as efficiently as it can operate in a small society. It can jump across that society in an instant.

You can see where I’m going. As our societies get larger and larger, there’s no need, in fact, there’s even less of a need for any one of us to be an innovator, whereas there is a great advantage for most of us to be copiers, or followers. And so, a real worry is that our capacity for social learning, which is responsible for all of our cumulative cultural adaptation, all of the things we see around us in our everyday lives, has actually promoted a species that isn’t so good at innovation. It allows us to reflect on ourselves a little bit and say, maybe we’re not as creative and as imaginative and as innovative as we thought we were, but extraordinarily good at copying and following.

If we apply this to our everyday lives and we ask ourselves, do we know the answers to the most important questions in our lives? Should you buy a particular house? What mortgage product should you have? Should you buy a particular car? Who should you marry? What sort of job should you take? What kind of activities should you do? What kind of holidays should you take? We don’t know the answers to most of those things. And if we really were the deeply intelligent and imaginative and innovative species that we thought we were, we might know the answers to those things.

And if we ask ourselves how it is we come across the answers, or acquire the answers to many of those questions, most of us realize that we do what everybody else is doing. This herd instinct, I think, might be an extremely fundamental part of our psychology that was perhaps an unexpected and unintended, you might say, byproduct of our capacity for social learning, that we’re very, very good at being followers rather than leaders. A small number of leaders or innovators or creative people is enough for our societies to get by.

Now, the reason this might be interesting is that, as the world becomes more and more connected, as the Internet connects us and wires us all up, we can see that the long-term consequences of this is that humanity is moving in a direction where we need fewer and fewer and fewer innovative people, because now an innovation that you have somewhere on one corner of the earth can instantly travel to another corner of the earth, in a way that it would have never been possible to do 10 years ago, 50 years ago, 500 years ago, and so on. And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we’re going to be able to survive the vast numbers of people on this earth.

That’s one consequence of social learning, that it has sculpted us to be very shrewd and intelligent at copying, but perhaps less shrewd at innovation and creativity than we’d like to think. Few of us are as creative as we’d like to think we are. I think that’s been one perhaps unexpected consequence of social learning.

Another side of social learning I’ve been thinking about (it’s a bit abstract, but I think it’s a fascinating one) goes back again to this analogy between natural selection, acting on genetic variation, and social learning, acting on variation in ideas. And any evolutionary process like that has to have both a sorting mechanism, natural selection, and what you might call a generative mechanism, a mechanism that can create variety.

We all know what that mechanism is in genes. We call it mutation, and we know that from parents to offspring, genes can change, genes can mutate. And that creates the variety that natural selection acts on. And one of the most remarkable stories of nature is that natural selection, acting on this mindlessly-generated genetic variation, is able to find the best solution among many, and successively add those solutions, one on top of the other. And through this extraordinarily simple and mindless process, create things of unimaginable complexity. Things like our cells, eyes and brains and hearts, and livers, and so on. Things of unimaginable complexity, that we don’t even understand and none of us could design. But they were designed by natural selection.
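
To make that "mindless variation plus selection" logic concrete, here is a toy sketch in Python (my own illustration, not anything from Pagel's talk): blind one-bit mutations on a bit string, kept only when a simple fitness score does not get worse. The target string, the fitness function and the parameter values are all invented for the example.

    import random

    random.seed(0)
    TARGET = [1] * 40          # an arbitrary "best design" the process itself knows nothing about

    def fitness(genome):
        # count how many positions already match the target
        return sum(g == t for g, t in zip(genome, TARGET))

    genome = [random.randint(0, 1) for _ in TARGET]    # start from pure noise
    for _ in range(2000):
        mutant = genome[:]
        mutant[random.randrange(len(mutant))] ^= 1     # one blind, random mutation
        if fitness(mutant) >= fitness(genome):         # selection: keep it only if it is no worse
            genome = mutant

    print(fitness(genome), "of", len(TARGET))          # reaches the optimum with no foresight at all

No designer ever looks at the whole string; selection only ever compares two variants, yet the lineage ends up fully adapted.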

Where do ideas come from?

Now let’s take this analogy of a mindless process (there’s a parallel between social learning driving evolution at the idea level and natural selection driving evolution at the genetic level) and ask what it means for the generative mechanism in our brains.

Well, where do ideas come from? For social learning to be a sorting process that has varieties to act on, we have to have a variety of ideas. And where do those new ideas come from?

The idea that I’ve been thinking about, that I think is worth contemplating about our own minds is what is the generative mechanism? If we do have any creativity at all and we are innovative in some ways, what’s the nature of that generative mechanism for creating new ideas?

This is a question that’s been asked for decades. What is the nature of the creative process? Where do ideas come from? And let’s go back to genetic evolution and remember that, there, the generative mechanism is random mutation.

Now, what do we think the generative mechanism is for idea evolution? Do we think it’s random mutation of some sort, of ideas? Well, all of us think that it’s better than that. All of us think that somehow we can come up with good ideas in our minds. And whereas natural selection has to act on random variation, social learning must be acting on directed variation. We know what direction we’re going.

But, we can go back to our earlier discussion of social learning, and ask the question, well, if you were designing a new hand axe, or a new spear, or a new bow and a new arrow, would you really know how to make a spear fly better? Would you really know how to make a bow a better bow? Would you really know how to shape an arrowhead so that it penetrated its prey better? And I think most of us realize that we probably don’t know the answers to those questions. And that suggests to us that maybe our own creative process rests on a generative mechanism that isn’t very much better than random itself.

And I want to go further, and suggest that our mechanism for generating ideas maybe couldn’t even be much better than random itself. And this really gives us a different view of ourselves as intelligent organisms. Rather than thinking that we know the answers to everything, could it be the case that the mechanism that our brain uses for coming up with new ideas is a little bit like the mechanism that our genes use for coming up with new genetic variants, which is to randomly mutate ideas that we have, or to randomly mutate genes that we have.

Now, it sounds incredible. It sounds insane. It sounds mad. Because we think of ourselves as so intelligent. But when we really ask ourselves about the nature of any evolutionary process, we have to ask ourselves whether it could be any better than random, because in fact, random might be the best strategy.

Genes could never possibly know how to mutate themselves, because they could never anticipate the direction the world was going. No gene knows that we’re having global warming at the moment. No gene knew 200,000 years ago that humans were going to evolve culture. Well, the best strategy for any exploratory mechanism, when we don’t know the nature of the processes we’re exploring, is to throw out random attempts at understanding that field or that space we’re trying to explore.

And I want to suggest that the creative process inside our brains, which relies on social learning, that creative process itself never could have possibly anticipated where we were going as human beings. It couldn’t have anticipated 200,000 years ago that, you know, a mere 200,000 years later, we’d have space shuttles and iPods and microwave ovens.

What I want to suggest is that any process of evolution that relies on exploring an unknown space, such as genes or such as our neurons exploring the unknown space in our brains, and trying to create connections in our brains, and such as our brain’s trying to come up with new ideas that explore the space of alternatives that will lead us to what we call creativity in our social world, might be very close to random.

We know they’re random in the genetic case. We think they’re random in the case of neurons exploring connections in our brain. And I want to suggest that our own creative process might be pretty close to random itself. And that our brains might be whirring around at a subconscious level, creating ideas over and over and over again, and part of our subconscious mind is testing those ideas. And the ones that leak into our consciousness might feel like they’re well-formed, but they might have sorted through literally a random array of ideas before they got to our consciousness.

Karl Popper famously said the way we differ from other animals is that our hypotheses die in our stead; rather than going out and actually having to try out things, and maybe dying as a result, we can test out ideas in our minds. But what I want to suggest is that the generative process itself might be pretty close to random.

Putting these two things together has lots of implications for where we’re going as societies. As I say, as our societies get bigger, and rely more and more on the Internet, fewer and fewer of us have to be very good at these creative and imaginative processes. And so, humanity might be moving towards becoming more docile, more oriented towards following, copying others, prone to fads, prone to going down blind alleys, because part of our evolutionary history that we could have never anticipated was leading us towards making use of the small number of other innovations that people come up with, rather than having to produce them ourselves.

The interesting thing with Facebook is that, with 500 to 800 million of us connected around the world, it sort of devalues information and devalues knowledge. And this isn’t the comment of some reactionary who doesn’t like Facebook, but it’s rather the comment of someone who realizes that knowledge and new ideas are extraordinarily hard to come by. And as we’re more and more connected to each other, there’s more and more to copy. We realize the value in copying, and so that’s what we do.

And we seek out that information in cheaper and cheaper ways. We go up on Google, we go up on Facebook, see who’s doing what to whom. We go up on Google and find out the answers to things. And what that’s telling us is that knowledge and new ideas are cheap. And it’s playing into a set of predispositions that we have been selected to have anyway, to be copiers and to be followers. But at no time in history has it been easier to do that than now. And Facebook is encouraging that.

And then, as corporations grow (and we can see corporations as sort of microcosms of societies) and acquire the ability to acquire other corporations, a similar thing is happening: rather than wanting to spend the time and the energy to create new ideas, corporations simply want to acquire other companies, so that they can have their new ideas. And that just tells us again how precious these ideas are, and the lengths to which people will go to acquire them.

A tiny number of ideas can go a long way, as we’ve seen. And the Internet makes that more and more likely. What’s happening is that we might, in fact, be at a time in our history where we’re being domesticated by these great big societal things, such as Facebook and the Internet. We’re being domesticated by them, because fewer and fewer and fewer of us have to be innovators to get by. And so, in the cold calculus of evolution by natural selection, copiers are probably doing better now, relative to innovators, than at any time in history. Because innovation is extraordinarily hard. My worry is that we could be moving in that direction, towards becoming more and more docile copiers.

But, these ideas, I think, are received with incredulity, because humans like to think of themselves as highly shrewd and intelligent and innovative people. But I think what we have to realize is that it’s even possible that, as I say, the generative mechanisms we have for coming up with new ideas are no better than random.

And a really fascinating idea itself is to consider that even the great people in history whom we associate with great ideas might be no more than we expect by chance. I’ll explain that. Einstein was once asked about his intelligence and he said, “I’m no more intelligent than the next guy. I’m just more curious.” Now, we can grant Einstein that little indulgence, because we think he was a pretty clever guy.

What does curiosity mean?

But let’s take him at his word and say, what does curiosity mean? Well, maybe curiosity means trying out all sorts of ideas in your mind. Maybe curiosity is a passion for trying out ideas. Maybe Einstein’s ideas were just as random as everybody else’s, but he kept persisting at them.

And if we say that everybody has some tiny probability of being the next Einstein, and we look at a billion people, there will be somebody who just by chance is the next Einstein. And so, we might even wonder if the people in our history and in our lives that we say are the great innovators really are more innovative, or are just lucky.
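
Pagel's "somebody will be the next Einstein just by chance" point is ordinary rare-event arithmetic: if each person independently has some tiny probability p of qualifying, the chance that at least one of N people does is 1 - (1 - p)^N. A minimal check in Python, with a purely hypothetical value of p chosen only for illustration:

    # probability that at least one of N people is "the next Einstein" by chance alone
    p = 1e-8            # hypothetical: a one-in-a-hundred-million chance per person
    N = 1_000_000_000   # a billion people

    p_at_least_one = 1 - (1 - p) ** N
    print(round(p_at_least_one, 4))   # ~0.99995: near-certain that someone qualifies purely by chance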

Now, the evolutionary argument is that our populations have always supported a small number of truly innovative people, and they’re somehow different from the rest of us. But it might even be the case that that small number of innovators just got lucky. And this is something that I think very few people will accept. They’ll receive it with incredulity. But I like to think of it as what I call social learning and, maybe, the possibility that we are infinitely stupid.”

Mark Pagel, Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute, Infinite Stupidity, Edge, Dec 16, 2011 (Illustration by John S. Dykes)

See also:

☞ Mark Pagel: How language transformed humanity



Biologist Mark Pagel shares an intriguing theory about why humans evolved our complex system of language. He suggests that language is a piece of “social technology” that allowed early human tribes to access a powerful new tool: cooperation. Mark Pagel: How language transformed humanity, TED.com, July 2011

The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle ‘multiples’’
Neal Gabler on The Elusive Big Idea - ‘We are living in a post ideas world where bold ideas are almost passé’

Nov
24th
Thu
permalink

Are You Totally Improbable Or Totally Inevitable?


"If we have never been amazed by the very fact that we exist, we are squandering the greatest fact of all."

Will Durant, American writer, historian, and philosopher (1885-1981)

"Not only have you been lucky enough to be attached since time immemorial to a favored evolutionary line, but you have also been extremely — make that miraculously — fortunate in your personal ancestry. Consider the fact that for 3.8 billion years, a period of time older than the Earth’s mountains and rivers and oceans, every one of your forebears on both sides has been attractive enough to find a mate, healthy enough to reproduce, and sufficiently blessed by fate and circumstances to live long enough to do so. Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stuck fast, untimely wounded or otherwise deflected from its life’s quest of delivering a tiny charge of genetic material to the right partner at the right moment to perpetuate the only possible sequence of hereditary combinations that could result — eventually, astoundingly, and all too briefly — in you. (…)

The number of people on whose cooperative efforts your eventual existence depends has risen to approximately 1,000,000,000,000,000,000, which is several thousand times the total number of people who have ever lived. (…)

We are awfully lucky to be here – and by ‘we’ I mean every living thing. To attain any kind of life in this universe of ours appears to be quite an achievement. As humans we are doubly lucky, of course: We enjoy not only the privilege of existence but also the singular ability to appreciate it and even, in a multitude of ways, to make it better. It is a talent we have only barely begun to grasp.”

Bill Bryson, A Short History of Nearly Everything, Black Swan, 2003

“Statistically, the probability of any one of us being here is so small that you’d think the mere fact of existing would keep us all in a contented dazzlement of surprise.”

Lewis Thomas, The Lives of a Cell, Bantam Books, 1984, p. 165.

“Life is one huge lottery where only the winning tickets are visible.”

Jostein Gaarder, The Orange Girl, Orion Publishing, 2004

“’We are the lucky ones for we shall die’, as there is an infinite number of possible forms of DNA all but a few billions of which will never burst into consciousness.”

Frank Close, a noted particle physicist who is currently Professor of Physics at the University of Oxford, The Void, Oxford University Press

"What are the odds that you exist, as you, today? Author Dr Ali Binazir attemps to quantify the probability that you came about and exist as you today, and reveals that the odds of you existing are almost zero.

Think about yourself.
You are here because…
Your dad met your mom.
Then your dad and mom conceived you.
So a particular egg in your mom
Joined a particular sperm from your dad
Which could only happen because not one of your direct ancestors, going all the way back to the beginning of life itself, died before passing on his or her genes…
So what are the chances of you happening?
Of you being here?

Author Ali Binazir did the calculations last spring and decided that the chances of anyone existing are one in 10^2,685,000. In other words (…) you are totally improbable.

— Robert Krulwich, Are You Totally Improbable Or Totally Inevitable?, NPR, Nov 21, 2011

"First, let’s talk about the probability of your parents meeting.  If they met one new person of the opposite sex every day from age 15 to 40, that would be about 10,000 people. Let’s confine the pool of possible people they could meet to 1/10 of the world’s population twenty years go (one tenth of 4 billion = 400 million) so it considers not just the population of the US but that of the places they could have visited. Half of those people, or 200 million, will be of the opposite sex.  So let’s say the probability of your parents meeting, ever, is 10,000 divided by 200 million:

10^4 / (2×10^8) = 5×10^-5, or one in 20,000.

Probability of boy meeting girl: 1 in 20,000.

So far, so unlikely.

Now let’s say the chances of them actually talking to one another is one in 10.  And the chances of that turning into another meeting is about one in 10 also.  And the chances of that turning into a long-term relationship is also one in 10.  And the chances of that lasting long enough to result in offspring is one in 2.  So the probability of your parents’ chance meeting resulting in kids is about 1 in 2000.

Probability of same boy knocking up same girl: 1 in 2000.

So the combined probability is already around 1 in 40 million — long but not insurmountable odds.  Now things start getting interesting.  Why?  Because we’re about to deal with eggs and sperm, which come in large numbers.

Each sperm and each egg is genetically unique because of the process of meiosis; you are the result of the fusion of one particular egg with one particular sperm.  A fertile woman has 100,000 viable eggs on average.  A man will produce about 12 trillion sperm over the course of his reproductive lifetime.  Let’s say a third of those (4 trillion) are relevant to our calculation, since the sperm created after your mom hits menopause don’t count.  So the probability of that one sperm with half your name on it hitting that one egg with the other half of your name on it is

1 / ((100,000) × (4 trillion)) = 1 / ((10^5) × (4×10^12)) = 1 in 4×10^17, or one in 400 quadrillion.

Probability of right sperm meeting right egg: 1 in 400 quadrillion.

But we’re just getting started.

Because the existence of you here now on planet earth presupposes another supremely unlikely and utterly undeniable chain of events.  Namely, that every one of your ancestors lived to reproductive age – going all the way back not just to the first Homo sapiens, first Homo erectus and Homo habilis, but all the way back to the first single-celled organism.  You are a representative of an unbroken lineage of life going back 4 billion years.

Let’s not get carried away here; we’ll just deal with the human lineage.  Say humans or humanoids have been around for about 3 million years, and that a generation is about 20 years.  That’s 150,000 generations.  Say that over the course of all human existence, the likelihood of any one human offspring to survive childhood and live to reproductive age and have at least one kid is 50:50 – 1 in 2. Then what would be the chance of your particular lineage to have remained unbroken for 150,000 generations?

Well then, that would be one in 2^150,000, which is about 1 in 10^45,000 – a number so staggeringly large that my head hurts just writing it down. That number is not just larger than all of the particles in the universe – it’s larger than all the particles in the universe if each particle were itself a universe.

Probability of every one of your ancestors reproducing successfully: 1 in 10^45,000

But let’s think about this some more.  Remember the sperm-meeting-egg argument for the creation of you, since each gamete is unique?  Well, the right sperm also had to meet the right egg to create your grandparents.  Otherwise they’d be different people, and so would their children, who would then have had children who were similar to you but not quite you.  This is also true of your grandparents’ parents, and their grandparents, and so on till the beginning of time.  If even once the wrong sperm met the wrong egg, you would not be sitting here noodling online reading fascinating articles like this one.  It would be your cousin Jethro, and you never really liked him anyway.

That means in every step of your lineage, the probability of the right sperm meeting the right egg such that the exact right ancestor would be created that would end up creating you is one in 1200 trillion, which we’ll round down to 1000 trillion, or one quadrillion.

So now we must account for that for 150,000 generations by raising 400 quadrillion to the 150,000th power:

[4×10^17]^150,000 ≈ 10^2,640,000

That’s a one followed by 2,640,000 zeroes, which would fill 11 volumes of a book the size of The Tao of Dating with zeroes.

To get the final answer, technically we need to multiply that by the 10^45,000, 2,000 and 20,000 up there, but those numbers are so shrimpy in comparison that it almost doesn’t matter.  For the sake of completeness:

(10^2,640,000)(10^45,000)(2,000)(20,000) = 4×10^2,685,007 ≈ 10^2,685,000

Probability of your existing at all: 1 in 10^2,685,000

As a comparison, the number of atoms in the body of an average male (80kg, 175 lb) is 10^27.  The number of atoms making up the earth is about 10^50. The number of atoms in the known universe is estimated at 10^80.

So what’s the probability of your existing?  It’s the probability of 2 million people getting together – about the population of San Diego – each to play a game of dice with trillion-sided dice. They each roll the dice, and they all come up the exact same number – say, 550,343,279,001.”



— Ali Binazir, What are the chances of your coming into being?, June 15, 2011
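
Numbers like 10^2,685,000 overflow ordinary floating point, so the natural way to check Binazir's bookkeeping is to add base-10 exponents rather than multiply the odds directly. A rough sketch of that check in Python, using the step values quoted above:

    import math

    log10 = math.log10

    # each factor is "1 in X"; work with log10(X) so nothing overflows
    steps = {
        "parents ever meeting":                        log10(20_000),
        "that meeting producing a child":              log10(2_000),
        "unbroken survival over 150,000 generations":  150_000 * log10(2),     # 2**150,000
        "right sperm met right egg, every generation": 150_000 * log10(4e17),  # (4e17)**150,000
    }

    total = sum(steps.values())
    print(f"odds against your existence: about 1 in 10**{total:,.0f}")
    # prints roughly 10**2,685,000, matching the order of magnitude quoted above
    # (the small differences come from the rounding of the intermediate steps)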

A lovely comment by PZ Myers, a biologist and associate professor at the University of Minnesota:

"You are a contingent product of many chance events, but so what? So is everything else in the universe. That number doesn’t make you any more special than a grain of sand on a beach, which also arrived at its precise shape, composition, and location by a series of chance events. (…)

You are one of 7 billion people, occupying an insignificant fraction of the volume of the universe, and you aren’t a numerical miracle at all — you’re actually rather negligible.”

— PZ Myers, A very silly calculation, Pharyngula, Nov 14, 2011

'Life is one huge lottery where only the winning tickets are visible'

   “Thirteen forty-nine,” was the first thing [he] said.
   “The Black Death,” I replied. I had a pretty good knowledge of history, but I had no idea what the Black Death had to do with coincidences.
   ”Okay,” he said, and off he went. “You probably know that half Norway’s population was wiped out during that great plague. But there’s a connection here I haven’t told you about. Did you know that you had thousands of ancestors at that time?” he continued.
   I shook my head in despair. How could that possibly be?
   ”You have two parents, four grandparents, eight great-grandparents, sixteen great-great grandparents — and so on. If you work it out, right back to 1349 — there are quite a lot.
  “Then came the bubonic plague. Death spread from neighborhood to neighborhood, and the children were hit worst. Whole families died, sometimes one or two family members survived. A lot of your ancestors were children at this time, Hans Thomas. But none of them kicked the bucket.”
  “How can you be so sure about that?” I asked in amazement.
   He took a long drag on his cigarette and said, “Because you’re sitting here  looking out over the Adriatic.

  “The chances of not a single ancestor of yours dying while growing up are one in several billion. Because it isn’t just about the Black Death, you know. Actually all of your ancestors have grown up and had children — even during the worst natural disasters, even when the child mortality rate was enormous. Of course, a lot of them have suffered from illness, but they’ve always pulled through. In a way, you have been a millimeter from death billions of times, Hans Thomas.

Your life on this planet has been threatened by insects, wild animals, meteorites, lightning, sickness, war, floods, fires, poisoning, and attempted murders. In the battle of Stiklestad alone you were injured hundreds of times. Because you must have had ancestors on both sides — yes, really you were fighting against yourself and your chances of being born a thousand years later. You know, the same goes for the last world war. If Grandpa had been shot by good Norwegians during the occupation, then neither you nor I would have been born. The point is, this happened billions of times through history. Each time an arrow rained through the air, your chances of being born have been reduced to the minimum.”

   He continued: “I am talking about one long chain of coincidences. In fact, that chain goes right back to the first living cell, which divided in two, and from there gave birth to everything growing and sprouting on this planet today. The chance of my chain not being broken at one time or another during three or four billion years is so small it is almost inconceivable. But I have pulled through, you know. Damned right, I have. In return, I appreciate how fantastically lucky I am to be able to experience this planet together with you. I realize how lucky every single little crawling insect on this planet is.”

   "What about the unlucky ones?" I asked.
   ”They don’t exist! They were never born. Life is one huge lottery where only the winning tickets are visible.”

Jostein Gaarder, The Orange Girl, Orion Publishing, 2004.

(Illustration source)

See also:

☞ Richard Dawkins, Unweaving the Rainbow, Lapidarium notes

Sep
8th
Thu
permalink

Curiosity as a mechanism for achieving and maintaining high levels of well-being and meaning in life

"A primary interest [of this study] was whether people high in trait curiosity derive greater well-being on days when they are more curious. We also tested whether trait and daily curiosity led to greater, sustainable well-being. Predictions were tested using trait measures and 21 daily diary reports from 97 college students.

We found that on days when they are more curious, people high in trait curiosity reported more frequent growth-oriented behaviors, and greater presence of meaning, search for meaning, and life satisfaction. Greater trait curiosity and greater curiosity on a given day also predicted greater persistence of meaning in life from one day into the next. People with greater trait curiosity reported more frequent hedonistic events but they were associated with less pleasure compared to the experiences of people with less trait curiosity.

The benefits of hedonistic events did not last beyond the day of their occurrence. As evidence of construct specificity, curiosity effects were not attributable to Big Five personality traits or daily positive or negative mood. Our results provide support for curiosity as an ingredient in the development of well-being and meaning in life.”

Todd B. Kashdan, Ph.D., is Associate Professor of Psychology at George Mason University, and Michael F. Steger, Assistant Professor in the Counseling Psychology and Applied Social Psychology programs at Colorado State University, Curiosity and pathways to well-being and meaning in life: Traits, states, and everyday behaviors, SpringerLink, Volume 31, Number 3, 159-173

See also:

☞ T. B. Kashdan, P. Rose, and F. D. Fincham, Curiosity and Exploration: Facilitating Positive Subjective Experiences and Personal Growth Opportunities (pdf), Department of Psychology State University of New York at Buffalo, 2004
Jonah Lehrer on the itch of Curiosity
Our brains are hardwired to fear creativity, Cornell University
Curiosity tag on Lapidarium

Jul
16th
Sat
permalink

Keri Smith on ‘How To Be An Explorer of the World’



"Artists and scientists analyze the world in surprisingly similar ways."

Science – “The intellectual and practical activity encompassing the systematic study of structure and behavior of the physical and natural world through observation and experiment.”

— Oxford American Dictionary, cited in ibidem, p. 199.


(illustration: How To Be An Explorer of the World, p. 1)

"[The residual purpose of art is] purposeless play. This play, however, is an affirmation of life - not an attempt to bring order out of chaos nor to suggest improvements in creation, but simply a way of waking up to the very life we’re living, which is so excellent once one gets one’s mind and one’s desires out of its way and lets it act of its own accord.”

John Cage, cited in ibidem, p. 104


(illustration: How To Be An Explorer of the World, p. 116)


(illustration: How To Be An Explorer of the World, p. 75)

“Sometimes a tree can tell you more than can be read in a book.”

Carl Jung cited in ibidem, p. 138.

Keri Smith, author, illustrator, guerilla artist, How To Be An Explorer of the World: Portable Life Museum, Penguin Books, 2008.

Jun
6th
Mon
permalink

Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster


To what extent can biology and social organization (which are both quintessential complex adaptive systems) be put in a more quantitative, analytic, mathematizable, predictive framework so that we can understand them in the way that we understand “simple physical systems”?

It is very clear from the beginning that we will never have a theory of biological and social systems that is like physics — that is, something so precise that we can predict, for example, the motion of the planets with great precision or the magnetic moment of the electron to 12 decimal places. Nothing approaching that can possibly be in these other sciences, because they are complex systems.

Nevertheless, that doesn’t mean that you couldn’t have a quantitative theory. It would simply mean that you would possibly have a theory that is coarse-grained, meaning that you would be able to ask questions, big questions, and answer them in an average, idealized setting. (…)

Scaling phenomena

I started working some years ago on questions in biology. I started using the very powerful techniques developed in physics, and that have run through the history of physics, to think about scaling phenomena. The great thing about scaling is that if you observe scaling (that is, how the various characteristics of a system change when you change its size) and if you see regularity over several orders of magnitude, that typically means that there are underlying generic principles, that it is not an accident. If you see that in a system, it is opening a window onto some underlying, let’s use the word, “universal principle”.

The remarkable thing in biology that got me excited and has led to all of my present work (which has now gone beyond biology and into social organizations, cities, and companies) is that there was data, quite old and fundamental to all biological processes, about metabolism: here is maybe the most complex physical-chemical process in the universe, and when you ask how it scales with size across mammals (as an example, to keep it simple) you find that there is an extraordinary regularity.

This is surprising because we believe in natural selection, and natural selection has built into it this idea that history plays an important role. Every organism, every component of an organism, every cell type has its own environmental niche and its own unique history. So if you plotted, for example, the metabolic rate on the Y axis and size on the X axis, because of the extraordinary diversity and complexity of the system and the historical contingency, you would expect points all over the map, representing, of course, history and geography and so on.

Well, you find quite the contrary. You find a very simple curve, and that curve has a very simple mathematical formula. It comes out to be a very simple power law. In fact, the power law not only is simple in itself mathematically, but here it has an exponent that is extraordinarily simple. The exponent was very close to the number three quarters.

First of all, that was amazing in itself, that you see scaling. But more important was that the scaling is manifested across all of life, into ecosystems and down within cells. So this scaling law is truly remarkable. It goes from the intracellular up to ecosystems, almost 30 orders of magnitude. It is the same phenomenon. (…)

That is, it scales as a simple power law. The extraordinary thing about it is that the power law has an exponent, which is always a simple multiple of one quarter. What you determine just from the data is that there’s this extraordinarily simple number, four, which seems to dominate all biology and across all taxonomic groups from the microscopic to the macroscopic.
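
The practical signature of a power law Y = c × M^b is that it becomes a straight line of slope b when both axes are logarithmic, which is how exponents like three quarters are read off the data. A minimal sketch of that procedure in Python, using made-up synthetic "mammals" rather than the real measurements:

    import numpy as np

    rng = np.random.default_rng(1)

    # synthetic body masses spanning several orders of magnitude (kg)
    mass = np.logspace(-2, 4, 60)
    # metabolic rates built to follow B = c * M**(3/4), with some noise thrown in
    rate = 3.4 * mass ** 0.75 * rng.lognormal(0.0, 0.1, size=mass.size)

    # a straight-line fit in log-log space recovers the exponent as the slope
    slope, intercept = np.polyfit(np.log10(mass), np.log10(rate), 1)
    print(round(slope, 3))   # ~0.75, the quarter-power exponent the data were generated with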

This can hardly be an accident. If you see scaling, it is manifesting something that transcends history, geography, and therefore the evolved engineered structure of the organism because it applies to me, all mammals, and the trees sitting out there, even though we’re completely different designs.

The big question is where in the hell does that number come from? And what is it telling us about the structure of the biology?  And what is it telling us about the constraints under which evolution occurred? That was the beginning of all this.

Are cities and companies just extensions of biology?

I’ll say a few words about what we propose as the solution. But to jump ahead, the idea was that once we had that body of work understanding the origin of these scaling laws, we could take it over into social organizations. And so the question that drove the extension of this work was, “are cities and companies just extensions of biology?”

They came out of biology. That’s where they came from. But is New York just actually, in some ways, a great big whale? And is Microsoft a great big elephant? Metaphorically we use biological terms, for example the DNA of the company or the ecology of the marketplace. But are those just metaphors or is there some serious substance that we can quantify with those?

There are two things that are very important that come out of the biology of the scaling — its theoretical and conceptual framework.

One: Since the metabolic rate scales non-linearly with size — all of these things scale non-linearly with size — and they scale with exponents that are less than one, what that means is that the metabolic rate per cell is decreasing with size. My cells are working harder than my horse’s cells, but my dog’s cells are working even harder, in a systematic, predictive way.

What does that say? That says there’s an extraordinary economy of scale.

Just to give you an example, if you increase the size of an organism by a factor of ten to the fourth (four orders of magnitude), you would naively expect to need ten to the fourth times as much metabolic energy to sustain the ten thousand times more cells. Not true. You only need a thousand times as much. There’s an extraordinary saving in energy use, and that cuts across all resources as well.
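
That "ten thousand times bigger, only a thousand times the energy" figure is just the three-quarter-power exponent at work, as a one-line check shows:

    # metabolic rate scales roughly as mass**(3/4), so a 10**4-fold increase in size
    # needs only (10**4)**(3/4) = 10**3 times the energy budget
    factor_in_size = 10 ** 4
    factor_in_energy = factor_in_size ** 0.75
    print(round(factor_in_energy))   # 1000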

When we come to social organizations, there’s an interesting question. Do we have economies of scale or what? How do cities work, for example? How do companies work in this framework? That’s one thing.

The second thing (again, it comes from the data, and the conceptual framework explains it) is that the bigger you are, the slower everything is. The bigger you are, the longer you live. Oxygen diffuses more slowly across your various membranes. You take longer to mature, you grow more slowly, but all in a systematic, mathematizable, predictable way. The pace of life systematically slows down following these quarter-power scaling laws. And again, we’ll ask those questions about life … social life and economies.

The work I got involved in was to try to understand these scaling laws. And to make it a very short story, what was proposed apart from the thinking was, look, this is universal. It cuts across the design of organisms. Whether you are insects, fish, mammals or birds, you get the same scaling laws. It is independent of design. Therefore, it must be something that is about the structure of the way things are distributed.

You recognize what the problem is. You have 10^14 cells. You have this problem. You’ve got to sustain them, roughly speaking, democratically and efficiently. And however natural selection solved it, it solved it by evolving hierarchical networks.

There is a very simple way of doing it. You take something macroscopic, you go through a hierarchy and you deliver them to very microscopic sites, like for example, your capillaries to your cells and so on. And so the idea was, this is true at all scales. It is true of an ecosystem; it is true within the cell. And what these scaling laws are manifesting are the generic, universal, mathematical, topological properties of networks.

The question is, what are the principles that are governing these networks that are independent of design? After a lot of work we postulated the following, just to give an idea.

First, they have to be space filling. They have to go everywhere. They have to feed every cell, every piece of the organism.

Secondly, they have things like invariant units. That is when you evolve from a human being to a whale (to make it a simple story) you do not change the basic units. The cells of the whale or the capillaries of whale, which are the kind of fundamental units, are pretty much indistinguishable from yours and mine. There is this invariance. When you evolve to a new species, you use the same units but you change the network. That’s the idea in this picture.

And the last one is that, of the infinitude of networks that have these properties (space filling and invariant terminal units), the ones that have actually evolved by the process of continuous feedback implicit in natural selection are those that have in some way optimized the system.

For example, the amount of work that your heart has to do to pump blood around your circulatory system to keep you alive is minimized with respect to the design of the system. You can put it into mathematics. You have a network theory, you mathematize the network, and then you make variations of the network and ask what is the one that minimizes the amount of energy your heart has to use to pump blood through it.

The principle is simple. Mathematically, it is quite complicated and challenging, but you can solve all of that. And you do that so that you can maximize the amount of energy you can put into fitness to make children. You want to minimize the amount of energy just to keep you alive, so that you can make more babies. That’s the simplest big picture.

All of those results about scaling are derived. A quarter, four, emerges. And what is the four?  It turns out the four isn’t a four. The four is actually a “three plus one”, meaning it’s the dimensionality of the space we live in plus one, which has to do, loosely speaking, with the fractal nature of these networks, the fact that there’s a self-similar property.

In D dimensions, it would read D plus one (that’s my physicist self speaking). Instead of being three quarters, the exponent for metabolic rate would be D over D plus one.

Life in some funny way is actually five dimensional. It’s three space, one time, and one kind of fractal. That’s five. So we’re kind of five dimensional creatures in some curious way, mathematically.

This network theory was used to predict all kinds of things. You can answer questions like why is it we sleep eight hours. Why does a mouse have to sleep 15 hours?  And why does an elephant only have to sleep four and a whale two?  Well, we can answer that. Why do we evolve at the rate we do?  How does cancer work in terms of vasculature and its necrosis? And so on.

A whole bunch of questions can follow from this. One of the most important is growth. Understanding growth. How do we grow?  And why do we stop growing, for example?  Well, we can answer that. The theory answers that. And it’s quite powerful, and it explains why it is we have this so-called sigmoidal growth where you grow quickly and then you stop. And it explains why that is and it predicts when you stop, and it predicts the shape of that curve for an animal.
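
West's published model of ontogenetic growth has essentially this form: energy supply scales as mass to the three quarters, maintenance scales with mass, and growth stops where the two balance. The sketch below integrates an equation of that shape numerically; the parameter values are arbitrary, chosen only to draw the sigmoidal curve the talk refers to.

    # growth of body mass m under dm/dt = a * m**(3/4) * (1 - (m/M)**(1/4)),
    # where M is the asymptotic adult mass; the values of a, M and the starting mass are made up
    a, M = 1.0, 100.0
    m, dt = 0.1, 0.01
    trajectory = []
    for _ in range(20_000):
        m += dt * a * m ** 0.75 * (1 - (m / M) ** 0.25)
        trajectory.append(m)

    print(round(trajectory[-1], 1))   # ~100.0: rapid early growth that flattens out at M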

Here is this wonderful body of work that explains many things — some fundamental, some to do with very practical problems like understanding sleep, aging. The question is, can we take that over to other kinds of network systems. One of the obvious types of systems is a city. Another obvious one is a company. The first question you have to ask is, okay, this was based on the observation of scaling. Scaling was the window. It’s interesting of itself, but actually, it’s more interesting as a revelatory tool to open onto fundamental principles.

What did we learn from scaling in biology? We not only learned the network theory, but we learned that despite the fact that the whale lives in the ocean, the giraffe has a long neck, and the elephant a trunk, and we walk on two feet and the mouse scurries around, at some 85, 90 percent level, we’re all scaled versions of one another.

There’s kind of one mammal, and every other mammal, no matter what size it is and where it existed, is actually some well-defined mathematically scaled version of that one master mammal, so to speak. And that is kind of amazing.

In other words, the size of a mammal, or any organism for that matter, can tell you how long it should live, how many children it should have, how oxygen diffuses across its lungs, what is the length of the ninth branch of its circulatory system, how its blood is flowing, how quickly it will grow, et cetera.

A provocative question is, is New York just a scaled-up San Francisco, which is a scaled-up Santa Fe?  Superficially, that seems unlikely because they look so different, especially Santa Fe. I live in Santa Fe and it’s a bunch of dopey buildings, and here I am in New York overwhelmed by huge skyscrapers. On the other hand, a whale doesn’t look much like a giraffe. But in fact, they’re scaled versions of one another, at this kind of coarse-grained 85, 90 percent level.

Of course, you can’t answer this question just by sitting in an armchair. You have to go out and get the data and ask, “If I look at various metrics describing a city, do they scale in some simple way?”

Is there one line, so to speak, upon which all of them sit? Or when I look at all these metrics and I plot them, do I just see a random mess, which says that each city is unique and dominated by its geography and its history? In which case there’s not much you can do, and you’ve got to attack and think about cities as individuals.

I got into this work, because first of all, I believe it’s a truly challenging, fundamental, science problem.

I think this is very much a science of the 21st century, because it is the kind of problem that scientists have ignored. It falls under the umbrella of complex adaptive systems, and we need to come to terms with understanding the structure, dynamics, and organization of such systems, because they’re the ones that determine our lives and the extraordinary phenomenon that we have developed on this planet.

Can we understand them as scientists? The prevailing ways of investigating them are the social sciences and economics — which have primarily less to do with generic principles and more to do with case studies and narrative (which is, of course, very important). But the question is, can we complement them and make a science of cities, so to speak, and a science of corporations?

It is a very important question, certainly for scaling, because if it’s true that every city is unique, then of course, there’s no real science of cities. Every case would be special.

Another remarkable fact is that the planet has been urbanizing at an exponential rate. Namely, 200 years ago, here sitting in Manhattan, almost everything around me would be a field. There would be a teeny settlement down at Wall Street somewhere of a small number of people. But most of the people would be living in these fields all the way up Manhattan into upstate New York. Indeed, at that time, less than four percent of the United States was urban. Primarily, it was agricultural. And now, only 200 years later, it’s almost the reverse. More like 82 percent is urban and less than 20 percent is agricultural. This has happened at an extraordinarily fast rate — and in fact, faster than exponential.

The point to recognize is that all of the tsunami of problems we’re facing, from global warming, the environment, to the questions of financial markets and risk, crime, pollution, disease and so forth, all of them are urban.

They all have their origin in cities. They have become dominant since the Industrial Revolution. Most importantly, they’ve been with us for the last two or three hundred years, and somehow, we’ve only noticed them in the last ten or 15 years as if they’d never been here. Why? Because they’ve been increasing exponentially. We are on an exponential.

Cities are the cause of the problem, and they’re also the cause of the good life. They are the centers of wealth creation, creativity, innovation, and invention. They’re the exciting places. They are these magnets that suck people in. And that’s what’s been happening. And so they are the origin of the problems, but they are the origin of the solutions. And we need to come to terms with that, and we need to understand how cities work in a more scientific framework, meaning to what extent can we make it into a quantitative predictive, mathematizible kind of science.

Is that even possible?  And is it useful? That’s the quest.

The first thing was to ask the question, do they scale? I put together a wonderful team of people, and I’d like to mention their names, because they play an extremely important and seminal role.

One is a man named Luis Bettencourt also a physicist who is at Los Alamos and the Santa Fe Institute. A man named José Lobo, who was at Cornell when I first got him involved, an urban economist and now he’s at Arizona State. Another is a student, Deborah Strumsky, who was at Harvard when she joined us, and is now at the University of North Carolina. And there are others, but these were the main characters. Most importantly, they were people that were part of a trans-disciplinary kind of group. And they brought together the data. They did the data mining, the statistics, analysis, et cetera. They have the expertise and the credentials.

The result of all of that was a long, tedious kind of process. To make a long story short, indeed, we found that cities scaled. Just amazing. Cities do scale. Not only do they scale, but also there’s universality to their scaling. Let me just tell you a little bit about of what we discovered from the data to begin with.

The first result that we actually got was with my German colleagues, Dirk Helbing, and his then student, Christian Kuhnert, who then worked with me. One of the first results was a very simple one —the number of gas stations as a function of city size in European cities.

What was discovered was that they behaved sort of like biology. They scaled beautifully, as a power law, and the exponent of the power law was less than one, indicating an economy of scale. Not surprisingly, the bigger the city, the fewer gas stations you need per capita. There is an economy of scale.

But it scaled! That is, it was systematic! You tell me the size of a city and I’ll tell you how many gas stations it has, that kind of idea. And not only that, it scaled in exactly the same way across all European cities. Kind of interesting!
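As an illustration only (a minimal sketch, not the team’s actual analysis or data), a power-law relationship of this kind is just a straight line on log-log axes. The city sizes and gas-station counts below are synthetic, invented to mimic an exponent of roughly 0.85:

```python
# A minimal sketch (not the actual study): fitting a power law Y = c * N**beta
# to hypothetical city data by linear regression in log space.
import numpy as np

# Hypothetical (population, gas station count) pairs, for illustration only
N = np.array([50_000, 100_000, 250_000, 500_000, 1_000_000, 2_000_000])
Y = np.array([40, 72, 155, 280, 500, 900])

beta, log_c = np.polyfit(np.log(N), np.log(Y), 1)  # slope = exponent, intercept = log(prefactor)
print(f"fitted exponent beta ≈ {beta:.2f}")         # sublinear (< 1) indicates an economy of scale
print(f"predicted stations for a city of 750,000: {np.exp(log_c) * 750_000**beta:.0f}")
```

Once the exponent and prefactor are fitted, "tell me the size of a city and I’ll tell you how many gas stations it has" is exactly the last line: plug the population into the fitted formula.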

But then, we discovered two things later that were quite remarkable. First, every infrastructural quantity you looked at from total length of roadways to the length of electrical lines to the length of gas lines, all the kinds of infrastructural things that are networked throughout a city, scaled in the same way as the number of gas stations. Namely, systematically, as you increase city size, I can tell you, roughly speaking, how many gas stations there are, what is the total length of roads, electrical lines, et cetera, et cetera. And it’s the same scaling in Europe, the United States, Japan and so on.

It is quite similar to biology. The exponent, instead of being three quarters, was more like 0.85. So it’s a different exponent, but similar, and it’s still an economy of scale.

The truly remarkable result was when we looked at quantities that I will call “socioeconomic”. That is, quantities that have no analog in biology. These are quantities, phenomena that did not exist until about 10,000 years ago when men and women started talking to one another and working together and forming serious communities leading to what we now call cities, i.e. things like wages, the number of educational institutions, the number of patents produced, et cetera. Things that have no analog in biology, things we invented.

And if you ask, first of all, do they scale? The answer is yes, in a regular way. Then, how do they scale? And this was a surprise to me, I’m embarrassed to say. It should have been obvious beforehand, but they scaled in what we called a super linear fashion. Instead of the exponent being less than one, indicating economies of scale, the exponent was bigger than one, indicating what economists call increasing returns to scale.

What does that say? It says that, systematically, the bigger the city, the higher the wages you can expect, the more educational institutions in principle, more cultural events, more patents produced; it’s more innovative and so on. Remarkably, all to the same degree. There was a universal exponent, which turned out to be approximately 1.15, which, translated into English, says something like the following: if you double the size of a city, from 50,000 to a hundred thousand, a million to two million, five million to ten million, it doesn’t matter which, you systematically get a roughly 15 percent increase per capita in productivity, patents, the number of research institutions, wages and so on, and you systematically get a roughly 15 percent saving in the length of roads and general infrastructure.
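To see what those exponents imply numerically, here is my own back-of-the-envelope check, not a figure from the talk; the exact per-capita percentage depends on whether the exponent sits nearer 1.15 or 1.2, which is why "roughly 15 percent" is stated as an approximation:

```python
# A quick check of what the quoted exponents imply when a city doubles in size.
beta_infra, beta_socio = 0.85, 1.15   # exponents quoted in the text

double_infra = 2 ** beta_infra        # ≈ 1.80: infrastructure less than doubles
double_socio = 2 ** beta_socio        # ≈ 2.22: socioeconomic outputs more than double

print(f"infrastructure per capita falls by ≈ {(1 - double_infra / 2) * 100:.0f}%")
print(f"socioeconomic output per capita rises by ≈ {(double_socio / 2 - 1) * 100:.0f}%")
```

With 0.85 and 1.15 the per-capita effects come out near 10 percent; exponents closer to 0.8 and 1.2 give the quoted 15 percent, so the headline figure should be read as an order-of-magnitude rule of thumb.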

There are systematic benefits that come from increasing city size, both in terms of the individual getting something — which attracts people to the city, and in terms of the macroscopic economy. So the big cities are good in this sense.

However, some bad and ugly comes with it. The bad and the ugly are things like a systematic increase in crime and in various diseases, like AIDS, flu and so on. Interestingly enough, they all scale to the same 15 percent when you double the size. Put slightly differently, if you have a city of a million people and you broke it down into ten cities of a hundred thousand, those ten cities of a hundred thousand would require 30 to 40 percent more roads and 30 to 40 percent more general infrastructure. And you would get a systematic decrease in wages, productivity and invention. Amazing. But you’d also get a systematic decrease in crime, pollution and disease. So there are these trade-offs.
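The 30-to-40-percent figure follows directly from the exponents quoted above. Here is a small sketch of that comparison using 0.85 and 1.15 as stated; the function names and prefactors are mine and arbitrary, since the prefactors cancel in the ratio:

```python
# A back-of-the-envelope version of the one-big-city vs. ten-small-cities comparison:
# infrastructure scales as N**0.85, socioeconomic output as N**1.15.
def infrastructure(n, c=1.0, beta=0.85):
    return c * n ** beta

def socioeconomic(n, c=1.0, beta=1.15):
    return c * n ** beta

one_big   = infrastructure(1_000_000)
ten_small = 10 * infrastructure(100_000)
print(f"extra infrastructure for ten small cities: {ten_small / one_big - 1:.0%}")   # ≈ +41%

one_big_out   = socioeconomic(1_000_000)
ten_small_out = 10 * socioeconomic(100_000)
print(f"lost socioeconomic output from splitting up: {1 - ten_small_out / one_big_out:.0%}")  # ≈ -29%
```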

What does this mean?  What is this coming from?  And what do they imply?  Let me just say one of the things that they imply.

If cities are dominated by wealth creation and innovation, i.e. the super linear scaling laws, there are increasing returns to scale. How does that impact growth? What does that do for growth? Well, it turns out that had it been biology, dominated by economies of scale, you would have got a sigmoid curve, and you would have stopped growing. Bad for cities, we believe, and bad for economies.

Economies must be, in a capitalist system, ever expanding. It’s good that we have super linear scaling, because what that says is that you have open-ended growth. And that’s very good. Indeed, if you check it against data, it agrees very well. But there’s something very bad about open-ended growth.

One of the bad things about open-ended growth, growing faster than exponentially, is that it eventually leads to collapse. It leads to collapse mathematically because of something called a finite-time singularity. You hit something called a singularity, which is a technical term, and it turns out that as the system approaches this singularity, it will collapse if it actually reaches it. You have to avoid that singularity in order to avoid collapse. It’s great on the one hand that you have this open-ended growth. But if you kept going, of course, it doesn’t make any sense. Eventually you run out of resources anyway, but in any case you would collapse. And that’s what the theory says.
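For the mathematically inclined, here is a minimal sketch of what a finite-time singularity means in this setting, assuming growth of the form dN/dt = c·N^β with β > 1; the parameter values are purely illustrative and not fitted to any data:

```python
# Sketch of a finite-time singularity: dN/dt = c * N**beta with beta > 1.
# The closed-form solution N(t) = N0 * (1 - t/t_c)**(-1/(beta-1)) diverges at
# t_c = N0**(1-beta) / (c * (beta - 1)).  Illustrative parameters only.
beta, c, N0 = 1.15, 1e-3, 1_000.0

t_c = N0 ** (1 - beta) / (c * (beta - 1))
print(f"singularity (collapse threshold) at t_c ≈ {t_c:.0f} time units")

for frac in (0.5, 0.9, 0.99):
    t = frac * t_c
    N = N0 * (1 - t / t_c) ** (-1 / (beta - 1))
    print(f"at {frac:.0%} of t_c the system has grown {N / N0:,.0f}-fold")
```

The point of the sketch is only that superlinear feedback does not merely grow fast; it reaches infinity in finite time, which is why, in this picture, something has to intervene before t_c.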

How do you avoid that? Well, how have we avoided it? We’ve avoided it by innovation: by making a major innovation that, so to speak, resets the clock, so you can start over again with new boundary conditions. We’ve done that by making major discoveries or inventions, like discovering iron, or coal, or inventing computers, or inventing IT. But it has to be something that really changes the cultural and economic paradigm. It kind of resets the clock and we start over again.

There’s a theorem you can prove that says that if you demand continuous open growth, you have to have continuous cycles of innovation. Well, that’s what people believe, and it’s the way people have suggested that’s how you get out of the Malthusian paradox. This all agrees within itself but there is a huge catch.

I said earlier that in biology you have economies of scale, scaling that is sub linear, three quarters, less than one, and that the pace of life gets slower the bigger you are. In cities and social systems, you have the opposite. You have super linear scaling. You have increasing returns to scale. The bigger you are, the more you have rather than less.

It turns out, when you go through the theoretical framework, that this leads to the opposite of biology: the pace of life increases with size. So everything that’s going on in New York today is systematically going faster than it is in San Francisco, than it is in Santa Fe, even the speed of walking.

There’s data, and if you plot it (the data were actually taken primarily in European cities), you can see this systematic increase in walking speed with city size, in reasonable agreement with the theory.

The first thing is that we have this increasing pace of life. We have open-ended growth, increase in pace of life, and the threat of collapse because of the singularity. But there’s a big catch about this innovation. Theory says, sure, you can get out of collapse by innovating, but you have to innovate faster and faster.

Something that 20,000 years ago took 10,000 years to change now takes 25 years. So it is not a fixed clock that governs social life; it’s a clock that’s getting faster and faster. And so you have to innovate faster and faster in order to avoid the collapse. And it all comes out of this exponential growth driven by super linear scaling.
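A toy version of that accelerating clock (my illustration of the argument, not the actual model in the talk): if each innovation resets the growth equation but restarts it from the now-larger system, the time left before the next singularity shrinks with every cycle, so innovations must arrive faster and faster:

```python
# Sketch: with superlinear growth, the time to the next singularity scales as
# t_c ∝ N**(1 - beta) with beta > 1, so it shrinks as the system N gets bigger.
beta, c = 1.15, 1e-3
N = 1_000.0                      # size of the system at the first reset (illustrative)

for cycle in range(1, 6):
    t_c = N ** (1 - beta) / (c * (beta - 1))
    print(f"cycle {cycle}: size {N:>12,.0f} -> next innovation needed within ≈ {t_c:,.0f} time units")
    N *= 10                      # assume each cycle ends with a tenfold larger system
```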

The question then is, is this sustainable? The system would collapse, because eventually you would have to be making a major innovation, like IT, every six months. Well, that’s completely crazy. First of all, we’re human beings; we can’t even adapt to that. We can’t do it, so this is very threatening.

This leads then to all kinds of questions about global sustainability and how can you construct a conceptual framework that gives rise to having wealth creation, innovation, this kind of quality and standard of life, wealth production, and yet, not grow in such a way that you are probing the singularity and collapsing. That’s the challenge. That’s certainly something that we have to face.

Let me just say a few words about ideas as to why there is scaling in cities. What we’ve shown is that there’s universality: on the one hand, you have this sub linear scaling, economies of scale, for infrastructure, like biology. But the dominant part of the city, wealth creation, innovation and the socioeconomic kinds of quantities that have no analog in biology, scales super linearly.

This is true for any metric you want to think of and across the world. If you look at Japanese data or Chinese data or data from Chile or Colombia or the Netherlands or Portugal or the United States, it all looks the same. Yet these cities have nothing to do with one another.

It says that geography and history play a subdominant role, as they did in biology, in a sense. And so if you tell me the size of a city in the United States, I can tell you with some 85 percent accuracy how many police it will have, how many AIDS cases, what the total length of its roads is, how many patents it’s producing and so on, on average.

Of course, you can use that as a baseline for talking about actual individual cities, how they over and underperform relative to this idealized scaling number. But the question is, where in the hell does that come from?  What is it that’s universal that transcends countries and cultures?

Well obviously, it’s what cities are really about, not these buildings and the roads and things, but the people. It’s people. What we believe is that the scaling laws are a manifestation of social networks, of the universality of the way human beings interact, what we’re doing now, talking to one another, exchanging ideas, and doing tasks together, and so on.

It is the nature of those networks and the clustering — very importantly, the hierarchical clustering of those networks, the family structure, the way families interact, and then all the way out through businesses and so on, that there’s a kind of universality to that that is representative of the kind of scale at which humans interact.

For example, even though families in China and the United States traditionally may look different, most people cannot interact in a serious, dedicated way with more than five or six people. It doesn’t matter how big the family actually is. Despite Facebook, you cannot have a hundred best friends anywhere in the world.

These things are representative of the universal nature of social networking. Our belief is that it is the nature of that, and the hierarchy of it. For example, there is not only the hierarchy in size, but the hierarchy in the fact that your strongest interaction is with your family. You have a much weaker interaction with your colleagues in your job, and in your job situation you have a much weaker interaction still with the CEO of the company, and so on around the hierarchy. There is this presumed self-similar structure that goes up through the hierarchy, in terms of the size of the hierarchy and in terms of the strength of interaction.

We believe it is that hierarchy which transcends all aspects of the city and is being represented by these kinds of laws. So how is it that we can plot the GDP of the city, the number of AIDS cases and wages on one plot, and they overlap one another? They’re just the same line. Well, that’s because, from this viewpoint, they’re all manifestations of people interacting with one another.

Companies are predominantly dominated by economies of scale rather than innovation

The last piece of this is to take it to companies. Again, I must say that when I first started working on this, I just assumed companies were little cities so to speak. I also assumed they were dominated by creativity and so on.

It took us a long time to get data for companies, because unlike cities, you have to pay for that data. But we’ve just done it. (…)

In fact, we’ve done it primarily without paying attention to sector, although we’ve done some decomposition into sectors. What I’m going to talk about here is regardless of sector. If you take all companies together for a moment, what you see when you plot the various metrics of a company (sales, profits, taxation, assets) versus company size, using the number of employees as the measure of size (you could use others, such as sales itself, but we used employees), is that you indeed find scaling.

There’s much more variation, many more outliers, among companies than among cities, and more among cities than among organisms. But nevertheless, you see very good evidence of scaling. And the thing that surprised us about this scaling was that it was like biology, not like cities. It was predominantly sub linear.

That was surprising because, in the kind of conceptual framework we developed, sub linear scaling was a reflection of economies of scale, and super linear scaling a reflection of wealth creation and innovation. This says that companies are predominantly dominated by economies of scale rather than innovation.

If companies are dominated by economies of scale, that is, by sub linear scaling, then unlike cities (which have open-ended growth) they should grow and then stop growing. And not only that: if you extrapolated from biology, they would indeed, ultimately, die.

We looked at the growth curves of the metrics of a company, like its assets or its profits as a function of time, or its number of employees as a function of time. Indeed, the generic behavior is a sigmoid: they grow fast and then they stop. All the big companies stop at roughly the same value, which is intriguing in itself. I think that number is about half a trillion dollars.
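A sigmoid of this kind can be sketched with an ordinary logistic curve. The function below and all its parameters are invented for illustration, with the ceiling set to the half-trillion-dollar asymptote mentioned above rather than fitted to the actual company data:

```python
# A minimal sketch of sigmoidal (logistic) growth with made-up parameters --
# not a fit to the company data described in the text.
import math

def logistic(t, carrying_capacity=5e11, growth_rate=0.25, midpoint=30.0):
    """Hypothetical company size (e.g. assets in dollars) at time t (years)."""
    return carrying_capacity / (1.0 + math.exp(-growth_rate * (t - midpoint)))

for year in (0, 10, 20, 30, 40, 60, 80):
    print(f"year {year:>2}: size ≈ ${logistic(year):,.0f}")
```

Fast early growth, an inflection, then a plateau near the carrying capacity: that is the generic shape the ten thousand company curves described below trace out.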

We have a wonderful graph that has about ten thousand companies plotted on it, showing these growth curves. Just eyeballing this spaghetti-looking graph, you can see that everything grows and then stops growing. That’s what it looks like. We’re still in the middle of analyzing a lot of this.

A picture emerges. Companies are more like organisms: they grow and asymptote. Cities are open-ended.

More importantly, what we discovered is that, on the one hand, sales increase linearly with company size. On the other hand, profits increase sub linearly, with an exponent of about one eighth. This data is all U.S. data on publicly traded companies.

The ratio of profits to sales is systematically decreasing, so that eventually the profit-to-sales margin goes to zero. And if you look at the data, you see that the fluctuations in all these quantities are proportional to the size of the company; the fluctuations get bigger and bigger while profits shrink relative to sales. Even though profits increase the bigger you are, so that you think, “we made several billion dollars,” what you realize is that you’re in an environment where the fluctuations eventually become bigger than that. This is possibly the mechanism by which companies die.
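Taking the quoted exponents at face value (sales roughly linear in employees, profits scaling with an exponent of about one eighth), the vanishing margin falls out of the arithmetic. The prefactors and function names below are mine and arbitrary; only the trend matters:

```python
# Sketch of the shrinking profit-to-sales margin implied by the quoted exponents.
def sales(n):    return 1.0e5 * n            # roughly linear in employee count
def profits(n):  return 2.0e5 * n ** 0.125   # strongly sublinear in employee count

for employees in (100, 1_000, 10_000, 100_000):
    margin = profits(employees) / sales(employees)
    print(f"{employees:>7,} employees: profit/sales margin ≈ {margin:.1%}")
```

Since margin ∝ n**(0.125 − 1), it decays as the company grows, which is the "margin going to zero" claim in the paragraph above.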

Let me tell you the interpretation. Again, this is still speculative.

The great thing about cities, the thing that is amazing about them, is that as they grow, so to speak, their dimensionality increases. That is, the space of opportunity, the space of functions, the space of jobs just continually increases, and the data shows that: if you look at job categories, their number continually increases. I’ll use the word “dimensionality.” It opens up. And in fact, one of the great things about cities is that they support crazy people. You walk down Fifth Avenue and you see crazy people; there are always crazy people. Well, that’s good. The city is tolerant of extraordinary diversity.

This is in complete contrast to companies, with the exception, maybe, of companies at the beginning (think of the image of the Google boys in the back garage, with ideas for the search engine, no doubt tossing around all kinds of crazy ideas and maybe even having crazy people around them).

Well, Google is a bit of an exception, because it still tolerates some of that. But most companies probably start out with some of that buzz. The data indicates, though, that at about 50 to a hundred employees that buzz starts to stop, and a company that was more multi-dimensional, more evolved, becomes one-dimensional. It closes down.

Indeed, if you go to General Motors or you go to American Airlines or you go to Goldman Sachs, you don’t see crazy people. Crazy people are fired. Well, to speak of crazy people is taking the extreme. But maverick people are often fired.

It’s not surprising to learn that when manufacturing companies are in a downturn, they decrease research and development, and in some cases actually get rid of it, thinking, “oh, we can get that back; in two years we’ll be back on track.”

Well, this kind of thinking kills them. This is part of the killing, and this is part of the change from super linear to sub linear: companies allow themselves to be dominated by bureaucracy and administration over creativity and innovation, and unfortunately, it’s necessary. You cannot run a company without administration. Someone has got to take care of the taxes and the bills and the cleaning of the floors and the maintenance of the building and all the rest of that stuff. You need it. And the question is, “can you do it without it dominating the company?” The data suggests that you can’t.

The question is, as a scientist, can we take these ideas and do what we did in biology, at least based on networks and other ideas, and put this into a quantitative, mathematizable, predictive theory, so that we can understand the birth and death of companies, how that stimulates the economy?  How it’s related to cities? How does it affect global sustainability and have a predictive framework for an idealized system, so that we can understand how to deal with it and avoid it? If you’re running a bigger company, you can recognize what the metrics are that are driving you to mortality, and possibly put it off, and hopefully even avoid it.

Otherwise we have a theory that tells you when Google and Microsoft will eventually die, and die might mean a merger with someone else.

That’s the idea and that’s the framework, and that’s what this is.”

Geoffrey West, British theoretical physicist, former president and distinguished professor of the Santa Fe Institute, Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster, Edge, 23 May 2011. (Photo Illustration by Hubert Blanz)

Geoffrey West: The surprising math of cities and corporations



Physicist Geoffrey West has found that simple, mathematical laws govern the properties of cities — that wealth, crime rate, walking speed and many other aspects of a city can be deduced from a single number: the city’s population. In this mind-bending talk from TEDGlobal he shows how it works and how similar laws hold for organisms and corporations.

Geoffrey West: The surprising math of cities and corporations, TED.com, July 2011

Why Cities Keep on Growing, Corporations Always Die, and Life Gets Faster | Fora.tv



As organisms, cities, and companies scale up, they all gain in efficiency, but then they diverge. The bigger an organism, the slower it runs. Yet the bigger a city is, the faster it runs. And cities are structurally immortal, while corporations are structurally doomed. Scaling up always creates new problems; cities can keep innovating faster than their problems indefinitely, while corporations cannot.

These revolutionary findings come from Geoffrey West’s examination of vast quantities of data on the metabolic/economic behavior of organisms and organizations. A theoretical physicist, West was president of Santa Fe Institute from 2005 to 2009 and founded the high energy physics group at Los Alamos National Laboratory.

— Full program: Long Now Foundation, Fora.tv, Cowell Theatre, San Francisco, CA, Event Date: 07.25.2011

See also:

☞ Geoffrey West, Scaling Laws In Biology And Other Complex Systems, Google Tech talks August 1, 2007
☞ Geoffrey West, On the Scale and Unity of Life from Cells to Cities, Research channel lecture
The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.
☞ Jonah Lehrer, A Physicist Solves the City, The New York Times, Dec 17, 2010
Vlatko Vedral: Decoding Reality: the universe as quantum information
☞ Mark Buchanan, Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, New Scientist, 05 Sep 2011
‘To understand is to perceive patterns’ - B. Fuller, Powell, Johnson, West, Kurzweil & video narration by J. Silva

Apr
4th
Mon
permalink

Ronald Dworkin on moral values, dignity and self-respect
                                                                               
Contemplation, Perseverance, Imagination, and Free Will, from the morality play, Hickscorner (source)

"It was voguish to say that there’s no right answer to legal questions. But if you say there’s no right answer in interpreting a law and you’re talking about justice, you’re not really getting involved in the issues that matter. Most intellectuals thought effectively that moral or legal judgments were just emotional expressions with no basis in cognition. Freddie Ayer argued that moral judgments are just grunts of approval or disapproval.” (…)

The methods of science too undermined convictions that there are objective values. “The idea is that we are not entitled to think our moral convictions true unless they are required by pure reason or produced by something in the world.” In the book, Ronald Dworkin calls this “the Gibraltar of all mental blocks”. We must, he argues, get over it. And yet this Gibraltar rules the waves of philosophy: a recent issue of Philosophy Now was themed around the death of morality. If moral judgments can’t be true, do we need them at all?

When I first studied philosophy 30 years ago, my undergraduate textbook made relativism and scepticism about morality seem natural. It was called Ethics: Inventing Right and Wrong by JL Mackie and began: “There are no objective values.” It suggested that the fact that values conflict (I support gay marriages, while you – you monster – think they’re a disgrace) indicates they can’t be true. (…)

When Mackie says: ‘All moral propositions are false’, that’s a moral proposition, which is false if his proposition 'All moral propositions are false' is true, which it isn't.” A-ha, a version of the Cretan liar paradox that Doctor Who used to make a clever robot short-circuit and explode. (…)

But if objective moral values aren’t in the world, where are they hiding? (…) Dworkin finally tells us when we are justified in thinking any value judgment true, namely: “When we are justified in thinking that our arguments for holding it true are adequate arguments.” Isn’t that circular? Yes, but Dworkin argues it’s good circular, not bad circular. (…)

Dworkin is a hedgehog. “The hedgehog is an anti-pluralist image. Pluralism was Isaiah Berlin's extremely popular thought that there are truths but they conflict. I think it’s wrong. Truths don’t conflict in the domain of value any more than in science.” (…)

Almost all moral philosophy nowadays is steeped in self-abnegation. Mine starts from self-assertion, which was popular with the Greeks like Aristotle and Plato but not now. Now morality is perceived as being about self-sacrifice. I try to show how that’s wrong.

Why is self-assertion important? “We have a responsibility to live well. Our challenge is to act as if we respect ourselves. Enjoying ourselves is not enough.” But doesn’t self-assertion clash with our moral duties to others? “No. The first challenge is to live well – that is ethics – and then to see how that connects with what we owe other people – which is morality. The connection is twofold. One is respect for the importance of other people’s lives. And the other is equal concern for their lives.

Imagine you’re in a lifeboat and you have to decide which of two children is to go overboard to their deaths. If you’re a utilitarian – who believes what’s important morally is maximising the happiness of the greatest number – you wouldn’t mind if it was your child or another’s who dies. Dworkin’s system holds that you’re justified in saving your child. Why? “Because it’s my child! Because they’re part of what it means for my life to be lived well. They’re part of my life, for which I take responsibility.” (…)

Such favouritism can’t work at a political level: you can’t give someone tax breaks because he’s your son. But at the moral level it does: you can save someone because they’re your child, while at the same time respecting other people’s lives. Each person must take his own life seriously: he must accept that it is a matter of importance that his life be a successful performance rather than a wasted opportunity. I’m talking about dignity. It’s a term overused by politicians, but any moral theory worth its salt needs to proceed from it." (…)

I’ve tried to be responsible for my decisions and to make an authentic life. When I was a Wall Street lawyer, I realised I didn’t want that life. So I went and did what I found most fulfilling, thinking about, arguing for the things that are hard, important and rewarding. I’ve tried to do it well. I can’t say if I’ve succeeded.”

Stuart Jeffries writing about Ronald Dworkin (American philosopher and scholar of constitutional law) in Ronald Dworkin: ‘We have a responsibility to live well’, Guardian, 31 March 2011.

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
The Biology of Ethics. When it comes to morality, the philosopher Patricia Churchland refuses to stand on principle, The Chronicle Review, June 12, 2011.

Mar
19th
Sat
permalink

The genes are so different, the scientists argue, that giant viruses represent a fourth domain of life

Charles Darwin pictured evolution as a grand tree, with the world’s living species as its twigs. Scientists identify 10,000 new species a year, but they’ve got a long, long way to go before finding all of Earth’s biodiversity. So far, they have identified 1.5 million species of animals, but there may be 7 million or more in total. Beyond the animal kingdom, our ignorance balloons. Scoop up some sea water or a cup of soil, and there will likely be thousands of new species of microbes lurking there. (…)

The tree is, in some ways, more like a web. Genes sometimes slip from one species to another, especially among microbes. (…)

Cell fusions and horizontal gene transfer are probably best portrayed by interconnected branches, rather than diverging ones. The base of the tree seems especially tangled, more like a mangrove than an oak. With all those caveats in mind, here’s a rough picture of the tree of life that Norman Pace of the University of Colorado offered in a scientific review he published in 2009. It shows life divided into three domains: eukaryotes (that’s us), bacteria, and archaea.


There’s a lot of debate about whether eukaryotes actually split off from within the archaea, or just branched off from a common ancestor. Nevertheless, the two forms of life are quite distinct. For one thing, the common ancestor of living eukaryotes acquired oxygen-consuming bacteria that became a permanent part of their cells, called mitochondria. They’re keeping you alive right now.

A lot of scientists wonder how all the new species that scientists are discovering are going to change the shape of this tree. Will its three-part structure endure, with each part simply growing denser with new branches? Or have we been missing entire swaths of the tree of life? (…)

Giant viruses also explode a lot of conventional ideas of what viruses are supposed to be. Not only are giant viruses monstrously big, but they are overloaded with genes. A flu virus has just ten genes, for example, but a number of giant viruses have well over a thousand. Giant viruses even get infected by viruses of their own.

For years, researchers have been finding that the diversity of genes in viruses is tremendous. It turns out that giant viruses are particularly bizarre, genetically speaking. Didier Raoult and his colleagues compared one set of genes in giant viruses to their counterparts in other lineages. Here’s the evolutionary tree they came up with. (The giant virus genes are shown in red.)


The genes are so different, the scientists argue, that giant viruses represent a fourth domain of life. Here’s an impressionistic figure they created to show how the four domains emerged from the web of gene-trading early on in the history of life (from left to right, archaea, bacteria, eukaryotes, and giant viruses).


Jonathan Eisen of UC Davis and his colleagues have published still more evidence for a possible fourth domain. (Some of the evidence can be found in a paper in PLOS One; the rest is in a shorter note at PLOS Currents.) Their evidence comes from a voyage Craig Venter and his colleagues took in his yacht, scooping up sea water along the way. They ripped open the microbes in the water and pulled out all their genes. The advantage of this approach is that it allowed the scientists to amass a database of literally tens of millions of new genes. The downside was that they could only look at the isolated genes, rather than the living microbes from which they came. (…)

That discovery might show how this possible fourth domain got its start. Did it start out as ordinary cellular life, and then some of its genes ended up in viruses? Or is the fourth domain another sign that life as we know it actually originated as viruses?

Carl Zimmer, Glimpses of the Fourth Domain?, Discover Magazine, March 18th, 2011.

See also:

GiantVirus.org
A Tree of Eukaryotes (infographic)

Feb
10th
Thu
permalink

Daniel Kahneman on the riddle of experience vs. memory

[2:38] “We might be thinking of ourselves in terms of two selves.

There is an experiencing self, who lives in the present and knows the present, is capable of re-living the past, but basically it has only the present. It’s the experiencing self that the doctor approaches — you know, when the doctor asks, “Does it hurt now when I touch you here?”

And then there is a remembering self, and the remembering self is the one that keeps score, and maintains the story of our life, and it’s the one that the doctor approaches in asking the question, “How have you been feeling lately?” or “How was your trip to Albania?” or something like that.

Those are two very different entities, the experiencing self and the remembering self and getting confused between them is part of the mess of the notion of happiness.

Now, the remembering self is a storyteller. And that really starts with a basic response of our memories — it starts immediately. We don’t only tell stories when we set out to tell stories. Our memory tells us stories, that is, what we get to keep from our experiences is a story. (…)

[6:21] What defines a story? And that is true of the stories that memory delivers for us, and it’s also true of the stories that we make up. What defines a story are changes, significant moments and endings. Endings are very, very important and, in this case, the ending dominated.

Now, the experiencing self lives its life continuously. It has moments of experience, one after the other. And you ask: what happens to these moments? And the answer is really straightforward: they are lost forever. I mean, most of the moments of our life — and I calculated — you know, the psychological present is said to be about three seconds long. Which means that, you know, in a life there are about 600 million of them. In a month, there are about 600,000. Most of them don’t leave a trace. Most of them are completely ignored by the remembering self. And yet, somehow you get the sense that they should count, that what happens during these moments of experience is our life. It’s the finite resource that we’re spending while we’re on this earth. And how to spend it would seem to be relevant, but that is not the story that the remembering self keeps for us.
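As a rough order-of-magnitude check of that arithmetic (my assumptions: a three-second psychological present, a 70-year life and a 30-day month), the counts land in the same few-hundred-million and few-hundred-thousand ballpark as the figures Kahneman quotes:

```python
# Rough check of the "psychological present" arithmetic.
# Assumes a 3-second present, a 70-year life and a 30-day month.
seconds_per_moment = 3
life_seconds  = 70 * 365.25 * 24 * 3600
month_seconds = 30 * 24 * 3600

print(f"moments in a life:  ~{life_seconds  / seconds_per_moment:,.0f}")   # a few hundred million
print(f"moments in a month: ~{month_seconds / seconds_per_moment:,.0f}")   # several hundred thousand
```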

So we have the remembering self and the experiencing self, and they’re really quite distinct. The biggest difference between them is in the handling of time.
From the point of view of the experiencing self, if you have a vacation, and the second week is just as good as the first, then the two week vacation is twice as good as the one week vacation. That’s not the way it works at all for the remembering self. For the remembering self, a two week vacation is barely better than the one week vacation because there are no new memories added. You have not changed the story. And in this way, time is actually the critical variable that distinguishes a remembering self from an experiencing self. Time has very little impact on this story.

Now, the remembering self does more than remember and tell stories. It is actually the one that makes decisions because, if you have a patient who has had, say, two colonoscopies with two different surgeons and is deciding which of them to choose, then the one that chooses is the one that has the memory that is less bad, and that’s the surgeon that will be chosen. The experiencing self has no voice in this choice. We actually don’t choose between experiences. We choose between memories of experiences. And, even when we think about the future, we don’t think of our future normally as experiences. We think of our future as anticipated memories. And basically you can look at this, you know, as a tyranny of the remembering self, and you can think of the remembering self sort of dragging the experiencing self through experiences that the experiencing self doesn’t need.

I have that sense that when we go on vacations this is very frequently the case, that is, we go on vacations, to a very large extent, in the service of our remembering self. And this is a bit hard to justify I think. I mean, how much do we consume our memories? That is one of the explanations that is given for the dominance of the remembering self.

And when I think about that, I think about a vacation we had in Antarctica a few years ago, which was clearly the best vacation I’ve ever had, and I think of it relatively often, relative to how much I think of other vacations. And I probably have consumed my memories of that three week trip, I would say, for about 25 minutes in the last four years. Now, if I had ever opened the folder with the 600 pictures in it, I would have spent another hour. Now, that is three weeks, and that is at most an hour and a half. There seems to be a discrepancy. Now, I may be a bit extreme, you know, in how little appetite I have for consuming memories, but even if you do more of this, there is a genuine question.

Why do we put so much weight on memory relative to the weight that we put on experiences?

So I want you to think about a thought experiment. Imagine that your next vacation you know that at the end of the vacation all your pictures will be destroyed, and you’ll get an amnesic drug so that you won’t remember anything. Now, would you choose the same vacation? (Laughter) And if you would choose a different vacation, there is a conflict between your two selves, and you need to think about how to adjudicate that conflict, and it’s actually not at all obvious because, if you think in terms of time, then you get one answer. And if you think in terms of memories, you might get another answer. Why do we pick the vacations we do, is a problem that confronts us with a choice between the two selves.

Now, the two selves bring up two notions of happiness. There are really two concepts of happiness that we can apply, one per self. So you can ask: How happy is the experiencing self? And then you would ask: How happy are the moments in the experiencing self’s life? And they’re all — happiness for moments is a fairly complicated process. What are the emotions that can be measured? And, by the way, now we are capable of getting a pretty good idea of the happiness of the experiencing self over time. If you ask for the happiness of the remembering self, it’s a completely different thing. This is not about how happily a person lives. It is about how satisfied or pleased the person is when that person thinks about her life. Very different notion. Anyone who doesn’t distinguish those notions, is going to mess up the study of happiness, and I belong to a crowd of students of well-being, who’ve been messing up the study of happiness for a long time in precisely this way. (…)

[14:14] We know something about what controls satisfaction of the happiness self. We know that money is very important, goals are very important. We know that happiness is mainly being satisfied with people that we like, spending time with people that we like. There are other pleasures, but this is dominant. So if you want to maximize the happiness of the two selves, you are going to end up doing very different things. The bottom line of what I’ve said here is that we really should not think of happiness as a substitute for well-being. It is a completely different notion. (…)

[16:26] It is very difficult to think straight about well-being, and I hope I have given you a sense of how difficult it is. (…)

[17:01] I think the most interesting result that we found in the Gallup survey is a number that we absolutely did not expect to find. We found it with respect to the happiness of the experiencing self, when we looked at how feelings vary with income. It turns out that, below an income of 60,000 dollars a year for Americans (and that’s a very large, representative sample of Americans, about 600,000 people), people are unhappy, and they get progressively unhappier the poorer they get. Above that, we get an absolutely flat line. I mean, I’ve rarely seen lines so flat. Clearly, what is happening is that money does not buy you experiential happiness, but lack of money certainly buys you misery, and we can measure that misery very, very clearly. In terms of the other self, the remembering self, you get a different story: the more money you earn, the more satisfied you are. That does not hold for emotions. (…)

[19:01] How to enhance happiness, goes very different ways depending on how you think, and whether you think of the remembering self or you think of the experiencing self.”

Daniel Kahneman, Israeli-American psychologist and Nobel laureate, notable for his work on the psychology of judgment, decision-making, behavioral economics and hedonic psychology, The riddle of experience vs. memory, TED.com, Feb 2010
(Full transcript)

See also:

Daniel Kahneman: The Marvels and the Flaws of Intuitive Thinking
Our sense of time is deeply entangled with memory
The Human Hard Drive: Antonio Damasio and Ottavio Arancio on How We Make (And Lose) Memories
Eric R. Kandel, The Biology of Memory: A Forty-Year Perspective (pdf), Department of Neuroscience, Columbia University, New York
Daniel Kahneman on thinking ‘Fast And Slow’: How We Aren’t Made For Making Decisions

Jan
22nd
Sat
permalink

Steve Stewart-Williams on the evolution and the ultimate purpose of life

             

                       (Illustration: The Meaning Of Life by Ehecatl Malinalli)

"Evolutionary theory answers one of the most profound and fundamental questions human beings have ever asked themselves, a question that has plagued reflective minds for as long as reflective minds have existed in the universe: Why are we here? The question was answered in 1859 by the English naturalist Charles Darwin, and the answer can be stated in just six words:

We are here because we evolved.

                                      

"What?" I hear you exclaim. "Is that it? That’s no answer!"

Even people who fully accept that we evolved are liable to have this reaction. It just doesn’t seem to be a satisfactory answer to the question of why we are here. Certainly, it’s a good answer to one interpretation of the question. But this is not the interpretation most people have in mind when they reflect on the issue. The real intent behind the question is better captured with another question:

For what purpose are we here?

What I’m going to argue in this post is that evolutionary theory provides an answer to this sense of the question as well. But before I get to that, let’s survey some of the interesting answers that people have given to the question of the meaning of life over the ages.

Religious Answers

A lot of answers have come from religion. Among the views associated with the Western religions is that the purpose of life is…

  • to serve or submit to God (the word “Islam” means submission)
  • to fulfil the purpose for which God made us
  • to gain entrance to heaven
  • to look after the planet
  • to convert people to one’s religion

Eastern answers have a very different flavour. They include such ideas as that the purpose of life is…

  • to break free of the cycle of reincarnation and karma
  • to achieve enlightenment and be extinguished as an individual conscious entity, or merge back into some kind of collective consciousness

Secular Answers

There are also various secular or religion-neutral answers; I googled “meaning of life” and came up with some very cool quotations on the topic (I particularly like the fourth):

  • Kurt Vonnegut: “We are here to help each other get through this thing, whatever it is.”
  • The Dalai Lama: “The purpose of our lives is to be happy.”
  • Ralph Waldo Emerson: “The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well.”
  • Nelson Henderson: “The true meaning of life is to plant trees, under whose shade you do not expect to sit.”
  • H. L. Mencken: “You come into the world with nothing, and the purpose of your life is to make something out of nothing.”
  • Monty Python: “It’s nothing very special. Try to be nice to people, avoid eating fat, read a good book every now and then, get some walking in, and try to live together in peace and harmony with people of all creeds and nations.”

Another non-religious idea I’m very fond of is that the purpose of life is to create something that no one else could have created, and thus to bring into existence something that wouldn’t exist if you hadn’t existed. In any case, let’s now turn our attention to the question of what evolutionary theory brings to the party.

Evolution and the Meaning of Life

First, I should make clear what I’m not going to argue. I’m not going to argue that evolutionary theory implies that the meaning of life is to survive and reproduce, or put forward our genes, or enhance our fitness or anything like that. Evolutionary theory tells us where we came from, not what we should do now that we’re here. So what does the theory imply?

To answer this, we need to look at some background ideas. Traditional explanations for the “design” found in organisms (e.g., the design found in the human eye) involved a style of explanation known as teleological explanations. Teleological explanations are framed in terms of purposes and future consequences. For example, we might say that the giraffe has a long neck for the purpose of feeding on leaves high in trees. From a Darwinian perspective, this is actually the wrong answer. In fact, it’s not just the wrong answer; it’s the wrong kind of answer to questions in biology. The giraffe does not have a long neck in order to achieve this or any other future goal. It has a long neck because long-necked giraffes in the past were more likely to survive and reproduce than were their short-necked counterparts, and thus long-necked giraffes were more likely to pass on the genes contributing to their longer necks. This point is crucial to a proper understanding of evolutionary theory: There is no teleological explanation for long necks, only a historical explanation. A historical explanation focuses not on future effects, but on the past circumstances that brought adaptations about. (…)

Darwin showed that we don’t need to posit any kind of foresight or future-directed purpose underlying the apparent design in the biological world. In doing so, he showed us that there is no reason to think that there is a teleological answer to the question of why we are here. There is only a historical one.

Thus, evolutionary theory provides answers to both senses of the question of why we are here, the historical and the teleological:

Historical: We are here because we evolved.

Teleological: We are not here for any purpose.

That’s right; that’s what I’m saying: we are not here for any purpose. Of course, we all have our own little purposes in life that we choose and that make our lives meaningful in the emotional sense. But if we’re interested in the question of whether life is ultimately meaningful, rather than whether it’s potentially emotionally meaningful, well, after Darwin there is no reason at all to suppose that it is - there is no reason to assume that life has any ultimate meaning or purpose.

A Gloomy Conclusion?

This might sound like a gloomy conclusion, especially for those of us who were brought up believing that there is some overarching purpose to the universe or meaning to our lives. The first point to make is that, even if it is a gloomy conclusion, this says absolutely nothing about whether it’s a true or an accurate conclusion. But as it happens, it’s not necessarily such a gloomy conclusion anyway. There’s an important distinction between the idea that life is ultimately meaningless (which is an abstract, philosophical conclusion), and the feeling that one’s own life is meaningless (which is a symptom of depression). Most people can live perfectly happy lives even while accepting that life has no ultimate meaning, at least once they get used to the idea. Some even cheerfully accept that life is meaningless and view it as amusing in a strange kind of way - a cosmic joke but without a joke teller.

This is an issue that the existentialist philosophers grappled with and agonized over, and a lot of them came to the same conclusion that I have: that life is ultimately meaningless. But many found a silver lining in this cloud. They concluded that, if there’s no meaning or purpose imposed on us from outside ourselves, then that leaves us in the position where we are free to choose our own meanings and purposes in life, both as individuals and as a species. And for many people, this is a deeply liberating idea. I’ll leave the final word to the philosopher E. D. Klemke, who wrote…

"An objective meaning - that is, one which is inherent within the universe or dependent upon external agencies - would, frankly, leave me cold. It would not be mine… I, for one, am glad that the universe has no meaning, for thereby is man all the more glorious. I willingly accept the fact that external meaning is non-existent… for this leaves me free to forge my own meaning.”

Steve Stewart-Williams, The Meaning of Life Revealed! Evolution and the ultimate purpose of life, Psychology Today, Jan 8, 2011. This post is excerpted, with changes, from the book Darwin, God and the Meaning of Life by Steve Stewart-Williams.

See also: Debate: Does the Universe have a purpose? Ridley, Shermer, Dawkins VS Wolpe, Craig, Geivett (video)