Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso








Science historian George Dyson: Unravelling the digital code
George Dyson (Photo: Wired)

"It was not made for those who sell oil or sardines."

— G. W. Leibniz, ca. 1674, on his calculating machine

A universe of self-replicating code

Digital organisms, while not necessarily any more alive than a phone book, are strings of code that replicate and evolve over time. Digital codes are strings of binary digits — bits. Google is a fantastically large number, so large it is almost beyond comprehension, distributed and replicated across all kinds of hosts. When you click on a link, you are replicating the string of code that it links to. Replication of code sequences isn’t life, any more than replication of nucleotide sequences is, but we know that it sometimes leads to life.

Q [Kevin Kelly]: Are we in that digital universe right now, as we talk on the phone?

George Dyson: Sure. You’re recording this conversation using a digital recorder — into an empty matrix of addresses on a microchip that is being filled up at 44 kilobytes per second. That address space full of numbers is the digital universe.

Q: How fast is this universe expanding?

G.D.: Like our own universe at the beginning, it’s more exploding than expanding. We’re all so immersed in it that it’s hard to perceive. Last time I checked, the digital universe was expanding at the rate of five trillion bits per second in storage and two trillion transistors per second on the processing side. (…)

Q: Where is this digital universe heading?

G.D.: This universe is open to the evolution of all kinds of things. It’s cycling faster and faster. Even with Google and YouTube and Facebook, we can’t consume it all. And we aren’t aware of what this space is filling up with. From a human perspective, computers are idle 99 per cent of the time. While they’re waiting for us to come up with instructions, computation is happening without us, as computers write instructions for each other. As Turing showed, this space can’t be supervised. As the digital universe expands, so does this wild, undomesticated side.

— George Dyson interviewed by Kevin Kelly in Science historian George Dyson: Unravelling the digital code, Wired, Mar 5, 2012.

"Just as we later worried about recombinant DNA, what if these things escaped? What would they do to the world? Could this be the end of the world as we know it if these self-replicating numerical creatures got loose?

But, we now live in a world where they did get loose—a world increasingly run by self-replicating strings of code. Everything we love and use today is, in a lot of ways, self-reproducing exactly as Turing, von Neumann, and Barricelli prescribed. It’s a very symbiotic relationship: the same way life found a way to use the self-replicating qualities of these polynucleotide molecules to the great benefit of life as a whole, there’s no reason life won’t use the self-replicating abilities of digital code, and that’s what’s happening. If you look at what people like Craig Venter and the thousand less-known companies are doing, we’re doing exactly that, from the bottom up. (…)

What’s, in a way, missing in today’s world is more biology of the Internet. More people like Nils Barricelli to go out and look at what’s going on, not from a business or what’s legal point of view, but just to observe what’s going on.

Many of these things we read about in the front page of the newspaper every day, about what’s proper or improper, or ethical or unethical, really concern this issue of autonomous self-replicating codes. What happens if you subscribe to a service and then as part of that service, unbeknownst to you, a piece of self-replicating code inhabits your machine, and it goes out and does something else? Who is responsible for that? And we’re in an increasingly gray zone as to where that’s going. (…)

Why is Apple one of the world’s most valuable companies? It’s not only because their machines are so beautifully designed, which is great and wonderful, but because those machines represent a closed numerical system. And they’re making great strides in expanding that system. It’s no longer at all odd to have a Mac laptop. It’s almost the normal thing.

But I’d like to take this to a different level, if I can change the subject… Ten or 20 years ago I was preaching that we should look at digital code as biologists: the Darwin Among the Machines stuff. People thought that was crazy, and now it’s firmly the accepted metaphor for what’s going on. And Kevin Kelly quoted me in Wired, he asked me for my last word on what companies should do about this. And I said, “Well, they should hire more biologists.”

But what we’re missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products: the universe of Apple, the universe of Google, the universe of Facebook, as if these collectively constituted the digital universe, and as if we could only see it in human terms, asking what it does for us.

We’re missing a tremendous opportunity. We’re asleep at the switch because it’s not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that’s not just a metaphor for something else. It actually is. It’s a physical reality.

We’re still here at the big bang of this thing, and we’re not studying it enough. Who’s the cosmologist really looking at this in terms of what it might become in 10,000 years? What’s it going to be in 100 years? Here we are at the very beginning and we just may simply not be asking the right questions about what’s going on. Try looking at it from the other side, not from our side as human beings. Scientists are the people who can do that kind of thing. You can look at viruses from the point of view of a virus, not from the point of view of someone getting sick.

Very few people are looking at this digital universe in an objective way. Danny Hillis is one of the few people who is. His comment, made exactly 30 years ago in 1982, was that "memory locations are just wires turned sideways in time". That’s just so profound. That should be engraved on the wall. Because we don’t realize that there is this very different universe that does not have the same physics as our universe. It’s completely different physics. Yet, from the perspective of that universe, there is physics, and we have almost no physicists looking at it, as to what it’s like. And if we want to understand the sort of organisms that would evolve in that totally different universe, you have to understand the physics of the world in which they are in.  It’s like looking for life on another planet. Danny has that perspective. Most people say just, “well, a wire is a wire. It’s not a memory location turned sideways in time.” You have to have that sort of relativistic view of things.

We are still so close to the beginning of this explosion that we are still immersed in the initial fireball. Yet even in that short period of time, much has changed: not long ago, to transfer money electronically you had to fill out paper forms on both ends and then wait a day for your money to be transferred. Now, a dozen or so years later, most of the money in the world is moving electronically all the time.

The best example of this is what we call the flash crash of May 6th, two years ago, when suddenly, the whole system started behaving unpredictably. Large amounts of money were lost in milliseconds, and then the money came back, and we quietly (although the SEC held an investigation) swept it under the rug and just said, “well, it recovered. Things are okay.” But nobody knows what happened, or most of us don’t know.

There was a great Dutch documentary—Money and Speed: Inside the Black Box—where they spoke to someone named Eric Scott Hunsader who actually had captured the data on a much finer time scale, and there was all sorts of very interesting stuff going on. But it’s happening so quickly that it’s below what our normal trading programs are able to observe, they just aren’t accounting for those very fast things. And this could be happening all around us—not just in the world of finance. We would not necessarily even perceive it, that there’s a whole world of communication that’s not human communication. It’s machines communicating with machines. And they may be communicating money, or information that has other meaning—but if it is money, we eventually notice it. It’s just the small warm pond sitting there waiting for the spark.

It’s an unbelievably interesting time to be a digital biologist or a digital physicist, or a digital chemist. A good metaphor is chemistry. We’re starting to address code by template, rather than by numerical location—the way biological molecules do.

We’re living in a completely different world. The flash crash was an example: you could have gone out for a cup of coffee and missed the whole thing, and come back and your company lost a billion dollars and got back 999 million, while you were taking your lunch break. It just happened so fast, and it spread so quickly.

So, yes, the fear scenario is there, that some malevolent digital virus could bring down the financial system. But on the other hand, the miracle of this flash crash was not that it happened, but that it recovered so quickly. Yet, in those milliseconds, somebody made off with a lot of money. We still don’t know who that was, and maybe we don’t want to know.

The reason we’re here today (surrounded by this expanding digital universe) is because in 1936, or 1935, this oddball 23-year-old undergraduate student, Alan Turing, developed this theoretical framework to understand a problem in mathematical logic, and the way he solved that problem turned out to establish the model for all this computation. And I believe we would have arrived here, sooner or later, without Alan Turing or John von Neumann, but it was Turing who developed the one-dimensional model, and von Neumann who developed the two-dimensional implementation, for this increasingly three-dimensional digital universe in which everything we do is immersed. And so, the next breakthrough in understanding will also, I think, come from some oddball. It won’t be one of our great, known scientists. It’ll be some 22-year-old kid somewhere who makes more sense of this.

But, we’re going back to biology, and of course, it’s impossible not to talk about money, and all these other ways that this impacts our life as human beings. What I was trying to say is that this digital universe really is so different that the physics itself is different. If you want to understand what types of life-like or self-reproducing forms would develop in a universe like that, you actually want to look at the sort of physics and chemistry of how that universe is completely different from ours. An example is how not only its time scale but how time operates is completely different, so that things can be going on in that world in microseconds that suddenly have a real effect on ours.

Again, money is a very good example, because money really is a sort of a gentlemen’s agreement to agree on where the money is at a given time. Banks decide, well, this money is here today and it’s there tomorrow. And when it’s being moved around in microseconds, you can have a collapse, where suddenly you hit the bell and you don’t know where the money is. And then everybody’s saying, “Where’s the money? What happened to it?” And I think that’s what happened. And there are other recent cases where it looks like a huge amount of money just suddenly disappeared, because we lost the common agreement on where it is at an exact point in time. We can’t account for those time periods as accurately as the computers can.

One number that’s interesting, and easy to remember, is that in the year 1953, there were 53 kilobytes of high-speed memory on planet Earth. This is random-access, high-speed memory. Now you can buy those 53 kilobytes for an immeasurably small sum, a thousandth of one cent or something. If you draw the graph, it’s a very nice, clean graph. That’s sort of Moore’s Law: it keeps doubling, with a doubling time that’s surprisingly short, and no end in sight, no matter what the technology does. We’re doubling the number of bits in an extraordinarily short time.
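
The compounding Dyson describes is easy to sanity-check. A minimal sketch, starting from his 1953 figure; the 18-month doubling time is an illustrative assumption, not a number from the interview:

```python
# Compound Dyson's 1953 baseline (53 KB of random-access memory on
# Earth) forward with a fixed doubling time. The 1.5-year doubling
# period is an assumption chosen for illustration.

def memory_bytes(year, start_year=1953, start_bytes=53 * 1024,
                 doubling_years=1.5):
    """Bytes of high-speed memory, assuming a constant doubling time."""
    return start_bytes * 2 ** ((year - start_year) / doubling_years)

for year in (1953, 1983, 2012):
    print(year, f"{memory_bytes(year):.3e} bytes")
```

Under that assumed doubling time the curve crosses from kilobytes to globally vast totals within a few decades, which is the shape of the "very nice, clean graph" Dyson refers to.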

And we have never seen that. Or I mean, we have seen numbers like that, in epidemics or chain reactions, and there’s no question it’s a very interesting phenomenon. But still, it’s very hard not to just look at it from our point of view. What does it mean to us? What does it mean to my investments? What does it mean to my ability to have all the music I want on my iPhone? That kind of thing. But there’s something else going on. We’re seeing a fraction of one percent of it, and there’s this other 99.99 percent that people just aren’t looking at.

The beginning of this was driven by two problems. The problem of nuclear weapons design, and the problem of code breaking were the two drivers of the dawn of this computational universe. There were others, but those were the main ones.

What’s the driver today? You want one word? It’s advertising. And, you may think advertising is very trivial, and of no real importance, but I think it’s the driver. If you look at what most of these codes are doing, they’re trying to get the audience, trying to deliver the audience. The money is flowing as advertising.

And it is interesting that Samuel Butler imagined all this in 1863, and later in his book Erewhon. Then in 1901, before he died, he wrote a draft for “Erewhon Revisited.” In it, he called out advertising, saying that advertising would be the driving force of these machines evolving and taking over the world. Even then, at the close of nineteenth-century England, he saw advertising as the way we would grant power to the machines.

If you had to say what’s the most powerful algorithm set loose on planet earth right now? Originally, yes, it was the Monte Carlo code for doing neutron calculations. Now it’s probably the AdWords algorithm. And the two are related: if you look at the way AdWords works, it is a Monte Carlo process. It’s a sort of statistical sampling of the entire search space, and a monetizing of it, which as we know, is a brilliant piece of work. And that’s not to diminish all the other great codes out there.
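
Dyson's characterization of AdWords as "a Monte Carlo process" is his analogy; the internals are not public. What Monte Carlo itself means, though, is concrete: estimate a quantity by random sampling of a space rather than exhaustive calculation. The classic textbook illustration, estimating π from random points, shows the idea in a few lines:

```python
import random

def estimate_pi(samples=100_000, seed=42):
    """Monte Carlo estimate of pi: the fraction of random points in
    the unit square that land inside the quarter circle tends to pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * hits / samples

print(estimate_pi())  # close to 3.14159; accuracy improves with samples
```

The same logic, sampling a space too large to enumerate and aggregating the results statistically, is what connects the neutron-transport codes of the 1940s to modern large-scale search and auction systems in Dyson's telling.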

We live in a world where we measure numbers of computers in billions, and numbers of what we call servers, which are the equivalent of what in the old days would have been called mainframes. Those are in the millions, hundreds of millions.

Two of the pioneers of this—to single out only two—were John von Neumann and Alan Turing. If they were here today, Turing would be 100 and von Neumann 109. I think they would understand what’s going on almost immediately—it would take them a few minutes, if not a day, to figure it out. And they both died working on biology, and I think they would be immediately fascinated by the way biological code and digital code are now intertwined. Von Neumann’s consuming passion at the end was self-reproducing automata. And Alan Turing was interested in the question of how molecules could self-organize to produce organisms.

They would be, on the other hand, astonished that we’re still running their machines, that we don’t have different computers. We’re still running the straight von Neumann/Turing machine with no real modification. So they might not find our computers all that interesting, but they would be diving into the architecture of the Internet, and looking at it.

In both cases, they would be amazed by the direct connection between the code running on computers and the code running in biology—that all these biotech companies are directly reading and writing nucleotide sequences in and out of electronic memory, with almost no human intervention. That’s more or less completely mechanized now, so there’s direct translation, and once you translate to nucleotides, it’s a small step, a difficult step, but, an inevitable step to translate directly to proteins. And that’s Craig Venter’s world, and it’s a very, very different world when we get there.

The question of how and when humans are going to expand into the universe, the space travel question, is, in my view, almost rendered obsolete by this growth of a digitally-coded biology, because those digital organisms—maybe they don’t exist now, but as long as the system keeps going, they’re inevitable—can travel at the speed of light. They can propagate. They’re going to be so immeasurably far ahead that maybe humans will be dragged along with it.

But while our digital footprint is propagating at the speed of light, we’re having very big trouble even getting to the eleven kilometers per second it takes to get into low Earth orbit. The digital world is clearly winning on that front. And that’s for the distant future. But it changes the game of launching things, if you no longer have to launch physical objects in order to transmit life.”

George Dyson, author and historian of technology whose publications broadly cover the evolution of technology in relation to the physical environment and the direction of society, A universe of self-replicating code, Edge, Mar 26, 2012.

See also:

Jameson Dungan on information and synthetic biology
Vlatko Vedral: Decoding Reality: the universe as quantum information
Rethinking “Out of Africa”: A Conversation with Christopher Stringer (2011)
A Short Course In Synthetic Genomics, The Edge Master Class with George Church & Craig Venter (2009)
Eat Me Before I Eat You! A New Foe For Bad Bugs: A Conversation with Kary Mullis (2010)
Mapping The Neanderthal Genome. A Conversation with Svante Pääbo (2009)
“Engineering Biology”: A Conversation with Drew Endy (2008)
☞ “Life: A Gene-Centric View”: A Conversation in Munich with Craig Venter & Richard Dawkins (2008)
Ants Have Algorithms: A Talk with Ian Couzin (2008)
Life: What A Concept, The Edge Seminar, Freeman Dyson, J. Craig Venter, George Church, Dimitar Sasselov, Seth Lloyd, Robert Shapiro (2007)
Code II J. Doyne Farmer v. Charles Simonyi (1998)
Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries


The Rise of Complexity. Scientists replicate key evolutionary step in life on earth

Green cells are undergoing cell death, a cellular division of labor fostering new life.

More than 500 million years ago, single-celled organisms on Earth’s surface began forming multi-cellular clusters that ultimately became plants and animals. (…)

The yeast “evolved” into multi-cellular clusters that work together cooperatively, reproduce and adapt to their environment—in essence, they became precursors to life on Earth as it is today. (…)

The finding that the division-of-labor evolves so quickly and repeatedly in these ‘snowflake’ clusters is a big surprise. (…) The first step toward multi-cellular complexity seems to be less of an evolutionary hurdle than theory would suggest.” (…)

"To understand why the world is full of multi-cellular organisms, including humans, we need to know how one-celled organisms made the switch to living as a group, as multi-celled organisms.” (…)

"This study is the first to experimentally observe that transition," says Scheiner, "providing a look at an event that took place hundreds of millions of years ago." (…)

The scientists chose Brewer’s yeast, or Saccharomyces cerevisiae, a species of yeast used since ancient times to make bread and beer, because it is abundant in nature and grows easily.

They added it to nutrient-rich culture media and allowed the cells to grow for a day in test tubes.

Then they used a centrifuge to stratify the contents by weight.

As the mixture settled, cell clusters landed on the bottom of the tubes faster because they are heavier. The biologists removed the clusters, transferred them to fresh media, and agitated them again.

    First steps in the transition to multi-cellularity: ‘snowflake’ yeast with dead cells stained red.

Sixty cycles later, the clusters—now hundreds of cells—looked like spherical snowflakes.
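
The settle-and-transfer protocol described above can be caricatured in a toy simulation. Everything below is invented for illustration (cluster sizes, mutation range, and survival fraction are not from the study); it only shows why repeatedly keeping the fastest-settling, i.e. heaviest, lineages drives cluster size upward:

```python
import random

# Toy model of settling selection: each cycle, every lineage's cluster
# size drifts a little (heritable variation), then only the heaviest
# lineages (those that sink fastest) seed the next tube. All parameter
# values are made up for illustration.

def settling_selection(cycles=60, population=1000, keep_fraction=0.1,
                       seed=0):
    rng = random.Random(seed)
    sizes = [1.0] * population  # start as single cells
    for _ in range(cycles):
        # heritable variation in cluster size, floored at one cell
        sizes = [max(1.0, s * rng.uniform(0.95, 1.10)) for s in sizes]
        # settling/centrifuge step: heaviest clusters are recovered
        sizes.sort(reverse=True)
        survivors = sizes[:int(population * keep_fraction)]
        # regrow the next tube from the surviving lineages
        sizes = [rng.choice(survivors) for _ in range(population)]
    return sum(sizes) / len(sizes)

# Mean cluster size rises steadily over 60 cycles of this selection.
```

The point of the toy model is not realism but the logic of the experiment: with heritable size variation and selection on settling speed, large multi-celled clusters are the predictable outcome.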

Analysis showed that the clusters were not just groups of random cells that adhered to each other, but related cells that remained attached following cell division.

That was significant because it meant that they were genetically similar, which promotes cooperation. When the clusters reached a critical size, some cells died off in a process known as apoptosis to allow offspring to separate.

The offspring reproduced only after they attained the size of their parents. (…)

     Multi-cellular yeast individuals containing central dead cells, which promote reproduction.

"A cluster alone isn’t multi-cellular," William Ratcliff says. "But when cells in a cluster cooperate, make sacrifices for the common good, and adapt to change, that’s an evolutionary transition to multi-cellularity."

In order for multi-cellular organisms to form, most cells need to sacrifice their ability to reproduce, an altruistic action that favors the whole but not the individual. (…)

For example, all cells in the human body are essentially a support system that allows sperm and eggs to pass DNA along to the next generation.

Thus multi-cellularity is by its nature very cooperative.

"Some of the best competitors in nature are those that engage in cooperation, and our experiment bears that out. (…)

Evolutionary biologists have estimated that multi-cellularity evolved independently in about 25 groups.”

Scientists replicate key evolutionary step in life on earth, Physorg, Jan 16, 2012.

Evolution: The Rise of Complexity

"Let’s rewind time back about 3.5 billion years. Our beloved planet looks nothing like the lush home we know today – it is a turbulent place, still undergoing the process of formation. Land is a fluid concept, consisting of molten lava flows being created and destroyed by massive volcanoes. The air is thick with toxic gases like methane and ammonia which spew from the eruptions. Over time, water vapor collects, creating our first weather events, though on this early Earth there is no such thing as a light drizzle. Boiling hot acid rain pours down on the barren land for millions of years, slowly forming bubbling oceans and seas. Yet in this unwelcoming, violent landscape, life begins.

The creatures which dared to arise are called cyanobacteria, or blue-green algae. They were the pioneers of photosynthesis, transforming the toxic atmosphere by producing oxygen and eventually paving the way for the plants and animals of today. But what is even more incredible is that they were the first to do something extraordinary – they were the first cells to join forces and create multicellular life. (…)

Enter William Ratcliff and his colleagues at the University of Minnesota. In a PNAS paper published online this week, they show how multicellular yeast can arise in less than two months in the lab. (…)

All of their cultures went from single cells to snowflake-like clumps in less than 60 days. “Although known transitions to complex multicellularity, with clearly differentiated cell types, occurred over millions of years, we have shown that the first crucial steps in the transition from unicellularity to multicellularity can evolve remarkably quickly under appropriate selective conditions,” write the authors. These clumps weren’t just independent cells sticking together for the sake of it – they acted as rudimentary multicellular creatures. They were formed not by random cells attaching but by genetically identical cells not fully separating after division. Furthermore, there was division of labor between cells. As the groups reached a certain size, some cells underwent programmed cell death, providing places for daughter clumps to break from. Since individual cells acting as autonomous organisms would value their own survival, this intentional culling suggests that the cells acted instead in the interest of the group as a whole organism.

Given how easily multicellular creatures can arise in test tubes, it might then come as no surprise that multicellularity has arisen at least a dozen times in the history of life, independently in bacteria, plants and of course, animals, beginning the evolutionary tree that we sit atop today. Our evolutionary history is littered with leaps of complexity. While such intricacies might seem impossible, study after study has shown that even the most complex structures can arise through the meandering path of evolution. In Evolution’s Witness, Ivan Schwab explains how one of the most complex organs in our body, our eyes, evolved. (…)

Eyes are highly intricate machines that require a number of parts working together to function. But not even the labyrinthine structures in the eye present an insurmountable barrier to evolution.

Our ability to see began to evolve long before animals radiated. Visual pigments, like retinal, are found in all animal lineages, and were first harnessed by prokaryotes to respond to changes in light more than 2.5 billion years ago. But the first complex eyes can be found about 540 million years ago, during a time of rapid diversification colloquially referred to as the Cambrian Explosion. It all began when comb jellies, sponges and jellyfish, along with clonal bacteria, were the first to group photoreceptive cells and create light-sensitive ‘eyespots’. These primitive visual centers could detect light intensity, but lacked the ability to define objects. That’s not to say, though, that eyespots aren’t important – eyespots are such an asset that they arose independently in at least 40 different lineages. But it was the other invertebrate lineages that would take the simple eyespot and turn it into something incredible.

According to Schwab, the transition from eyespot to eye is quite small. “Once an eyespot is established, the ability to recognize spatial characteristics – our eye definition – takes one of two mechanisms: invagination (a pit) or evagination (a bulge).” Those pits or bulges can then be focused with any clear material forming a lens (different lineages use a wide variety of molecules for their lenses). Add more pigments or more cells, and the vision becomes sharper. Each alteration is just a slight change from the one before, a minor improvement well within bounds of evolution’s toolkit, but over time these small adjustments led to intricate complexity.

In the Cambrian, eyes were all the rage. Arthropods were visual trendsetters, creating compound eyes by using the latter approach, that of bulging, then combining many little bulges together. One of the era’s top predators, Anomalocaris, had over 16,000 lenses! So many creatures arose with eyes during the Cambrian that Andrew Parker, a visiting member of the Zoology Department at the University of Oxford, believes that the development of vision was the driver behind the evolutionary explosion. His ‘Light-Switch’ hypothesis postulates that vision opened the doors for animal innovation, allowing rapid diversification in modes and mechanisms for a wide set of ecological traits. Even if eyes didn’t spur the Cambrian explosion, their development certainly irrevocably altered the course of evolution.

                     Fossilized compound eyes from Cambrian arthropods (Lee et al. 2011)

Our eyes, as well as those of octopuses and fish, took a different approach than those of the arthropods, putting photoreceptors into a pit, thus creating what is referred to as a camera-style eye. In the fossil record, eyes seem to emerge from eyeless predecessors rapidly, in less than 5 million years. But is it really possible that an eye like ours arose so suddenly? Yes, say biologists Dan-E. Nilsson and Susanne Pelger. They calculated a pessimistic guess as to how long it would take for small changes – just 1% improvements in length, depth, etc per generation – to turn a flat eyespot into an eye like our own. Their conclusion? It would only take about 400,000 years – a geological instant.
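
The Nilsson–Pelger estimate rests on simple compounding: many 1% changes multiply into enormous total change. A sketch of that arithmetic (the 80-million-fold total transformation is used here as an illustrative order of magnitude for their eyespot-to-eye model, not a figure from the text above):

```python
import math

def steps_needed(total_fold_change, step=0.01):
    """Number of compounding 1% changes needed to reach a given total
    fold change: solve (1 + step) ** n = total_fold_change for n."""
    return math.log(total_fold_change) / math.log(1 + step)

# Even an 80-million-fold total transformation takes under 2,000
# one-percent steps. Converting steps to generations with conservative
# assumptions about selection and heritability is what stretches the
# answer to the order of 10^5 generations, still a geological instant.
print(round(steps_needed(80e6)))  # a bit over 1,800 steps
```

The takeaway is the one the paragraph above makes: when improvement compounds, "about 400,000 years" is not a surprisingly short time for an eye, it is a generous upper bound.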

How does complexity arise in the first place?

But how does complexity arise in the first place? How did cells get photoreceptors, or any of the first steps towards innovations such as vision? Well, complexity can arise a number of ways.

Each and every one of our cells is a testament to the simplest way that complexity can arise: have one simple thing combine with a different one. The powerhouses of our cells, called mitochondria, are complex organelles that are thought to have arisen in a very simple way. Some time around 3 billion years ago, certain bacteria had figured out how to create energy using electrons from oxygen, thus becoming aerobic. Our ancient ancestors thought this was quite a neat trick, and, as single cells tend to do, they ate these much smaller energy-producing bacteria. But instead of digesting their meal, our ancestors allowed the bacteria to live inside them as an endosymbiont, and so the deal was struck: our ancestor provides the fuel for the chemical reactions that the bacteria perform, and the bacteria, in turn, produces ATP for both of them. Even today we can see evidence of this early agreement – mitochondria, unlike other organelles, have their own DNA, reproduce independently of the cell’s reproduction, and are enclosed in a double membrane (the bacterium’s original membrane and the membrane capsule used by our ancestor to engulf it).

Over time the mitochondria lost other parts of their biology they didn’t need, like the ability to move around, blending into their new home as if they never lived on their own. The end result of all of this, of course, was a much more complex cell, with specialized intracellular compartments devoted to different functions: what we now refer to as a eukaryote.

Complexity can arise within a cell, too, because our molecular machinery makes mistakes. On occasion, it duplicates sections of DNA, entire genes, and even whole chromosomes, and these small changes to our genetic material can have dramatic effects. We saw how mutations can lead to a wide variety of phenotypic traits when we looked at how artificial selection has shaped dogs. These molecular accidents can even lead to complete innovation, like the various adaptations of flowering plants that I talked about in my last Evolution post. And as these innovations accumulate, species diverge, losing the ability to reproduce with each other and filling new roles in the ecosystem. While the creatures we know now might seem unfathomably intricate, they are the product of billions of years of slight variations accumulating.

Of course, while I focused this post on how complexity arose, it’s important to note that more complex doesn’t necessarily mean better. While we might notice the eye and marvel at its detail, success, from the viewpoint of an evolutionary lineage, isn’t about being the most elaborate. Evolution only leads to increases in complexity when complexity is beneficial to survival and reproduction.

Indeed, simplicity has its perks: the more simple you are, the faster you can reproduce, and thus the more offspring you can have. Many bacteria live happy simple lives, produce billions of offspring, and continue to thrive, representatives of lineages that have survived billions of years. Even complex organisms may favor less complexity – parasites, for example, are known for their loss of unnecessary traits and even whole organ systems, keeping only what they need to get inside and survive in their host. Darwin referred to them as regressive for seemingly violating the unspoken rule that more complex arises from less complex, not the other way around. But by not making body parts they don’t need, parasites conserve energy, which they can invest in other efforts like reproduction.

When we look back in an attempt to grasp evolution, it may instead be the lack of complexity, not the rise of it, that is most intriguing.”

See also:

Scientists recreate evolution of complexity using ‘molecular time travel’
Nature Has A Tendency To Reduce Complexity
Emergence and Complexity - prof. Robert Sapolsky’s lecture, Stanford University (video)


Scientists recreate evolution of complexity using ‘molecular time travel’  


Much of what living cells do is carried out by “molecular machines” – physical complexes of specialized proteins working together to carry out some biological function. (…)

In a study published early online on January 8 in Nature, a team of scientists from the University of Chicago and the University of Oregon demonstrate how just a few small, high-probability mutations increased the complexity of a molecular machine more than 800 million years ago. By biochemically resurrecting ancient genes and testing their functions in modern organisms, the researchers showed that a new component was incorporated into the machine due to selective losses of function rather than the sudden appearance of new capabilities.

"Our strategy was to use ‘molecular time travel’ to reconstruct and experimentally characterize all the proteins in this molecular machine just before and after it increased in complexity," said the study’s senior author Joe Thornton, PhD, professor of human genetics and ecology & evolution at the University of Chicago, professor of biology at the University of Oregon, and an Early Career Scientist of the Howard Hughes Medical Institute.

"By reconstructing the machine’s components as they existed in the deep past," Thornton said, "we were able to establish exactly how each protein’s function changed over time and identify the specific genetic mutations that caused the machine to become more elaborate." (…)

To understand how the ring increased in complexity, Thornton and his colleagues “resurrected” the ancestral versions of the ring proteins just before and just after the third subunit was incorporated. To do this, the researchers used a large cluster of computers to analyze the gene sequences of 139 modern-day ring proteins, tracing evolution backwards through time along the Tree of Life to identify the most likely ancestral sequences. They then used biochemical methods to synthesize those ancient genes and express them in modern yeast cells. (…)

Thornton’s research group has helped to pioneer this molecular time-travel approach for single genes; this is the first time it has been applied to all the components in a molecular machine.

The group found that the third component of the ring in Fungi originated when a gene coding for one of the subunits of the older two-protein ring was duplicated, and the daughter genes then diverged on their own evolutionary paths.

The pre-duplication ancestor turned out to be more versatile than either of its descendants: expressing the ancestral gene rescued modern yeast that otherwise failed to grow because either or both of the descendant ring protein genes had been deleted. In contrast, each resurrected gene from after the duplication could only compensate for the loss of a single ring protein gene.

The researchers concluded that the functions of the ancestral protein were partitioned among the duplicate copies, and the increase in complexity was due to complementary loss of ancestral functions rather than gaining new ones. By cleverly engineering a set of ancestral proteins fused to each other in specific orientations, the group showed that the duplicated proteins lost their capacity to interact with some of the other ring proteins. Whereas the pre-duplication ancestor could occupy five of the six possible positions within the ring, each duplicate gene lost the capacity to fill some of the slots occupied by the other, so both became obligate components for the complex to assemble and function.
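The complementary-loss logic can be sketched with sets (the slot numbers below are hypothetical, for illustration only, not the actual ring geometry):

```python
# Hypothetical occupancy sets: the ancestral protein could fill
# five of the six positions in the ring.
ancestor = {1, 2, 3, 4, 5}

# After duplication, each paralog lost the capacity to fill some of
# the slots the other still occupies (complementary loss of function).
paralog_a = {1, 2, 3}
paralog_b = {3, 4, 5}

# Together the duplicates still cover the ancestor's repertoire...
assert paralog_a | paralog_b == ancestor
# ...but neither covers it alone, so both are now obligate components.
assert paralog_a < ancestor and paralog_b < ancestor
print("complexity increased by losing functions, not gaining them")
```

Neither duplicate does anything new; the complex simply cannot assemble without both of them, which is exactly the increase in complexity the study describes.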

"It’s counterintuitive but simple: complexity increased because protein functions were lost, not gained," Thornton said. "Just as in society, complexity increases when individuals and institutions forget how to be generalists and come to depend on specialists with increasingly narrow capacities." (…)

"The mechanisms for this increase in complexity are incredibly simple, common occurrences," Thornton said. "Gene duplications happen frequently in cells, and it’s easy for errors in copying DNA to knock out a protein’s ability to interact with certain partners. It’s not as if evolution needed to happen upon some special combination of 100 mutations that created some complicated new function."

Thornton proposes that the accumulation of simple, degenerative changes over long periods of time could have created many of the complex molecular machines present in organisms today. Such a mechanism argues against the intelligent design concept of “irreducible complexity,” the claim that molecular machines are too complicated to have formed stepwise through evolution.

"I expect that when more studies like this are done, a similar dynamic will be observed for the evolution of many molecular complexes," Thornton said.

"These really aren’t like precision-engineered machines at all," he added. "They’re groups of molecules that happen to stick to each other, cobbled together during evolution by tinkering, degradation, and good luck, and preserved because they helped our ancestors to survive."

Scientists recreate evolution of complexity using ‘molecular time travel’, Physorg, Jan 8, 2012. (Illustration: Oak Ridge National Laboratory)

See also:

Nature Has A Tendency To Reduce Complexity
The Rise of Complexity. Scientists replicate key evolutionary step in life on earth
The genes are so different, the scientists argue, that giant viruses represent a fourth domain of life
Uncertainty principle: How evolution hedges its bets
Culture gene coevolution of individualism - collectivism
Genetics tag at Lapidarium notes


The Genographic Project ☞ A Landmark Study of the Human Journey 

                                       (Click image to explore Atlas of Human Journey)

Human Migration, Population Genetics, Maps, DNA.

"Where do you really come from? And how did you get to where you live today? DNA studies suggest that all humans today descend from a group of African ancestors who—about 60,000 years ago—began a remarkable journey.

The Genographic Project is seeking to chart new knowledge about the migratory history of the human species by using sophisticated laboratory and computer analysis of DNA contributed by hundreds of thousands of people from around the world. In this unprecedented real-time research effort, the Genographic Project is closing the gaps in what science knows today about humankind’s ancient migration stories.

The Genographic Project is a multi-year research initiative led by National Geographic Explorer-in-Residence Dr. Spencer Wells. Dr. Wells and a team of renowned international scientists and IBM researchers are using cutting-edge genetic and computational technologies to analyze historical patterns in DNA from participants around the world to better understand our human genetic roots.”

                                       (Click image to explore Globe of Human History)

The Genographic Project - Human Migration, Population Genetics, Maps, DNA, National Geographic

The Genographic Project - Introduction


See also:

Evolution of Language tested with genetic analysis


Human Nature. Sapolsky, Maté, Wilkinson, Gilligan discuss human behavior and the nature vs. nurture debate

In this part of Peter Joseph's documentary Zeitgeist: Moving Forward, “The discussion turns to human behavior and the nature vs. nurture debate. This portion begins with a small clip of Robert Sapolsky summing up the nature vs. nurture debate, which he essentially refers to as a “false dichotomy.” After which he states that “it is virtually impossible to understand how biology works outside the context of environment.”

The film then goes on to describe that neither nature nor nurture alone shapes human behavior; both influence it. The interviewed pundits state that even with genetic predispositions to diseases, the expression and manifestation of disease is largely determined by environmental stressors. Disease, criminal activity, and addictions are also placed in the same light. One study discussed showed that newborn babies are more likely to die if they are not touched. Another study mentioned claimed to show that stressed women were more likely to have children with addiction disorders. A reference is made to the unborn children who were in utero during the Dutch famine of 1944. The “Dutch Famine Birth Cohort Study” is mentioned to have shown that obesity and other health complications became common problems later in life due to prolonged starvation of the mother during pregnancy.

Comparisons are made by sociologists of criminals in different parts of the world, showing how cultures with different values can often have more peaceful inhabitants. An Anabaptist sect called the Hutterites is mentioned as never having reported a homicide in any of their communities. The overall conclusion is that social environment and cultural conditioning play a large part in shaping human behavior.”

Zeitgeist Moving Forward I Human Nature

Dr. Gabor Maté: “Nothing is genetically programmed. There are very rare diseases, a small handful, extremely sparsely represented in the population, that are truly genetically determined. Most complex conditions might have a predisposition that has a genetic component. But a predisposition is not the same as a predetermination. The whole search for the source of diseases in the genome was doomed to failure before anybody even thought of it, because most diseases are not genetically predetermined. Heart disease, cancer, strokes, rheumatoid conditions, autoimmune conditions in general, mental health conditions, addictions, none of them are genetically determined. (…)

That’s an epigenetic effect. “Epi” means on top of, so that the epigenetic influence is what happens environmentally to either activate or deactivate certain genes. (…)

So, the genetic argument is simply a cop-out which allows us to ignore the social and economic and political factors that, in fact, underlie many troublesome behaviors. (…)

If we wish to understand what then makes some people susceptible we actually have to look at the life experience. The old idea, although it’s old but it’s still broadly held, that addictions are due to some genetic cause is simply scientifically untenable. What the case is actually is that certain life experiences make people susceptible. Life experiences that not only shape the person’s personality and psychological needs but also their very brains in certain ways. And that process begins in utero.

It has been shown, for example that if you stress mothers during pregnancy their children are more likely to have traits that predispose them to addictions and that’s because development is shaped by the psychological and social environment. So the biology of human beings is very much affected by and programmed by the life experiences beginning in utero.”

Dr. Robert Sapolsky: “Environment does not begin at birth. Environment begins as soon as you have an environment. As soon as you are a fetus, you are subject to whatever information is coming through mom’s circulation. Hormone levels, levels of nutrients. (…) Be a Dutch Hunger Winter fetus and half a century later, everything else being equal, you are more likely to have high blood pressure, obesity or metabolic syndrome. That is environment coming in a very unexpected place. (…)”

GM: “The point about human development and specifically human brain development is that it occurs mostly under the impact of the environment and mostly after birth. (…)

The concept of Neural Darwinism simply means that the circuits that get the appropriate input from the environment will develop optimally and the ones that don’t will either not develop optimally or perhaps not at all. (…)

There is a significant way in which early experiences shape adult behavior and even and especially early experiences for which there is no recall memory. It turns out that there are two kinds of memory: there is explicit memory which is recall; this is when you can call back facts, details, episodes, circumstances. But the structure in the brain which is called the hippocampus which encodes recall memory doesn’t even begin to develop fully until a year and a half and it is not fully developed until much later, which is why hardly anybody has any recall memory prior to 18 months.

But there is another kind of memory which is called implicit memory which is, in fact, an emotional memory where the emotional impact and the interpretation the child makes of those emotional experiences are ingrained in the brain in the form of nerve circuits ready to fire without specific recall.  So to give you a clear example, people who are adopted have a lifelong sense of rejection very often. They can’t recall the adoption. They can’t recall the separation of the birth mother because there’s nothing there to recall with. But the emotional memory of separation and rejection is deeply embedded in their brains. Hence, they are much more likely to experience a sense of rejection and a great emotional upset when they perceive themselves as being rejected by other people. That’s not unique to people who are adopted but it is particularly strong in them because of this function of implicit memory. (…)

The great British child psychiatrist, D.W. Winnicott, said that fundamentally, two things can go wrong in childhood. One is when things happen that shouldn’t happen and then things that should happen but don’t. (…)

The Buddha argued that everything depends on everything else. He says ‘The one contains the many and the many contains the one.’ That you can’t understand anything in isolation from its environment, the leaf contains the sun, the sky and the earth, obviously. This has now been shown to be true, of course all around and specifically when it comes to human development. The modern scientific term for it is the ‘bio-psycho-social’ nature of human development which says that the biology of human beings depends very much on their interaction with the social and psychological environment.

And specifically, the psychiatrist and researcher Daniel Siegel at the University of California, Los Angeles (UCLA) has coined the phrase “Interpersonal Neurobiology,” which means to say that the way that our nervous system functions depends very much on our personal relationships, in the first place with the parenting caregivers, in the second place with other important attachment figures in our lives, and in the third place with our entire culture. So that you can’t separate the neurological functioning of a human being from the environment in which he or she grew up and continues to exist in. And this is true throughout the life cycle. It’s particularly true when you are dependent and helpless, when your brain is developing, but it’s true even in adults and even at the end of life. (…)”

Dr. James Gilligan: “Violence is not universal. It is not symmetrically distributed throughout the human race. There is a huge variation in the amount of violence in different societies. There are some societies that have virtually no violence. There are others that destroy themselves. Some of the Anabaptist religious groups that are complete strict pacifists like the Amish, the Mennonites, the Hutterites, among some of these groups, the Hutterites - there are no recorded cases of homicide.

During our major wars, like World War II where people were being drafted they would refuse to serve in the military. They would go to prison rather than serve in the military. In the Kibbutzim in Israel the level of violence is so low that the criminal courts there will often send violent offenders - people who have committed crimes - to live on the Kibbutzim in order to learn how to live a non-violent life. Because that’s the way people live there. 

RS: So, we are amply shaped by society. Our societies, in the broader sense, including our theological, our metaphysical, our linguistic influences, etc, our societies help shape us as to whether or not we think life is basically about sin or about beauty; whether the afterlife will carry a price for how we live our lives or if it’s irrelevant. (…)

So, this brings us to a total impossible juncture which is to try to make sense in perspective science as to what that nature is of human nature. You know, on a certain level the nature of our nature is not to be particularly constrained by our nature. We come up with more social variability than any species out there. More systems of belief, of styles, of family structures, of ways of raising children. The capacity for variety that we have is extraordinary. (…)

GM: In a society which is predicated on competition and really, very often, the ruthless exploitation of one human being by another, the profiteering off of other people’s problems and very often the creation of problems for the purpose of profiteering, the ruling ideology will very often justify that behavior by appeals to some fundamental and unalterable human nature. So the myth in our society is that people are competitive by nature and that they are individualistic and that they’re selfish. The real reality is quite the opposite. We have certain human needs. The only way that you can talk about human nature concretely is by recognizing that there are certain human needs. We have a human need for companionship and for close contact, to be loved, to be attached to, to be accepted, to be seen, to be received for who we are. If those needs are met, we develop into people who are compassionate and cooperative and who have empathy for other people.

So the opposite, that we often see in our society, is in fact, a distortion of human nature precisely because so few people have their needs met. So, yes, you can talk about human nature but only in the sense of basic human needs that are instinctively evoked or I should say certain human needs that lead to certain traits if they are met and a different set of traits if they are denied.”

— Zeitgeist: Moving Forward - full transcript

Robert Sapolsky - American scientist and author. He is currently professor of Biological Sciences, and Professor of Neurology and Neurological Sciences and, by courtesy, Neurosurgery, at Stanford University.

Gabor Maté - Hungarian-born Canadian physician who specializes in the study and treatment of addiction and is also widely recognized for his unique perspective on Attention Deficit Disorder.

Richard Wilkinson - British researcher in social inequalities in health and the social determinants of health. He is Professor Emeritus of social epidemiology at the University of Nottingham.

James Gilligan - American psychiatrist and author, best known for his series of books entitled Violence, where he draws on 25 years of work in the American prison system to describe the motivation and causes behind violent behaviour. He now lectures at the Department of Psychiatry, New York University.

See also:

Zeitgeist: Moving Forward by Peter Joseph, 2011 (full documentary) (transcript)


Vlatko Vedral: Decoding Reality: the universe as quantum information


Everything in our reality is made up of information. From the evolution of life to the dynamics of social ordering to the functioning of quantum computers, they can all be understood in terms of bits of information. We saw that in order to capture all the latest elements of reality we needed to extend Claude Shannon's original notion of information, and upgrade his notion from bits to quantum bits, or qubits. Qubits incorporate the fact that in quantum theory outcomes to our measurements are intrinsically random.

But where do these qubits come from? Quantum theory allows us to answer this question; but the answer is not quite what we expected. It suggests that these qubits come from nowhere! There is no prior information required in order for information to exist. Information can be created from emptiness. In presenting a solution to the sticky question of ‘law without law’ we find that information breaks the infinite chain of regression in which we always seem to need a more fundamental law to explain the current one. This feature of information, ultimately coming from our understanding of quantum theory, is what distinguishes information from any other concept that could potentially unify our view of reality, such as matter or energy. Information is, in fact, unique in this respect. (…) p. 215

This book will argue that information (and not matter or energy or love) is the building block on which everything is constructed. Information is far more fundamental than matter or energy because it can be successfully applied to both macroscopic interactions, such as economic and social phenomena, and, as I will argue, information can also be used to explain the origin and behaviour of microscopic interactions such as energy and matter.

The question of everything from nothing, creation ex nihilo

As pointed out by David Deutsch and John Archibald Wheeler, however, whatever candidate is proposed for the fundamental building block of the Universe, it still needs to explain its ‘own’ ultimate origin too. In other words, the question of everything from nothing, creation ex nihilo, is key. So if, as I claim, information is this common thread, the question of creation ex nihilo reduces to explaining how some information arises out of no information. Not only will I show how this is possible, I will also argue that information, in contrast to matter and energy, is the only concept that we currently have that can explain its own origin. (…) p.10

This desire to compress information and the natural increase of information in the Universe may initially seem like independent processes, but as we will explore in much more detail later there may be a connection. As we compress and find all-encompassing principles describing our reality, it is these principles that then indicate how much more information there is in our Universe to find. In the same way that Feuerbach states that ‘Man first creates God, and then God creates Man’, we can say that we compress information into laws from which we construct our reality, and this reality then tells us how to further compress information. (…)

I believe this view of reality being defined through information compression is closer to the spirit of science as well as its practice. (…) It is also closer to the scientific meaning of information in that information reflects the degree of uncertainty in our knowledge of a system. (…)

Information is the underlying thread that connects all phenomena we see around us as well as explaining their origin. Our reality is ultimately made up of information. (…) p. 12-13

Information is the language Nature uses to convey its messages and this information comes in discrete units. We use these units to construct our reality. (…) p. 23

Do we define information as a quantity which we can use to do something useful or could we still call it information even if it wasn’t of any use to us? Is information objective or is it subjective? For example, would the same message or piece of news carry the same information for two different people? Is information inherently human or can animals also process information? Going even beyond this, is it a good thing to have a lot of information and to be able to process it quickly or can too much information drown you? These questions all add some colour and vigour to the challenge of achieving an agreed and acceptable definition of information.

The second trouble with information is that, once defined in a rigorous manner, it is measured in a way that is not easy to convey without mathematics. You may be very surprised to hear that even scientists balk at the thought of yet another equation. (…) p. 26-27

By stripping away all irrelevant details we can distil the essence of what information means. (…) Unsurprisingly, we find the basis of our modern concept of information in Ancient Greece. The Ancient Greeks laid the groundwork for its definition when they suggested that the information content of an event somehow depends only on how probable this event really is. Philosophers like Aristotle reasoned that the more surprised we are by an event the more information the event carries. By this logic, having a clear sunny autumn day in England would be a very surprising event, whilst experiencing drizzle randomly throughout this period would not shock anyone. This is because it is very likely, that is, the probability is high, that it will rain in England at any given instant of time. From this we can conclude that less likely events, the ones for which the probability of happening is very small, are those that surprise us more and therefore are the ones that carry more information.

Following this logic, we conclude that information has to be inversely proportional to probability, i.e. events with smaller probability carry more information. In this way, information is reduced to only probabilities and in turn probabilities can be given objective meaning independent of human interpretation or anything else (meaning that whilst you may not like the fact that it rains a lot in England, there is simply nothing you can do to change its probability of occurrence). (…) p. 29
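The inverse relation between probability and information that Vedral describes is Shannon’s surprisal, −log₂ p. A minimal sketch (the weather probabilities are made up for illustration):

```python
import math

def surprisal(p: float) -> float:
    """Information content, in bits, of an event with probability p."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

# A near-certain event (drizzle in England, say p = 0.9) is barely news...
print(surprisal(0.9))  # ≈ 0.15 bits
# ...while a rare one (a clear sunny day, say p = 0.1) carries far more.
print(surprisal(0.1))  # ≈ 3.32 bits
```

A certain event (p = 1) carries exactly zero bits: no surprise, no information.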

As we saw in the initial chapter on creation ex nihilo, the fundamental question is why there is any information in the first place. For the replication of life we saw that we needed four main components, the protein synthesizer machine [a universal constructing machine], M, the DNA Xerox copier X, the enzymes which act as controllers, C, and the DNA information set [the set of instructions required to construct these three], I. (…) With these it is possible to then create an entity that self-replicates indefinitely.

A macromolecule responsible for storing the instructions, I, in living systems is called DNA. DNA has four bases: A, C, T, and G. When DNA replicates inside our cells, each base has a specific pairing partner. There is huge redundancy in how bases are combined to form amino acid chains. This is a form of error correction. The digital encoding mechanism of DNA ensures that the message gets propagated with high fidelity. Random mutations aided by natural selection necessarily lead to an increase in complexity of life. 

The process of creating biological information from no prior biological information is another example of the question of creation ex nihilo. Natural selection does not tell us where biological information comes from – it just gives us a framework of how it propagates. (…) p. 54-55
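The pairing and redundancy Vedral describes can be sketched in a few lines (only the four leucine codons are shown here; the full genetic code maps 64 codons onto just 20 amino acids plus stop signals):

```python
# Watson-Crick pairing: A binds T, C binds G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary strand, the basis of high-fidelity copying."""
    return "".join(PAIR[base] for base in strand)

# Redundancy as error correction: several codons encode the same
# amino acid (a tiny excerpt - the four CTx codons all give leucine).
CODONS = {"CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu"}

print(complement("ACTG"))          # TGAC
print(len(set(CODONS.values())))   # 1 - four codons, one amino acid
```

A point mutation in the third base of a CTx codon leaves the protein unchanged, which is exactly the kind of redundancy that lets the message propagate with high fidelity.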

My argument is that life paradoxically ends not when it underdoses on fuel, but, more fundamentally, when it overdoses on ‘information’ (i.e. when it reaches a saturation point and can no longer process any further information). We have all experienced instances where we feel we cannot absorb any more information. (…)

The Second Law of thermodynamics tells us that in physical terms, a system reaches its death when it reaches its maximum disorder (i.e. it contains as much information as it can handle). This is sometimes (cheerfully) referred to as thermal death, which could really more appropriately be called information overload. This state of maximum disorder is when life effectively becomes a part of the rest of the lifeless Universe. Life no longer has any capacity to evolve and remains entirely at the mercy of the environment. (…) p. 58-59

Physical entropy, which describes how disordered a system is, tends to increase with time. This is known as the Second Law of thermodynamics. The increasing complexity of life is driven by the overall increase in disorder in the Universe. (…) p. 76

Mutual information

This concept is very important in understanding a diverse number of phenomena in Nature and will be the key when we explain the origin of structure in any society.

Mutual information is the formal word used to describe the situation when two (or more) events share information about one another. Having mutual information between events means that they are no longer independent; one event has something to tell you about the other. For example, when someone asks if you’d like a drink in a bar, how many times have you replied ‘I’ll have one if you have one’? This statement means that you are immediately correlating your actions with the actions of the person offering you a drink. If they have a drink, so will you; if they don’t, neither will you. Your choice to drink-or-not-to-drink is completely tied to theirs and hence, in information theory parlance, you both have maximum mutual information.

A little more formally, the whole presence of mutual information can be phrased as an inference indicator. Two things have mutual information if by looking at just one of them you can infer something about one of the properties of the other one. So, in the above example, if I see that you have a drink in front of you that means logically that the person offering you a drink also has a glass in front of them (given that you only drink when the person next to you drinks). (…)
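The drink-for-a-drink example can be made quantitative. A small sketch computing mutual information I(X;Y) from a joint probability table (the drinker scenarios are illustrative):

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint probability table {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p   # marginal distribution of X
        py[y] = py.get(y, 0) + p   # marginal distribution of Y
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# "I'll have one if you have one": both drink or neither does.
lockstep = {("drink", "drink"): 0.5, ("abstain", "abstain"): 0.5}
# Independent drinkers: each decides with a private coin flip.
independent = {(a, b): 0.25 for a in ("drink", "abstain")
                            for b in ("drink", "abstain")}

print(mutual_information(lockstep))     # 1.0 bit - the maximum for a binary choice
print(mutual_information(independent))  # 0.0 bits - no shared information
```

Seeing one lockstep drinker’s glass tells you everything about the other’s, which is what "maximum mutual information" means here.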

Whenever we discuss mutual information we are really asking how much information an object/person/idea has about another object/person/idea. (…)

When it comes to DNA, its molecules share information about the protein they encode. Different strands of DNA share information about each other as well (we know that A only binds to T and C only binds to G). Furthermore the DNA molecules of different people also share information about one another (a father and a son, for example, share half of their DNA genetic material) and the DNA is itself sharing information with the environment – in that the environment determines through natural selection how the DNA evolves. (…)

One of the phenomena we will try to understand here, using mutual information, is what we call ‘globalization’, or the increasing interconnectedness of disparate societies. (…)

Before we delve further into social phenomena, I need to explain an important concept in physics called a phase transition. Stated somewhat loosely, phase transitions occur in a system when the information shared between the individual constituents becomes large (so for a gas in a box, for an iron rod in a magnetic field, and for a copper wire connected into an electric circuit, all their constituents share some degree of mutual information).

A high degree of mutual information often leads to a fundamentally different behaviour, although the individual constituents are still the same. To elaborate this point, the individual constituents are not affected on an individual basis, but as a group they exhibit entirely different behaviour. The key is how the individual constituents relate to one another and create a group dynamic. This is captured by the phrase ‘more is different’, by the physicist Philip Anderson, who contributed a great deal to the subject, culminating in his Nobel Prize in 1977.

A common example of a group dynamic is the effect we observe when boiling or freezing water (i.e. conversion of a liquid to a gas or conversion of a liquid to a solid). These extreme and visible changes of structures and behaviour are known as phase transitions. When water freezes, the phase transition occurs as the water molecules become more tightly correlated and these correlations manifest themselves in stronger molecular bonds and a more solid structure. The formation of societies and significant changes in every society – such as a revolution or a civil war or the attainment of democracy – can, in fact, be better understood using the language of phase transitions.

I now present one particular example that will explain phase transitions in more detail. This example will then act as our model to explain various social phenomena that we will tackle later in the chapter. Let us imagine a simple solid, made up of a myriad of atoms (billions and billions of them). Atoms usually interact with each other, although these interactions hardly ever stretch beyond their nearest neighbours. So atoms next to each other will feel each other’s presence, while those further apart will typically never directly exchange any information.

You might now expect that, as a result of this ‘nearest neighbour’ interaction, only atoms next to each other can share information, and that sharing is impossible where there is no direct interaction. Though this may sound logical, it is in fact entirely incorrect. Think of a whip: you shake one end and this directly influences the speed and range at which the other end moves. You are transferring movement using the interconnectedness of atoms in the whip. Information can be shared between distant atoms because one atom interacts with its neighbours, but the neighbours also interact with their neighbours, and so on. This concept can be explained more elegantly through the idea of ‘six degrees of separation’. You often see it claimed that each person on this planet is at most six people away from any other person. (…) p. 94-97

Why is this networking between people important? You might argue that decisions made by society are to a high degree controlled by individuals – who ultimately think for themselves. It is clear, however, that this thinking is based on the common information shared between individuals. It is this interaction between individuals that is responsible for the different structures within society as well as society itself. (…) In this case, the information shared between individuals becomes much more important. So how do all people agree to make a decision, if they only interact locally, i.e. with a very limited number of neighbours?

In order to understand how local correlations can lead to the establishment of structures within society, let us return to the example of a solid. Solids are regular arrays of atoms. This time, however, rather than talking about how water becomes ice, let’s consider how a solid becomes a magnet. Every atom in a solid can be thought of as a little magnet on its own. Initially these magnets are completely independent of one another and there is no common north/south alignment – meaning that they are all pointing in random directions. The whole solid – the whole collection of atoms – would then be a random collection of magnets and would not be magnetized as a whole (this is known as a paramagnet). All the random little atomic magnets would simply cancel each other out in effect and there would be no net magnetic field.

However, if the atoms interact, then they can affect each other’s state, i.e. they can cause their neighbours to line up with them. Now through the same principle as six degrees of separation, each atom affects the other atoms it is connected to, and in turn these affect their own neighbours, eventually correlating all the atoms in the solid. If the interaction is stronger than the noise due to the external temperature, then all magnets will eventually align in the same direction and the solid as a whole generates a net magnetic field and hence becomes magnetic! All atoms now behave coherently in tune, just like one big magnet. The point at which all atoms ‘spontaneously’ align is known as the point of phase transition, i.e. the point at which a solid becomes a magnet. (…)
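The alignment process just described can be caricatured in a few lines of code. This is a zero-temperature toy version of the dynamics, not the full statistical mechanics of the transition: each little atomic magnet (spin) on a ring flips only when both neighbours oppose it, and purely local interactions drive up the neighbour-to-neighbour agreement:

```python
import random

def zero_temperature_sweeps(spins, sweeps):
    """A spin flips only when flipping lowers the energy, i.e. when it
    disagrees with both of its neighbours on the ring."""
    n = len(spins)
    for _ in range(sweeps):
        for i in range(n):
            left, right = spins[i - 1], spins[(i + 1) % n]
            if left == right != spins[i]:
                spins[i] = left
    return spins

def neighbour_agreement(spins):
    """Fraction of neighbouring pairs pointing the same way: a crude
    stand-in for the correlations the text describes."""
    n = len(spins)
    return sum(spins[i] == spins[(i + 1) % n] for i in range(n)) / n

random.seed(1)
spins = [random.choice([-1, +1]) for _ in range(60)]   # random paramagnet
before = neighbour_agreement(spins)
zero_temperature_sweeps(spins, 100)
after = neighbour_agreement(spins)
print(f"neighbour agreement before: {before:.2f}, after: {after:.2f}")
```

Each flip turns two disagreeing pairs into agreeing ones, so agreement can only grow; in the real Ising model a temperature term competes with this tendency, and the phase transition is the point where alignment wins.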

You may object that atoms are simple systems compared to humans. After all, humans can think, feel and get angry, while atoms are not alive and their range of behaviour is far simpler. But this is not the point! The point is that we are only focusing on one relevant property of humans (or atoms) here. Atoms are not all that simple either, but we are choosing to make them so by looking only at their magnetic properties. Humans are much more complicated still, but now we only want to know about their political preference, and this can be quite simple in practice. (…)

Small-world network

This unevenness in the number of contacts leads to a very important model in which there is a great deal of interaction with people close by and then, every once in a while, a long-distance interaction with someone far away. This is called a ‘small-world network’ and is an excellent model for how and why disease propagates rapidly in our world. When we get ill, disease usually spreads quickly to our closest neighbours. Then it is enough for just one of those neighbours to take a long-distance flight for the virus to spread to distant places. And this is why we are very worried about swine flu and all sorts of other potential viruses that can kill humans.
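The effect of the occasional long-distance contact is easy to demonstrate numerically: a handful of random shortcuts added to a purely local ring of contacts collapses the average number of steps between any two people. A small sketch using only the standard library (all parameters are illustrative):

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Each person linked to their k nearest neighbours on each side."""
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            j = (i + step) % n
            edges[i].add(j)
            edges[j].add(i)
    return edges

def average_path_length(edges):
    """Mean shortest-path length over all ordered pairs, via BFS."""
    n = len(edges)
    total = 0
    for source in edges:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

random.seed(7)
n = 200
contacts = ring_lattice(n, 2)             # purely local contacts
base_len = average_path_length(contacts)

for _ in range(10):                       # ten long-distance 'flights'
    a, b = random.sample(range(n), 2)
    contacts[a].add(b)
    contacts[b].add(a)
short_len = average_path_length(contacts)

print(f"local only: {base_len:.1f} steps, with shortcuts: {short_len:.1f}")
```

This is the intuition behind the Watts–Strogatz small-world model: almost all links stay local, yet a few shortcuts make the whole network feel small, which is exactly why one flight can seed an epidemic far away.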

Let us now consider why some people believe – rightly or wrongly – that the information revolution has transformed, and will transform, our society more than any other revolution in the past – such as the industrial revolution discussed in earlier chapters. Some sociologists, such as Manuel Castells, believe that the Internet will effect much more profound transformations in our society than anything previously seen in history. His logic is based on the above idea of phase transitions, though, being a sociologist, he may not be interpreting them in quite the same way as a physicist does mathematically.

To explain, we can think of early societies as very ‘local’ in nature. One tribe exists here, another over there, but with very little communication between them. Even towards the end of the nineteenth century, transfer of ideas and communication in general were still very slow. So for a long time humans have lived in societies where communication was very short range. In physics, this would mean that abrupt changes are impossible; societies have other complexities, so I would say that ‘fundamental change is unlikely’ rather than ‘impossible’. Very recently, through the increasing availability of technology, we can travel far and wide, and through the Internet we can learn from and communicate with virtually anyone in the world.

Early societies were like the Ising model – the locally interacting magnet described above – while later ones are more like small-world networks. Increasingly, however, we are approaching the stage where everyone can and does interact with everyone else. And this is exactly when phase transitions become increasingly likely. Money (and even labour) can travel from one end of the globe to another in a matter of seconds or even faster. This, of course, has an effect on all elements of our society.

Analysing social structures in terms of information theory can frequently reveal very counterintuitive features. This is why it is important to be familiar with the language of information theory: without a formalized framework, some of the most startling and beautiful effects are much harder to understand in terms of their root causes. (…) p. 98

Universe as a quantum computer

Konrad Zuse, the German engineer who built some of the first programmable computers during World War II, was the first to view the Universe as a computer. (…) The problem, however, is that all these models assume that the Universe is a classical computer. By now, however, we know that the Universe should be understood as a quantum computer.

Our reality evolves because every once in a while we find that we need to edit part of the program that describes reality. We may find that this piece of the program, based on a certain model, is refuted (the underlying model is found to be inaccurate), and hence the program needs to be updated. Refuting a model and changing a part of the program is, as we saw, crucial to changing reality itself because refutations carry much more information than simply confirming a model. (…) p. 192

We can construct our whole reality in this way by looking at it in terms of two distinct but inter-related arrows of knowledge. We have the spontaneous creation of mutual information in the Universe as events unfold, without any prior cause. This kicks off the interplay between the two arrows. On the one hand, through our observations and a series of conjectures and refutations, we compress the information in the Universe into a set of natural laws. These laws are the shortest programs to represent all our observations. On the other hand, we run these programs to generate our picture of reality. It is this picture that then tells us what is, and isn’t, possible to accomplish, in other words, what our limitations are.
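The claim that natural laws are ‘the shortest programs to represent all our observations’ echoes the idea of Kolmogorov complexity. True Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives a rough feel for it: data generated by a simple rule compresses to almost nothing, while patternless data admits no shorter description. A hedged sketch, not anything from the book itself:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length after compression: a crude stand-in for the 'shortest
    program' that reproduces the data."""
    return len(zlib.compress(data, level=9))

# Observations obeying a simple 'law' compress to almost nothing...
lawful = bytes(i % 7 for i in range(10_000))

# ...while patternless observations barely compress at all.
random.seed(0)
patternless = bytes(random.randrange(256) for _ in range(10_000))

print(compressed_size(lawful), compressed_size(patternless))
```

In the book's terms, the first arrow of knowledge is the compressor, and running the short program back out is the second arrow.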

The Universe starts empty but potentially with a huge amount of information. The key event that gives the Universe some direction is the first act of ‘symmetry breaking’, the first cut of the sculptor. This act, which we consider completely random, i.e. without any prior cause, simply decides why one tiny aspect of the Universe is one way rather than another. This first event sets in motion a chain reaction in which, once one rule has been decided, the rest of the Universe needs to proceed in a consistent manner. (…)

This is where the first arrow of knowledge begins. We compress the spontaneous, yet consistent information in the Universe, into a set of natural laws that continuously evolve as we test and discard the erroneous ones. Just as man evolved through a compression of biological information (a series of optimizations for the changing environment), our understanding of the Universe (our reality) has also evolved as we better synthesize and compress the information that we are presented with into more and more accurate laws of Nature. This is how the laws of Nature emerge, and these are the physical, biological, and social principles that our knowledge is based on.

The second arrow of knowledge is the flip-side to the first arrow. Once we have the laws of Nature, we explore their meaning in order to define our reality, in terms of what is and isn’t possible within it. It is a necessary truth that whatever our reality, it is based exclusively on our understanding of these laws. For example, if we have no knowledge of natural selection, all of the species look independently created and without any obvious connection. Of course this is all dynamic in that when we find an event that doesn’t fit our description of reality, then we go back and change the laws, so that the subsequently generated reality also explains this event.

The basis for these two arrows is the darkness of reality, a void from which they were created and within which they operate. Following the first arrow, we ultimately arrive at nothing (ultimately there is no reality, no law without law). The second arrow then lifts us from this nothingness and generates a picture of reality as an interconnected whole.

So our two arrows seem to point in opposite directions to one another. The first compresses the information available into succinct knowledge and the second decompresses the resulting laws into a colourful picture of reality. In this sense our whole reality is encoded into the set of natural laws. We already said that there was an overall direction for information flow in the Universe, i.e. that entropy (disorder) in the Universe can only increase. This gives us a well defined directionality to the Universe, commonly known as the ‘arrow of time’. (…)

The first arrow of knowledge clearly acts like a Maxwell’s demon. It constantly combats the arrow of time and tirelessly compresses disorder into something more meaningful. It connects seemingly random and causeless events into a string of mutually inter-related facts. The second arrow of knowledge, however, acts in the opposite direction, increasing the disorder. By changing our view of reality it instructs us that there are more actions we can take within the new reality than we could with the previous, more limited view.

Within us, within all objects in the Universe, lie these two opposing tendencies. So, is this a constant struggle between new information and hence disorder being created in the Universe, and our efforts to order this into a small set of rules? If so, is this a losing battle? (…)

Scientific knowledge proceeds via a dialogue with Nature. We ask ‘yes-no’ questions through our observations of various phenomena.

Information in this way is created out of no information. By taking a stab in the dark we set a marker which we can then use to refine our understanding by asking such ‘yes-no’ questions. (…)

The whole of our reality emerges by first using the conjectures and refutations to compress observations and then from this compression we deduce what is and isn’t possible. (…) p. 211-214

Viewing reality as information leads us to recognize two competing trends in its evolution. These trends, or let’s call them arrows, work hand in hand, but point in opposite directions. The first arrow orders the world against the Second Law of thermodynamics and compresses all the spontaneously generated information in the Universe into a set of well-defined principles. The second arrow then generates our view of reality from these principles.

It is clear that the more efficient we are in compressing all the spontaneously generated information, the faster we can expand our reality of what is and isn’t possible. But without the second arrow, without an elementary view of our reality, we cannot even begin to describe the Universe. We cannot access parts of the Universe that have no corresponding basis in our reality. After all, whatever is outside our reality is unknown to us. (…)

By exploring our reality we better understand how to look for and compress the information that the Universe produces. This in turn then affects our reality. Everything that we have understood, every piece of knowledge, has been acquired by feeding these two arrows into one another. Whether it is biological propagation of life, astrophysics, economics, or quantum mechanics, these are all a consequence of our constant re-evaluation of reality. So it’s clear that not only does the second arrow depend on the first, it is natural that the first arrow also depends on the second. (…)

We compress information to generate our laws of Nature, and then use these laws of Nature to generate more information, which then gets compressed back into upgraded laws of Nature.

The dynamics of the two arrows is driven by our desire to understand the Universe. As we drill deeper and deeper into our reality we expect to find a better understanding of the Universe. We believe that the Universe to some degree behaves independently of us and the Second Law tells us that the amount of information in the Universe is increasing. But what if with the second arrow, which generates our view of reality, we can affect parts of the Universe and create new information? In other words, through our existence could we affect the Universe within which we exist? This would make the information generated by us a part of the new information the Second Law talks about.

A scenario like this presents no conceptual problem within our picture. This new information can also be captured by the first arrow, as it fights, through conjectures and refutations, to incorporate any new information into the basic laws of Nature. However, could it be that there is no other information in the Universe than that generated by us as we create our own reality?

This leads us to a startling possibility. If indeed the randomness in the Universe, as demonstrated by quantum mechanics, is a consequence of our generation of reality then it is as if we create our own destiny. It is as if we exist within a simulation, where there is a program that is generating us and everything that we see around us. Think back to the movie The Matrix, where Keanu Reeves lives in a simulation until he is offered a way out, a way back into reality. If the randomness in the Universe is due to our own creation of reality, then there is no way out for us. This is because, in the end, we are creators of our own simulation. In such a scenario, Reeves would wake up in his reality only to find himself sitting at the desk programming his own simulation. This closed loop was echoed by John Wheeler who said: ‘physics gives rise to observer-participancy; observer-participancy gives rise to information; information gives rise to physics.’

But whether reality is self-simulating (and hence there is no Universe required outside of it) is, by definition, something that we will never know. What we can say, following the logic presented in this book, is that outside of our reality there is no additional description of the Universe that we can understand, there is just emptiness. This means that there is no scope for the ultimate law or supernatural being – given that both of these would exist outside of our reality and in the darkness. Within our reality everything exists through an interconnected web of relationships and the building blocks of this web are bits of information. We process, synthesize, and observe this information in order to construct the reality around us. As information spontaneously emerges from the emptiness we take this into account to update our view of reality. The laws of Nature are information about information and outside of it there is just darkness. This is the gateway to understanding reality.

And I finish with a quote from the Tao Te Ching, which some 2500 years earlier, seems to have beaten me to the punch-line:

The Tao that can be told is not the eternal Tao.
The name that can be named is not the eternal name.
The nameless is the beginning of heaven and earth.
The named is the mother of the ten thousand things.
Ever desireless, one can see the mystery.
Ever desiring, one sees the manifestations.
These two spring from the same source but differ in name; this
appears as darkness.
Darkness within darkness.
The gate to all mystery.

p. 215-218

Vlatko Vedral, Professor of Physics at the University of Oxford and CQT (Centre for Quantum Technologies) at the National University of Singapore, Decoding Reality: the universe as quantum information, Oxford University Press, 2010 (Illustration source)

Vlatko Vedral: Everything is information

Physicist Vlatko Vedral explains to Aleks Krotoski why he believes the fundamental stuff of the universe is information and how he hopes that one day everything will be explained in this way.

"In Decoding Reality, Vedral argues that we should regard the entire universe as a gigantic quantum computer. Wacky as that may sound, it is backed up by hard science. The laws of physics show that it is not only possible for electrons to store and flip bits: it is mandatory. For more than a decade, quantum-information scientists have been working to determine just how the universe processes information at the most microscopic scale." — The universe is a quantum computer, New Scientist, 22 March 2010

See also:

☞ Vlatko Vedral, Living in a Quantum World (pdf), Scientific American, 2011
☞ Mark Buchanan, Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, New Scientist, 05 Sep 2011
The Concept of Laws. The special status of the laws of mathematics and physics
David Deutsch: A new way to explain explanation, TED
Stephen Hawking on the universe’s origin
The Relativity of Truth - a brief résumé
☞ Vlatko Vedral, Information and Physics, University of Oxford, National University of Singapore (2012)


New evidence for innate knowledge - Why we all share similar perceptions of physical reality

“Do we have innate knowledge? The team working on the Blue Brain Project at EPFL (Ecole Polytechnique Fédérale de Lausanne), led by Professor Henry Markram, is finding proof that this is the case. They’ve discovered that neurons make connections independently of a subject’s experience. (…)

The researchers were able to demonstrate that small clusters of pyramidal neurons in the neocortex interconnect according to a set of immutable and relatively simple rules. (…)

Acquired knowledge, such as memory, would involve combining these elementary building blocks at a higher level of the system. “This could explain why we all share similar perceptions of physical reality, while our memories reflect our individual experience” (…)

The neuronal connectivity must in some way have been programmed in advance. (…) Some of our fundamental representations or basic knowledge is inscribed in our genes.”


Richard Dawkins’ formula

Richard Dawkins, Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, The Greatest Show on Earth, answering the question ‘What is your formula? Your equation, algorithm?’ in Formulae for the 21st century, Edge, Oct 13, 2007


Susan Blackmore on memes and “temes”     

                                              (Illustration credit: Collective Memes)

”[Darwin] had no concept of the idea of an algorithm. But that’s what he described in that book, and this is what we now know as the evolutionary algorithm. The principle is you just need those three things — variation, selection and heredity. And as Dan Dennett puts it, if you have those then you must get evolution. Or design out of chaos without the aid of mind. (…)

The principle here applies to anything that is copied with variation and selection. We’re so used to thinking in terms of biology, we think about genes this way. Darwin didn’t of course, he didn’t know about genes. He talked mostly about animals and plants, but also about languages evolving and becoming extinct. But the principle of universal Darwinism is that any information that is varied and selected will produce design.

And this is what Richard Dawkins was on about in his 1976 bestseller, “The Selfish Gene.” The information that is copied, he called the replicator. It selfishly copies. (…)

Look around you, here will do, in this room. All around us, still clumsily drifting about in its primeval soup of culture, is another replicator: information that we copy from person to person by imitation, by language, by talking, by telling stories, by wearing clothes, by doing things. This is information copied with variation and selection. This is a design process going on. He wanted a name for the new replicator, so he took the Greek word mimeme, which means that which is imitated. (…)

There are two replicators now on this planet. From the moment that our ancestors, perhaps two and a half million years ago or so, began imitating, there was a new copying process: copying with variation and selection. A new replicator was let loose, and right from the start it could never be that the human beings who let loose this new creature could copy only the useful, beautiful, true things, and not the other things. While their brains were gaining an advantage from being able to copy — lighting fires, keeping fires going, new techniques of hunting, these kinds of things — inevitably they were also copying putting feathers in their hair, or wearing strange clothes, or painting their faces, or whatever.

So you get an arms race between the genes which are trying to get the humans to have small economical brains and not waste their time copying all this stuff, and the memes themselves, like the sounds that people made and copied — in other words, what turned out to be language — competing to get the brains to get bigger and bigger. So the big brain on this theory is driven by the memes. (…)

Language is a parasite that we’ve adapted to, not something that was there originally for our genes, on this view. And like most parasites, it can start out dangerous, but then it co-evolves and adapts, and we end up with a symbiotic relationship with this new parasite.

And so from our perspective, we don’t realize that that’s how it began. So this is a view of what humans are. All other species on this planet are gene machines only, they don’t imitate at all well, hardly at all. We alone are gene machines and meme machines as well. The memes took a gene machine and turned it into a meme machine.

But that’s not all. We have a new kind of meme now. I’ve been wondering for a long time, since I’ve been thinking about memes a lot, is there a difference between the memes that we copy — the words we speak to each other, the gestures we copy, the human things — and all these technological things around us? I have always, until now, called them all memes, but I do honestly think now we need a new word for technological memes.

Let’s call them technomemes or temes. Because the processes are getting different. We began, perhaps 5,000 years ago, with writing. We put the storage of memes out there on a clay tablet, but in order to get true temes and true teme machines, you need to get the variation, the selection and the copying, all done outside of humans. And we’re getting there. We’re at this extraordinary point where we’re nearly there, that there are machines like that. And indeed, in the short time I’ve already been at TED, I see we’re even closer than I thought we were before.

So actually, now the temes are forcing our brains to become more like teme machines. Our children are growing up very quickly learning to read, learning to use machinery. We’re going to have all kinds of implants, drugs that force us to stay awake all the time. We’ll think we’re choosing these things, but the temes are making us do it. So we’re at this cusp now of having a third replicator on our planet. Now, what about what else is going on out there in the universe? Is there anyone else out there? People have been asking this question for a long time. (…)

In 1961, Frank Drake made his famous equation, but I think he concentrated on the wrong things. It’s been very productive, that equation. He wanted to estimate N, the number of communicative civilizations out there in our galaxy. And he included in there the rate of star formation, the rate of planets, but crucially, intelligence.

I think that’s the wrong way to think about it. Intelligence appears all over the place, in all kinds of guises. Human intelligence is only one kind of a thing. But what’s really important is the replicators you have and the levels of replicators, one feeding on the one before. So I would suggest that we don’t think intelligence, we think replicators.
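For reference, Drake’s equation simply multiplies the factors Blackmore lists: N = R* · fp · ne · fl · fi · fc · L. The values below are purely illustrative assumptions; every factor is highly uncertain:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* . fp . ne . fl . fi . fc . L"""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Every value below is an assumption for illustration only.
N = drake(R_star=1.0,   # new stars formed per year in the galaxy
          f_p=0.5,      # fraction of stars with planets
          n_e=2.0,      # habitable planets per such system
          f_l=0.5,      # fraction of those where life arises
          f_i=0.1,      # ...where intelligence develops
          f_c=0.1,      # ...where it becomes communicative
          L=10_000)     # years a civilization keeps transmitting
print(round(N))  # roughly 50 communicative civilizations
```

Blackmore’s point is that the `f_i` term, intelligence, is the wrong thing to single out; her proposal would replace it with the emergence of successive replicators.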

Think of the big brain. How many mothers do we have here? You know all about big brains. They are dangerous to give birth to, agonizing to give birth to. My cat gave birth to four kittens, purring all the time. Ah, mm — slightly different.

But not only is it painful, it kills lots of babies, it kills lots of mothers, and it’s very expensive to produce. The genes are forced into producing all this myelin, all the fat to myelinate the brain. Do you know, sitting here, your brain is using about 20 percent of your body’s energy output for two percent of your body weight. It’s a really expensive organ to run. Why? Because it’s producing the memes. (…)

Well, we did pull through, and we adapted. But now, we’re hitting, as I’ve just described, we’re hitting the third replicator point. And this is even more dangerous — well, it’s dangerous again. Why? Because the temes are selfish replicators and they don’t care about us, or our planet, or anything else. They’re just information — why would they? They are using us to suck up the planet’s resources to produce more computers, and more of all these amazing things we’re hearing about here at TED. Don’t think, “Oh, we created the Internet for our own benefit.” That’s how it seems to us. Think temes spreading because they must. We are the old machines.

Now, are we going to pull through? What’s going to happen? What does it mean to pull through? Well, there are kind of two ways of pulling through. One that is obviously happening all around us now, is that the temes turn us into teme machines, with these implants, with the drugs, with us merging with the technology. And why would they do that? Because we are self-replicating. We have babies. We make new ones, and so it’s convenient to piggyback on us, because we’re not yet at the stage on this planet where the other option is viable. (…) Where the teme machines themselves will replicate themselves. That way, it wouldn’t matter if the planet’s climate was utterly destabilized, and it was no longer possible for humans to live here. Because those teme machines, they wouldn’t need — they’re not squishy, wet, oxygen-breathing, warmth-requiring creatures. They could carry on without us.

So, those are the two possibilities. The second, I don’t think we’re that close. It’s coming, but we’re not there yet. The first, it’s coming too. But the damage that is already being done to the planet is showing us how dangerous the third point is, that third danger point, getting a third replicator. And will we get through this third danger point, like we got through the second and like we got through the first? Maybe we will, maybe we won’t. I have no idea.”

Susan Blackmore, PhD, an English freelance writer, lecturer, and broadcaster on psychology, Susan Blackmore on memes and “temes”, TED talk, Feb 2008 (transcript)

See also:

What Defines a Meme? James Gleick: Our world is a place where information can behave like human genes and ideas can replicate, mutate and evolve
☞ Adam McNamara, Can we measure memes?, Department of Psychology, University of Surrey, UK


Advice vs. experience: Genes predict learning style


"Researchers at Brown University have found that specific genetic variations can predict how persistently people will believe advice they are given, even when it is contradicted by experience.

The story they tell in a paper in the April 20 issue of the Journal of Neuroscience is one of the byplay between two brain regions that have different takes on how incoming information should influence thinking. The prefrontal cortex (PFC), the executive area of the brain, considers and stores incoming instructions such as the advice of other people (e.g., “Don’t sell those stocks.”) The striatum, buried deeper in the brain, is where people process experience to learn what to do (e.g., “Those stocks often go up after I sell them.”) (…)

People are guided more by advice at the start. Their genes determine how long it takes before they let the lessons of experience prevail.

"We are studying how maintaining instructions in the prefrontal cortex changes the way that the striatum works," said lead author Bradley Doll, a graduate student in Frank’s lab. “It biases what people learn about the contingencies they are actually experiencing.” (…)

A variation of the gene DARPP-32, which affects the response to dopamine in the striatum, allowed people to learn more quickly from experience when no advice was given, but also made them more readily impressionable to the bias of the PFC when instruction was given. Like a “yes man” who is flexible to a fault, the striatum would give more weight to experiences that reinforced the PFC’s belief, and less weight to experiences that contradicted it. Researchers call this confirmation bias, which is ubiquitous across many domains, such as astrology, politics, and even science. (…)

Ultimately the people with particular genetic variants were the ones who stuck with wrong advice the longest, and in a later test they were more likely to choose symbols that they were advised were correct over those that in reality had higher likelihood of being correct. Using a mathematical model, the researchers found that the extent of this confirmation bias on learning depended on their genes. (…)
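The paper’s actual mathematical model isn’t reproduced in this excerpt, but the mechanism it describes (a striatum that over-weights outcomes confirming the advised belief and under-weights contradicting ones) can be caricatured in a toy learning rule; every name and parameter below is hypothetical:

```python
import random

def learn(trials, p_win, advised_good, amplify, dampen, rate=0.1):
    """Estimate an option's payoff from experience, but weight each
    outcome by how well it fits the advised belief: confirming
    outcomes count 'amplify' times, contradicting ones 'dampen' times."""
    estimate = 0.5
    for _ in range(trials):
        outcome = 1.0 if random.random() < p_win else 0.0
        confirms = (outcome == 1.0) == advised_good
        weight = amplify if confirms else dampen
        estimate += rate * weight * (outcome - estimate)
    return estimate

random.seed(42)
# True payoff probability is 0.3, but the advice said 'this one is good'.
biased = learn(500, p_win=0.3, advised_good=True, amplify=2.0, dampen=0.5)
unbiased = learn(500, p_win=0.3, advised_good=True, amplify=1.0, dampen=1.0)
print(f"biased estimate: {biased:.2f}, unbiased: {unbiased:.2f}")
```

With biased weighting the estimate settles well above the true payoff rate, mirroring how participants with the relevant variants kept overvaluing the advised option despite contrary experience.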

Tradeoffs of adaptability

It may seem like having the genes for a strong-willed prefrontal cortex and an overly obsequious striatum could make people dangerously oblivious to reality, but Frank said there’s a good reason for brains to be hardwired to believe in advice: Advice is often right and convenient.

People inclined to follow instructions from others, albeit to varying degrees based on their genes, can make sensible decisions much more quickly than if they had to learn the right thing to do from experience. In some cases (e.g., “Danger: high voltage”) experience is a very dangerous way to learn. But in other cases (e.g. “The cable guy should be there at 1 p.m.” or “This slot machine pays off”), believing in advice for too long is just foolish.

"It’s funny because we are telling a story about how these genes lead to maladaptive performance, but that’s actually reflective of a system that evolved to be that way for an adaptive reason,” Frank said. “This phenomenon of confirmation bias might actually just be a byproduct of a system that tries to be more efficient with the learning process.”

Advice vs. experience: Genes predict learning style, Physorg, April 19, 2011.


Evolution of Language tested with genetic analysis

                                   Human Migration, National Geographic

Evolutionary Babel was in southern Africa

"Where did humanity utter its first words? A new linguistic analysis attempts to rewrite the story of Babel by borrowing from the methods of genetic analysis – and finds that modern language originated in sub-Saharan Africa and spread across the world with migrating human populations.

Quentin Atkinson of the University of Auckland in New Zealand designed a computer program to analyse the diversity of 504 languages. Specifically, the program focused on phonemes – the sounds that make up words, like “c”, “a”, and “tch” in the word “catch”.

Earlier research has shown that the more people speak a language, the higher its phonemic diversity. Large populations tend to draw on a more varied jumble of consonants, vowels and tones than smaller populations do.

Africa turned out to have the greatest phonemic diversity – it is the only place in the world where languages incorporate clicks of the tongue into their vocabularies, for instance – while South America and Oceania have the smallest. Remarkably, this echoes genetic analyses showing that African populations have higher genetic diversity than European, Asian and American populations.

This is generally attributed to the "serial founder" effect: it’s thought that humans first lived in a large and genetically diverse population in Africa, from which smaller groups broke off and migrated to what is now Europe. Because each break-off group carried only a subset of the genetic diversity of its parent group, this migration was, in effect, written in the migrants’ genes.
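The serial founder effect described above can be illustrated with a toy simulation (my own, not from the study): each break-off group carries a random subset of its parent population’s variants, so diversity declines step by step with distance from the origin. The numbers are invented.

```python
import random

random.seed(1)

# 1000 distinct variants (genetic markers, or by analogy, phonemes)
population = set(range(1000))
diversities = [len(population)]

for step in range(5):  # five successive founding events along a migration route
    # each break-off group takes a random half of its parent's variants
    population = set(random.sample(sorted(population), len(population) // 2))
    diversities.append(len(population))

print(diversities)  # strictly decreasing: [1000, 500, 250, 125, 62, 31]
```

Each founding event halves the variant count here, so the decline is mechanical; in real populations the subset carried off is not a fixed fraction, but the direction of the effect is the same.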

Dr. Mark Pagel sees language as central to human expansion across the globe.

“Language was our secret weapon, and as soon as we got language we became a really dangerous species,” he said.

— Nicholas Wade, Phonetic Clues Hint Language Is Africa-Born, NYT, Apr 14, 2011.

Mother language

Atkinson argues that the process was mirrored in languages: as smaller populations broke off and spread across the world, human language lost some of its phonemic diversity, and sounds that humans first spoke in the African Babel were left behind.

To test this, Atkinson compared the phoneme content of languages around the world and used this analysis to determine the most likely origin of all language. He found that sub-Saharan Africa was a far better fit for the origin of modern language than any other location. (…)

"It’s a compelling idea," says Sohini Ramachandran of Brown University in Providence, Rhode Island, who studies population genetics and human evolution. "Language is such an adaptive thing that it makes sense to have a single origin before the diaspora out of Africa. It’s also a nice confirmation of what we have seen in earlier genetic studies. The processes that shaped genetic variation of humans may also have shaped cultural traits.”

Ferris Jabr, Evolutionary Babel was in southern Africa, New Scientist, 14 April 2011. (Journal reference: Science, DOI: 10.1126/science.1199295)

             Out of Africa (Map source: The Mother of All Languages, Apr 15, 2011)

Language universality idea tested with biology method

(The study challenges the idea that the “language centres” of our brains are the sole driver of language)

A long-standing idea that human languages share universal features that are dictated by human brain structure has been cast into doubt.

A study reported in Nature has borrowed methods from evolutionary biology to trace the development of grammar in several language families.

The results suggest that features shared across language families evolved independently in each lineage.

The authors say cultural evolution, not the brain, drives language development.

At the heart of both studies is a method based on what are known as phylogenetic studies.

Lead author Michael Dunn, an evolutionary linguist at the Max Planck Institute for Psycholinguistics in the Netherlands, said the approach is akin to the study of pea plants by Gregor Mendel, which ultimately led to the idea of heritability of traits. (…)

“He inferred the existence of some kind of information transfer just from knowing family trees and observing variation, and that’s exactly the same thing we’re doing.”

Family trees

Modern phylogenetics studies look at variations in animals that are known to be related, and from those can work out when specific structures evolved.

For their studies, the team studied the characteristics of word order in four language families: Indo-European, Uto-Aztecan, Bantu and Austronesian.

They considered whether what we call prepositions occur before or after a noun (“in the boat” versus “the boat in”) and how the word order of subject and object works out in either case (“I put the dog in the boat” versus “I the dog put the boat in”).

The method starts by making use of well-established linguistic data on words and grammar within these language families, and building “family trees” of those languages.

"Once we have those trees we look at distribution of these different word order features over the descendant languages, and build evolutionary models for what’s most likely to produce the diversity that we observe in the world," Dr Dunn said.
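The phylogenetic idea Dunn describes can be sketched with a much simpler ancestral-state method than the Bayesian models the study actually used. Below is Fitch parsimony on an invented mini-family: given a known tree and the word-order trait of each living language, it infers the most parsimonious ancestral state. Tree, language names and trait values are all hypothetical.

```python
def fitch(tree, traits):
    """Return the set of most-parsimonious states at the root of `tree`.
    `tree` is a nested 2-tuple of leaf names; `traits` maps leaf -> state."""
    if isinstance(tree, str):          # leaf: its observed state
        return {traits[tree]}
    left, right = (fitch(child, traits) for child in tree)
    # Fitch rule: intersect the children's state sets if they overlap,
    # otherwise take the union (an implied change on one branch)
    return left & right or left | right

# invented mini-family: three prepositional languages, one postpositional
tree = (("lang_a", "lang_b"), ("lang_c", "lang_d"))
traits = {"lang_a": "prep", "lang_b": "prep",
          "lang_c": "prep", "lang_d": "postp"}
print(fitch(tree, traits))  # {'prep'}: the common ancestor was most likely prepositional
```

The study’s models additionally weigh how often each kind of change occurs along dated branches, which is what lets them ask whether different families follow the same transition rules.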

The models revealed that while different language structures in the family tree could be seen to evolve along the branches, just how and when they evolved depended on which branch they were on.

“We show that each of these language families evolves according to its own set of rules, not according to a universal set of rules,” Dr Dunn explained.

"That is inconsistent with the dominant ‘universality theories’ of grammar; it suggests rather that language is not a specialised module distinct from the rest of cognition, but part of broader human cognitive skills.”

The paper asserts instead that “cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states”.

However, co-author and evolutionary biologist Russell Gray of the University of Auckland stressed that the team was not pitting biology against culture in a mutually exclusive way.

“We’re not saying that biology is irrelevant - of course it’s not,” Professor Gray told BBC News.

"But the clumsy argument about an innate structure of the human mind imposing these kind of ‘universals’ that we’ve seen in cognitive science for such a long time just isn’t tenable."

Steven Pinker, a cognitive scientist at Harvard University, called the work “an important and welcome study”.

However, Professor Pinker told BBC News that the finer details of the method need bearing out in order to more fully support their hypothesis that cultural boundaries drive the development of language more than biological limitations do.

"The [authors] suggest that the human mind has a tendency to generalise orderings across phrases of different types, which would not occur if the mind generated every phrase type with a unique and isolated rule.

"The tendency may be partial, and it may be elaborated in different ways in differently language families, but it needs an explanation in terms of the working of the mind of language speakers."

— Jason Palmer, Science and technology reporter, Language universality idea tested with biology method, BBC News, 14 April 2011.

Evolution of Language Takes Unexpected Turn

"The findings “do not support simple ideas of the mind as a computer, with a language processor plugged in. They support much-more complex ideas of how language arises.” (…)

 One school of thought, pioneered by linguist Noam Chomsky, holds that language is a product of dedicated mechanisms in the human brain. These can be imagined as a series of switches, each corresponding to particular forms of grammar and syntax and structure.

Such a system would account for why, of the nearly infinite number of languages that are possible — imagine, for instance, a language in which verb conjugation changes randomly; it is possible — relatively few actually exist. Our brains have adapted to contain a limited, universal set of switches.

A limited set of linguistic universals is exactly what was described by the late, great comparative linguist Joseph Greenberg, who empirically tabulated features common to language. He made no claims as to neurological origin, but the essential claim overlapped with Chomsky’s: Language has universals.

If you speak a subject-verb-object language, one in which “I kick the ball,” then you likely use prepositions — “over the fence.” If you speak a subject-object-verb language, one in which “I the ball kicked,” then you almost certainly use postpositions — “the fence over.” And so on.

“What both these views predict is that languages should evolve according to the same set of rules,” said Dunn. “No matter what the language, no matter what the family, if there are two features of language that are somehow linked together structurally, they should be linked together the same way in all languages.”

That’s what Dunn, along with University of Auckland (New Zealand) computational linguist Russell Gray, set out to test.

Unlike earlier linguists, however, Dunn and Gray had access to powerful computational tools that, when set to work on sets of data, calculate the most likely relationships between the data. Such tools are well known in evolutionary biology, where they’re used to create trees of descent from genetic readings, but they can be applied to most anything that changes over time, including language.


In the new study, Dunn and Gray’s team created evolutionary trees for eight word-order features in humanity’s best-described language groups — Austronesian, Indo-European, Bantu and Uto-Aztecan. Together they contain more than one-third of humanity’s 7,000 languages, and span thousands of years. If there are universal trends, say Dunn and Gray, they should be visible, with each language family evolving along similar lines.

That’s not what they found.

“Each language family is evolving according to its own set of rules. Some were similar, but none were the same,” said Dunn. “There is much more diversity, in terms of evolutionary processes, than anybody ever expected.”

In one representative example of divergence (diagram above), both Austronesian and Indo-European languages that linked prepositions and object-verb structures (“over the fence, ball kicked”) tended to evolve preposition and verb-object structures (“over the fence, kicked ball”). That’s exactly what universalism would predict.

But when Austronesian and Indo-European languages both started from postposition, verb-object arrangements (“the fence over, kicked ball”), they ended up in different places. Austronesian tended towards preposition, verb-object (“over the fence, kicked ball”) but Indo-European tended towards postposition, object-verb (“the fence over, ball kicked.”)

Such differences might be eye-glazing to people unaccustomed to diagramming sentences, but the upshot is that the two language families took opposite trajectories. Many other comparisons followed suit. “The things specific to language families trumped any kind of universals we could look for,” said Dunn.

“We see that there isn’t any sort of rigid” progression of changes, said University of Reading (England) evolutionary linguist Mark Pagel, who wasn’t involved in the study. “There seems to be quite a lot of fluidity. That leads me to believe this isn’t something where you’re throwing a lot of parameter switches.”

Instead of a simple set of brain switches steering language evolution, cultural circumstance played a role. Changes were the product of chance, or perhaps fulfilled as-yet-unknown needs. For whatever reason, “the fence over, ball kicked” might have been especially useful to Indo-European speakers, but not Austronesians.

There is, however, still room for universals, said Pagel. After all, even if culture and circumstance shapes language evolution, it’s still working with a limited set of possibilities. Of the six possible combinations of subject, verb and object, for example, just two — “I kicked the ball” and “I the ball kicked” — are found in more than 90 percent of all languages, with Yoda-style “Kicked I the ball” exceedingly rare. People do seem to prefer some structures.

“What languages have in common is to be found at a much deeper level. They must emerge from more-general cognitive capacities,” said Dunn.

What those capacities may be is a new frontier for investigation. As for Dunn, his team next plans to conduct similar analyses on other features of language, searching for further evolutionary differences or those deeper levels of universality.”

“This can be applied to every level of language structure,” he said.

Brandon Keim, Evolution of Language Takes Unexpected Turn, April 14, 2011.

See also:

☞ Andis Kaulins, Principles of Historical Language Reconstruction, AABECIS, Feb 24, 2010.
Researchers Synthesize Evolution of Language
Evolution of Language Parallels Evolution of Species
Gut Bacteria, Language Analysis Solve Pacific Migration Mystery
Cultural Evolution Could Be Studied in Google Books Database
Human-Chimp Gene Comparison Hints at Roots of Language
Mark Changizi on how we read
Mark Changizi, The Topography Of Language, Science 2.0, Sep 17, 2009.
A brief history of writing
Evolved structure of language shows lineage-specific trends in word-order universals, Word-Order Research, Basic Vocabulary Database
The Tree of Life: Tangled Roots and Sexy Shoots. Tracing the genetic pathway from the first Eukaryotes to Homo sapiens.
The Genographic Project ☞ A Landmark Study of the Human Journey


The genes are so different, the scientists argue, that giant viruses represent a fourth domain of life

Charles Darwin pictured evolution as a grand tree, with the world’s living species as its twigs. Scientists identify 10,000 new species a year, but they’ve got a long, long way to go before finding all of Earth’s biodiversity. So far, they have identified 1.5 million species of animals, but there may be 7 million or more in total. Beyond the animal kingdom, our ignorance balloons. Scoop up some sea water or a cup of soil, and there will likely be thousands of new species of microbes lurking there. (…)

The tree is, in some ways, more like a web. Genes sometimes slip from one species to another, especially among microbes. (…)

Cell fusions and horizontal gene transfer are probably best portrayed by interconnected branches, rather than diverging ones. The base of the tree seems especially tangled, more like a mangrove than an oak. With all those caveats in mind, here’s a rough picture of the tree of life that Norman Pace of the University of Colorado offered in a scientific review he published in 2009. It shows life divided up into three domains: eukaryotes (that’s us), bacteria, and archaea.

There’s a lot of debate about whether eukaryotes actually split off from within the archaea, or just branched off from a common ancestor. Nevertheless, the two forms of life are quite distinct. For one thing, the common ancestor of living eukaryotes acquired oxygen-consuming bacteria that became a permanent part of their cells, called mitochondria. They’re keeping you alive right now.

A lot of scientists wonder how all the new species that scientists are discovering are going to change the shape of this tree. Will its three-part structure endure, with each part simply growing denser with new branches? Or have we been missing entire swaths of the tree of life? (…)

Giant viruses also explode a lot of conventional ideas of what viruses are supposed to be. Not only are giant viruses monstrously big, but they are overloaded with genes. A flu virus has just ten genes, for example, but a number of giant viruses have well over a thousand. Giant viruses even get infected by viruses of their own.

For years, researchers have been finding that the diversity of genes in viruses is tremendous. It turns out that giant viruses are particularly bizarre, genetically speaking. Didier Raoult and his colleagues compared one set of genes in giant viruses to their counterparts in other lineages. Here’s the evolutionary tree they came up with. (The giant virus genes are shown in red.)

The genes are so different, the scientists argue, that giant viruses represent a fourth domain of life. Here’s an impressionistic figure they created to show how the four domains emerged from the web of gene-trading early on in the history of life (from left to right, archaea, bacteria, eukaryotes, and giant viruses).

Jonathan Eisen of UC Davis and his colleagues have published still more evidence for a possible fourth domain. (Some of the evidence can be found in a paper in PLOS One; the rest is in a shorter note at PLOS Currents.) Their evidence comes from a voyage Craig Venter and his colleagues took in his yacht, scooping up sea water along the way. They ripped open the microbes in the water and pulled out all their genes. The advantage of this approach is that it allowed the scientists to amass a database of literally tens of millions of new genes. The downside was that they could only look at the isolated genes, rather than the living microbes from which they came. (…)

That discovery might show how this possible fourth domain got its start. Did it start out as ordinary cellular life, and then some of its genes ended up in viruses? Or is the fourth domain another sign that life as we know it actually originated as viruses?

Carl Zimmer, Glimpses of the Fourth Domain?, Discover Magazine, March 18th, 2011.

See also:
A Tree of Eukaryotes (infographic)


Sam Harris on the ‘selfish gene’ and moral behavior

“Many people imagine that the theory of evolution entails selfishness as a biological imperative. This popular misconception has been very harmful to the reputation of science. In truth, human cooperation and its attendant moral emotions are fully compatible with biological evolution. Selection pressure at the level of ‘selfish’ genes would surely incline creatures like ourselves to make sacrifices for our relatives, for the simple reason that one’s relatives can be counted on to share one’s genes: while this truth might not be obvious through introspection, your brother’s or sister’s reproductive success is, in part, your own. This phenomenon, known as kin selection, was not given a formal analysis until the 1960s in the work of William Hamilton, but it was at least implicit in the understanding of earlier biologists. Legend has it that J.B.S. Haldane was once asked if he would risk his life to save a drowning brother, to which he quipped, ‘No, but I would save two brothers or eight cousins.’

The work of evolutionary biologist Robert Trivers on reciprocal altruism has gone a long way toward explaining cooperation among unrelated friends and strangers. Trivers’s model incorporates many of the psychological and social factors related to altruism and reciprocity, including friendship, moralistic aggression (i.e., the punishment of cheaters), guilt, sympathy, and gratitude, along with a tendency to deceive others by mimicking these states. As first suggested by Darwin, and recently elaborated by the psychologist Geoffrey Miller, sexual selection may have further encouraged the development of moral behavior. Because moral virtue is attractive to both sexes, it might function as a kind of peacock’s tail: costly to produce and maintain, but beneficial to one’s genes in the end.

Clearly, our selfish and selfless interests do not always conflict. In fact, the well-being of others, especially those closest to us, is one of our primary (and, indeed, most selfish) interests. While much remains to be understood about the biology of our moral impulses, kin selection, reciprocal altruism, and sexual selection explain how we have evolved to be, not merely atomized selves in thrall to our self-interest, but social selves disposed to serve a common interest with others.”
Sam Harris, American author and CEO of Project Reason. He received a Ph.D. in neuroscience from UCLA and is a graduate in philosophy from Stanford University, The Moral Landscape, Free Press, 2010.

Willingness to Listen to Music Is Biological, Study of Gene Variants Suggests


Our willingness to listen to music is a biological trait related to the neurobiological pathways affecting social affiliation and communication, suggests a recent Finnish study.

Music is listened to in all known cultures. Similarities between human and animal song have been detected: both contain a message, an intention that reflects an innate emotional state, and that is interpreted correctly even among different species. In fact, several behavioral features of listening to music are closely related to attachment: lullabies are sung to infants to increase their attachment to a parent, and singing or playing music together is based on teamwork and may add to group cohesion. (…)

Recent genetic studies have shown familial aggregation of tone deafness, absolute pitch, musical aptitude and creative functions in music. In this study, willingness to listen to music and the level of music education varied in pedigrees.

This is one of the first studies in which listening to music has been explored at the molecular level, and the first to show an association between arginine vasopressin receptor 1A (AVPR1A) gene variants and listening to music. Previously, an association between AVPR1A and musical aptitude had been reported. AVPR1A is a gene that has been associated with social communication and attachment behavior in humans and other species. The vasopressin homolog increases vocalization in birds and influences breeding in lizards and fishes. The results suggest a biological contribution to sound perception (here, listening to music), provide molecular evidence of sound or music’s role in social communication, and offer tools for further studies on gene-culture evolution in music.”

Willingness to Listen to Music Is Biological, Study of Gene Variants Suggests, ScienceDaily, Feb. 28, 2011. (Picture source)


Genes and social networks: new research links genes to friendship networks

James Fowler, a professor at UC-San Diego, is engaged in highly innovative and important research at the crossroads of political science and biology. His recent paper in the Proceedings of the National Academy of Sciences, “Correlated Genotypes in Friendship Networks”, represents an important new study in an emerging research field that is exploring the genetic and biological foundations for our political and social behavior. (…)

In the paper, Fowler and his colleagues demonstrate “genotypic clustering in social networks” by statistically examining the association between markers for six different genes and the reported friendship networks of respondents in data from the National Longitudinal Study of Adolescent Health and the Framingham Heart Study Social Network. They show that one of these genes (DRD2) is positively associated with clustering in friendship networks, meaning that those who have this gene are more likely to be friends with others who have it, controlling for demographic similarities and population stratification; another gene, CYP2A6, has a negative association in friendship networks. (…)
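The basic test behind such a result can be sketched on invented data: compare how often friends match on a binary marker against the matching rate expected by chance. This toy version ignores the study’s controls for demographics and population stratification; all numbers are made up.

```python
import itertools
import random

random.seed(2)

n = 200
# invented binary genotype: each person carries the marker with probability 0.3
genotype = {i: random.random() < 0.3 for i in range(n)}

# build a homophilous friendship network: matching pairs befriend more often
edges = []
for i, j in itertools.combinations(range(n), 2):
    p = 0.05 if genotype[i] == genotype[j] else 0.01
    if random.random() < p:
        edges.append((i, j))

# observed: fraction of friendship ties whose endpoints match on the marker
observed = sum(genotype[i] == genotype[j] for i, j in edges) / len(edges)

# chance expectation: probability two randomly paired people match
p_carrier = sum(genotype.values()) / n
expected = p_carrier ** 2 + (1 - p_carrier) ** 2

print(round(observed, 2), round(expected, 2))  # observed exceeds chance
```

A positive gap between observed and expected matching is the toy analogue of the DRD2 result; a negative gap would correspond to the CYP2A6-style “opposites attract” pattern.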

What is the most important implication of demonstrating that specific genes are associated with who we affiliate with in our friendship networks?

James Fowler: What happens to us may depend not only on our own genes but also on the genes of our friends. This has been shown already in hens, whose feathers change depending on the genetic constitution of the hens that are caged near them. But something similar may happen in humans. We each live in a sea of the genes of others. In fact, we are metagenomic. (…)

An important caveat is that there may be processes besides friendship choice that create correlated genotypes. Our genes may cause us to be drawn to certain environments where we are more likely to meet similar people. For example, people with the same DRD2 genotype might both find themselves in a bar where they then become friends. But this cannot explain *negative* correlation. The “opposites attract” result with CYP2A6 is more likely to be due to friendship choice. (…)

But it is true there can be feedback effects — our genes not only influence us, but they may influence the genes of our friends, which in turn has an additional effect on us. For example, the DRD2 gene variant we study has been associated with alcoholism, and if you have this gene variant, your friends are likely to have it, too. So you are not only more susceptible to alcoholism yourself, but you are likely to be surrounded by friends who are susceptible, too. Thus, ignoring genes means we might miss important heterogeneity in the network that would obscure strong social effects. (…)

We have discovered some regularities in our studies of human social networks that suggest their structure may be universal, such as the tendency for many of our friends also to be friends with one another, and the tendency for influence to spread to about three degrees of separation. We conjecture that we have coevolved with these networks as our brains have gotten bigger, and genetic variation might give us a clue about which systems have undergone the most recent evolutionary changes.”

A conversation with James Fowler by R. Michael Alvarez, Genes and social networks: new research links genes to friendship networks, Psychology Today, February 14, 2011. See also: Daniel MacArthur, On sharing genes with friends, Wired, Jan 19, 2011