‘Elegance,’ ‘Symmetry,’ and ‘Unity’: Is Scientific Truth Always Beautiful?
“Today the grandest quest of physics is to render compatible the laws of quantum physics—how particles in the subatomic world behave—with the rules that govern stars and planets. That’s because, at present, the formulas that work on one level implode into meaninglessness at the other level. This is deeply ungainly, and it becomes significant when the two worlds collide, as they do in black holes. The quest to unify quantum physics (micro) and general relativity (macro) has spawned heroic efforts, the best-known candidate for a grand unifying concept presently being string theory. String theory proposes that subatomic particles are not particles at all but closed or open vibrating strings, so tiny, a hundred billion billion times smaller than an atomic nucleus’s diameter, that no human instrument can detect them. It’s the “music of the spheres”—think vibrating harp strings—made literal.
A concept related to string theory is “supersymmetry.” Physicists have shown that at extremely high energy levels, similar to those that existed a micro-blink after the big bang, the strengths of the electromagnetic force and the strong and weak nuclear forces (which work only on subatomic scales) come tantalizingly close to converging. Physicists have conceived of scenarios in which the three come together precisely, an accomplishment that would be immensely pleasing, both intellectually and aesthetically. But those scenarios imply the existence of as-yet-undiscovered “partners” for existing particles: The electron would be joined by a “selectron,” quarks by “squarks,” and so on. There was great hope that the $8-billion Large Hadron Collider would provide indirect evidence for these theories, but so far it hasn’t. (…)
[Marcelo Gleiser]: “We look out in the world and we see a very complicated pattern of stuff, and the notion of symmetry is an important way to make sense of the mess. The sun and moon are not perfect spheres, but that kind of approximation works incredibly well to simulate the behavior of these bodies.”
But the idea that what’s beautiful is true, and that “symmetry rules,” Gleiser says, “has been catapulted to an almost religious notion in the sciences.” In his own book A Tear at the Edge of Creation (Free Press), Gleiser made a case for the beauty inherent in asymmetry—in the fact that neutrinos, the most common particles in the universe, spin only in one direction, for example, or that amino acids can be produced in laboratories in “left-handed” or “right-handed” forms, but only the “left-handed” form appears in nature. These are nature’s equivalent of Marilyn Monroe’s mole, attractive because of their lopsidedness, and Orrell also makes use of those examples.
But Weinberg, the Nobel-winning physicist at the University of Texas at Austin, counters: “Betting on beauty works remarkably well.” The Large Hadron Collider’s failure to produce evidence of supersymmetry is “disappointing,” he concedes, but he notes that plenty of elegant theories have waited years, even decades, for confirmation. Copernicus’s theory of a Sun-centered universe was developed entirely without experiment—he relied on Ptolemy’s data—and it was eventually embraced precisely because his description of planetary motion was simply more economical and elegant than those of his predecessors; it turned out to be true.
Closer to home, Weinberg says his own work on the weak nuclear force and electromagnetism had its roots in remarkably elegant, purely abstract theories of researchers who came before him, theories that, at first, seemed to be disproved by evidence but were too elegant to stop thinking about. (…)
To Orrell, it’s not just that many scientists are too enamored of beauty; it’s that their notion of beauty is ossified. It is “kind of clichéd,” Orrell says. “I find things like perfect symmetry uninspiring.” (In fairness, the Harvard theoretical physicist Lisa Randall has used the early unbalanced sculptures of Richard Serra as an example of how the asymmetrical can be as fascinating as the symmetrical, in art as in physics. She finds this yin-yang tension perfectly compatible with modern theorizing.)
Orrell also thinks it is more useful to study the behavior of complex systems than that of their constituent elements. (…)
Outside of physics, Orrell reframes complaints about “perfect-model syndrome” in aesthetic terms. Classical economists, for instance, treat humans as symmetrical in terms of what motivates decision-making. In contrast, behavioral economists are introducing asymmetry into that field by replacing Homo economicus with a quirkier, more idiosyncratic and human figure—an aesthetic revision, if you like. (…)
The broader issue, though, is whether science’s search for beautiful, enlightening patterns has reached a point of diminishing returns. If science hasn’t yet hit that point, might it be approaching it? The search for symmetry in nature has had so many successes, observes Stephon Alexander, a Dartmouth physicist, that “there is a danger of forgetting that nature is the one that decides where that game ends.”
“Look into a mirror and you’ll simultaneously see the familiar and the alien: an image of you, but with left and right reversed.
Left-right inequality has significance far beyond that of mirror images, touching on the heart of existence itself. From subatomic physics to life, nature prefers asymmetry to symmetry. There are no equal liberties where neutrinos and proteins are concerned. In the case of neutrinos, particles that spill out of the sun’s nuclear furnace and pass through you by the trillions every second, only leftward-spinning ones exist. Why? No one really knows.
Proteins are long chains of amino acids that can be either left- or right-handed. Here, handedness has to do with how these molecules interact with polarized light, rotating it either to the left or to the right. When synthesized in the lab, amino acids come out fifty-fifty. In living beings, however, all proteins are made of left-handed amino acids. And all sugars in RNA and DNA are right-handed. Life is fundamentally asymmetric.
Is the handedness of life, its chirality (think chiromancer, which means “palm reader”), linked to its origins some 3.5 billion years ago, or did it develop after life was well on its way? If one traces life’s origins from its earliest stages, it’s hard to see how life began without molecular building blocks that were “chirally pure,” consisting solely of left- or right-handed molecules. Indeed, many models show how chirally pure amino acids may link to form precursors of the first protein-like chains. But what could have selected left-handed over right-handed amino acids?
My group’s research suggests that early Earth’s violent environmental upheavals caused many episodes of chiral flip-flopping. The observed left-handedness of terrestrial amino acids is probably a local fluke. Elsewhere in the universe, perhaps even on other planets and moons of our solar system, amino acids may be right-handed. But only sampling such material from many different planetary platforms will determine whether, on balance, biology is left-handed, right-handed, or ambidextrous.”
“One of the deepest consequences of symmetries of any kind is their relationship with conservation laws. Every symmetry in a physical system, be it balls rolling down planes, cars moving on roads, planets orbiting the Sun, a photon hitting an electron, or the expanding Universe, is related to a conserved quantity, a quantity that remains unchanged in the course of time. In particular, external (spatial and temporal) symmetries are related to the conservation of momentum and energy, respectively: the total energy and momentum of a system that is temporally and spatially symmetric remains unchanged.
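The symmetry–conservation correspondence described above is Noether's theorem. A minimal Lagrangian-mechanics sketch of the two cases the passage mentions (spatial and temporal symmetry); the notation is mine, not the author's:

```latex
% Spatial symmetry: if L(q, \dot q) is unchanged by the translation
% q \to q + \epsilon, the Euler--Lagrange equation makes the
% conjugate momentum a constant of the motion:
\frac{\partial L}{\partial q} = 0
\;\Longrightarrow\;
\frac{d}{dt}\,p \;=\; \frac{d}{dt}\frac{\partial L}{\partial \dot q} \;=\; 0 .

% Temporal symmetry: if L carries no explicit time dependence,
% the energy function is conserved:
\frac{\partial L}{\partial t} = 0
\;\Longrightarrow\;
\frac{d}{dt}\!\left(\dot q\,\frac{\partial L}{\partial \dot q} - L\right) = 0 .
```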
The elementary particles of matter live in a reality very different from ours. The signature property of their world is change: particles can morph into one another, changing their identities. […] One of the greatest triumphs of twentieth-century particle physics was the discovery of the rules dictating the many metamorphoses of matter particles and the symmetry principles behind them. One of its greatest surprises was the realization that some of the symmetries are violated and that these violations have very deep consequences. (…) p.27
Even though matter and antimatter appear on an equal footing in the equations describing relativistic particles, antimatter occurs only rarely. […] Somehow, during its infancy, the cosmos selected matter over antimatter. This imperfection is the single most important factor dictating our existence. (…)
Back to the early cosmos: had there been an equal quantity of antimatter particles around, they would have annihilated the corresponding particles of matter and all that would be left would be lots of gamma-ray radiation and some leftover protons and antiprotons in equal amounts. Definitely not our Universe. The tiny initial excess of matter particles is enough to explain the overwhelming excess of matter over antimatter in today’s Universe. The existence of matter, the stuff we and everything else are made of, depends on a primordial imperfection, the matter-antimatter asymmetry. (…) p.29.
We have seen how the weak interactions violate a series of internal symmetries: charge conjugation, parity, and even the combination of the two. The consequences of these violations are deeply related to our existence: they set the arrow of time at the microscopic level, providing a viable mechanism to generate the excess of matter over antimatter. […] The message from modern particle physics and cosmology is clear: we are the products of imperfections in Nature. (…)
It is not symmetry and perfection that should be our guiding principle, as it has been for millennia. We don’t have to look for the mind of God in Nature and try to express it through our equations. The science we create is just that, our creation. Wonderful as it is, it is always limited, it is always constrained by what we know of the world. […] The notion that there is a well-defined hypermathematical structure that determines all there is in the cosmos is a Platonic delusion with no relationship to physical reality. (…) p. 35.
The critics of this idea miss the fact that a meaningless cosmos that produced humans (and possibly other intelligences) will never be meaningless to them (or to the other intelligences). To exist in a purposeless Universe is even more meaningful than to exist as the result of some kind of mysterious cosmic plan. Why? Because it elevates the emergence of life and mind to a rare event, as opposed to a ubiquitous and premeditated one. For millennia, we believed that God (or gods) protected us from extinction, that we were chosen to be here and thus safe from ultimate destruction. […]
When science proposes that the cosmos has a sense of purpose wherein life is a premeditated outcome of natural events, a similar safety blanket mechanism is at play: if life fails here, it will succeed elsewhere. We don’t really need to preserve it. To the contrary, I will argue that unless we accept our fragility and cosmic loneliness, we will never act to protect what we have. (…)
The laws of physics and the laws of chemistry as presently understood have nothing to say about the emergence of life. As Paul Davies remarked in Cosmic Jackpot, notions of a life principle suffer from being teleologic, explaining life as the end goal, a purposeful cosmic strategy. The human mind, of course, would be the crown jewel of such creative drive. Once again we are “chosen” ones, a dangerous proposal. […] Arguments shifting the “mind of God” to the “mind of the cosmos” perpetuate our obsession with the notion of Oneness. Our existence need not be planned to be meaningful.” (…) p.49.
Unified theories, life principles, and self-aware universes are all expressions of our need to find a connection between who we are and the world we live in. I do not question the extreme importance of understanding the connection between man and the cosmos. But I do question that it has to derive from unifying principles. (…) p.50.
My point is that there is no Final Truth to be discovered, no grand plan behind creation. Science advances as new theories engulf or displace old ones. The growth is largely incremental, punctuated by unexpected, worldview-shattering discoveries about the workings of Nature. […]
Once we understand that science is the creation of human minds and not the pursuit of some divine plan (even if metaphorically) we shift the focus of our search for knowledge from the metaphysical to the concrete. (…) p.51.
For a clever fish, water is “just right” for it to swim in. Had it been too cold, it would freeze; too hot, it would boil. Surely the water temperature had to be just right for the fish to exist. “I’m very important. My existence cannot be an accident,” the proud fish would conclude. Well, he is not very important. He is just a clever fish. The ocean temperature is not being controlled with the purpose of making it possible for the fish to exist. Quite the opposite: the fish is fragile. A sudden or gradual temperature swing would kill it, as any trout fisherman knows. We so crave meaningful connections that we see them even when they are not there.
We are soulful creatures in a harsh cosmos. This, to me, is the essence of the human predicament. The gravest mistake we can make is to think that the cosmos has plans for us, that we are somehow special from a cosmic perspective. (…) p.52
We are witnessing the greatest mass extinction since the demise of the dinosaurs 65 million years ago. The difference is that for the first time in history, humans, and not physical causes, are the perpetrators. […] Life recovered from the previous five mass extinctions because the physical causes eventually ceased to act. Unless we understand what is happening and start acting together as a species we may end up carving the path toward our own destruction. (…)” p.56
S. Hawking, L. Mlodinow on why there is something rather than nothing, and why the fundamental laws are as we have described them
“According to the idea of model-dependent realism, our brains interpret the input from our sensory organs by making a model of the outside world. We form mental concepts of our home, trees, other people, the electricity that flows from wall sockets, atoms, molecules, and other universes. These mental concepts are the only reality we can know. There is no model-independent test of reality. It follows that a well-constructed model creates a reality of its own. An example that can help us think about issues of reality and creation is the Game of Life, invented in 1970 by a young mathematician at Cambridge named John Conway.
The word “game” in the Game of Life is a misleading term. There are no winners and losers; in fact, there are no players. The Game of Life is not really a game but a set of laws that govern a two-dimensional universe. It is a deterministic universe: Once you set up a starting configuration, or initial condition, the laws determine what happens in the future.
The world Conway envisioned is a square array, like a chessboard, but extending infinitely in all directions. Each square can be in one of two states: alive (shown in green) or dead (shown in black). Each square has eight neighbors: the up, down, left, and right neighbors and four diagonal neighbors. Time in this world is not continuous but moves forward in discrete steps. Given any arrangement of dead and live squares, the number of live neighbors determines what happens next according to the following laws:
1. A live square with two or three live neighbors survives (survival).
2. A dead square with exactly three live neighbors becomes a live cell (birth).
3. In all other cases a cell dies or remains dead. In the case that a live square has zero or one neighbor, it is said to die of loneliness; if it has more than three neighbors, it is said to die of overcrowding.
That’s all there is to it: Given any initial condition, these laws generate generation after generation. An isolated living square or two adjacent live squares die in the next generation because they don’t have enough neighbors. Three live squares along a diagonal live a bit longer. After the first time step the end squares die, leaving just the middle square, which dies in the following generation. Any diagonal line of squares “evaporates” in just this manner. But if three live squares are placed horizontally in a row, again the center has two neighbors and survives while the two end squares die, but in this case the cells just above and below the center cell experience a birth. The row therefore turns into a column. Similarly, in the next generation the column turns back into a row, and so forth. Such oscillating configurations are called blinkers.
If three live squares are placed in the shape of an L, a new behavior occurs. In the next generation the square cradled by the L will give birth, leading to a 2 × 2 block. The block belongs to a pattern type called the still life because it will pass from generation to generation unaltered. Many types of patterns exist that morph in the early generations but soon turn into a still life, or die, or return to their original form and then repeat the process. There are also patterns called gliders, which morph into other shapes and, after a few generations, return to their original form, but in a position one square down along the diagonal. If you watch these develop over time, they appear to crawl along the array. When these gliders collide, curious behaviors can occur, depending on each glider’s shape at the moment of collision.
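The rules, the blinker, and the glider described above are easy to check directly. A minimal sketch (representing live cells as a set of coordinates is my implementation choice, not something specified in the text):

```python
from itertools import product

def step(live):
    """Advance Conway's Game of Life one generation.
    `live` is a set of (x, y) coordinates of live squares."""
    counts = {}
    for x, y in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Survival: a live cell with 2 or 3 live neighbors; birth: a dead
    # cell with exactly 3; everything else dies or stays dead.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A row of three (a "blinker") oscillates between a row and a column.
blinker = {(-1, 0), (0, 0), (1, 0)}
assert step(blinker) == {(0, -1), (0, 0), (0, 1)}
assert step(step(blinker)) == blinker

# A glider returns to its original shape after 4 generations,
# displaced one square along the diagonal.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

Running the loop a few more times shows the glider crawling indefinitely down the diagonal, exactly the behavior the passage describes.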
What makes this universe interesting is that although the fundamental “physics” of this universe is simple, the “chemistry” can be complicated. That is, composite objects exist on different scales. At the smallest scale, the fundamental physics tells us that there are just live and dead squares. On a larger scale, there are gliders, blinkers, and still-life blocks. At a still larger scale there are even more complex objects, such as glider guns: stationary patterns that periodically give birth to new gliders that leave the nest and stream down the diagonal. (…)
If you observed the Game of Life universe for a while on any particular scale, you could deduce laws governing the objects on that scale. For example, on the scale of objects just a few squares across you might have laws such as “Blocks never move,” “Gliders move diagonally,” and various laws for what happens when objects collide. You could create an entire physics on any level of composite objects. The laws would entail entities and concepts that have no place among the original laws. For example, there are no concepts such as “collide” or “move” in the original laws. Those describe merely the life and death of individual stationary squares. As in our universe, in the Game of Life your reality depends on the model you employ.
Conway and his students created this world because they wanted to know if a universe with fundamental rules as simple as the ones they defined could contain objects complex enough to replicate. In the Game of Life world, do composite objects exist that, after merely following the laws of that world for some generations, will spawn others of their kind? Not only were Conway and his students able to demonstrate that this is possible, but they even showed that such an object would be, in a sense, intelligent! What do we mean by that? To be precise, they showed that the huge conglomerations of squares that self-replicate are “universal Turing machines.” For our purposes that means that for any calculation a computer in our physical world can in principle carry out, if the machine were fed the appropriate input—that is, supplied the appropriate Game of Life world environment—then some generations later the machine would be in a state from which an output could be read that would correspond to the result of that computer calculation. (…)
In the Game of Life, as in our world, self-reproducing patterns are complex objects. One estimate, based on the earlier work of mathematician John von Neumann, places the minimum size of a self-replicating pattern in the Game of Life at ten trillion squares—roughly the number of molecules in a single human cell. One can define living beings as complex systems of limited size that are stable and that reproduce themselves.
The objects described above satisfy the reproduction condition but are probably not stable: A small disturbance from outside would probably wreck the delicate mechanism. However, it is easy to imagine that slightly more complicated laws would allow complex systems with all the attributes of life. Imagine an entity of that type, an object in a Conway-type world. Such an object would respond to environmental stimuli, and hence appear to make decisions. Would such life be aware of itself? Would it be self-conscious? This is a question on which opinion is sharply divided. Some people claim that self-awareness is something unique to humans. It gives them free will, the ability to choose between different courses of action.
How can one tell if a being has free will?
If one encounters an alien, how can one tell if it is just a robot or has a mind of its own? The behavior of a robot would be completely determined, unlike that of a being with free will. Thus one could in principle detect a robot as a being whose actions can be predicted. (…) This may be impossibly difficult if the being is large and complex. We cannot even solve exactly the equations for three or more particles interacting with each other. Since an alien the size of a human would contain about a thousand trillion trillion particles, even if the alien were a robot it would be impossible to solve the equations and predict what it would do. We would therefore have to say that any complex being has free will—not as a fundamental feature, but as an effective theory, an admission of our inability to do the calculations that would enable us to predict its actions.
The example of Conway’s Game of Life shows that even a very simple set of laws can produce complex features similar to those of intelligent life. There must be many sets of laws with this property. What picks out the fundamental laws (as opposed to the apparent laws) that govern our universe? As in Conway’s universe, the laws of our universe determine the evolution of the system, given the state at any one time. In Conway’s world we are the creators—we choose the initial state of the universe by specifying objects and their positions at the start of the game. (…)
If the total energy of the universe must always remain zero, and it costs energy to create a body, how can a whole universe be created from nothing? That is why there must be a law like gravity. Because gravity is attractive, gravitational energy is negative: One has to do work to separate a gravitationally bound system, such as the earth and moon. This negative energy can balance the positive energy needed to create matter, but it’s not quite that simple. The negative gravitational energy of the earth, for example, is less than a billionth of the positive energy of the matter particles the earth is made of. A body such as a star will have more negative gravitational energy, and the smaller it is (the closer the different parts of it are to each other), the greater this negative gravitational energy will be. But before it can become greater than the positive energy of the matter, the star will collapse to a black hole, and black holes have positive energy. That’s why empty space is stable. Bodies such as stars or black holes cannot just appear out of nothing. But a whole universe can.
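The “less than a billionth” claim for the Earth can be checked with a back-of-envelope estimate: treat the Earth as a uniform-density sphere, whose gravitational binding energy has magnitude 3GM²/5R, and compare it with the rest energy Mc² of its matter. The uniform-density assumption is mine, not the authors’:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # mass of Earth, kg
R = 6.371e6     # radius of Earth, m

# Magnitude of the gravitational binding energy of a uniform sphere.
E_grav = 3 * G * M**2 / (5 * R)
# Rest energy of the Earth's matter.
E_rest = M * c**2

ratio = E_grav / E_rest
print(f"|E_grav| / E_rest = {ratio:.1e}")  # about 4e-10, under a billionth
assert ratio < 1e-9
```

The ratio simplifies to 3GM/(5Rc²), so it grows as a body of fixed mass shrinks, which is the point the passage goes on to make about collapsing stars.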
Because gravity shapes space and time, it allows space-time to be locally stable but globally unstable. On the scale of the entire universe, the positive energy of the matter can be balanced by the negative gravitational energy, and so there is no restriction on the creation of whole universes. Because there is a law like gravity, the universe can and will create itself from nothing. (…) Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.
Why are the fundamental laws as we have described them?
The ultimate theory must be consistent and must predict finite results for quantities that we can measure. We’ve seen that there must be a law like gravity, and we saw in Chapter 5 that for a theory of gravity to predict finite quantities, the theory must have what is called supersymmetry between the forces of nature and the matter on which they act. M-theory is the most general supersymmetric theory of gravity. For these reasons M-theory is the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself. We must be part of this universe, because there is no other consistent model.
M-theory is the unified theory Einstein was hoping to find. The fact that we human beings—who are ourselves mere collections of fundamental particles of nature—have been able to come this close to an understanding of the laws governing us and our universe is a great triumph. But perhaps the true miracle is that abstract considerations of logic lead to a unique theory that predicts and describes a vast universe full of the amazing variety that we see. If the theory is confirmed by observation, it will be the successful conclusion of a search going back more than 3,000 years. We will have found the grand design.”
Science historian George Dyson: Unravelling the digital code
“It was not made for those who sell oil or sardines.”
— G. W. Leibniz, ca. 1674, on his calculating machine
A universe of self-replicating code
“Digital organisms, while not necessarily any more alive than a phone book, are strings of code that replicate and evolve over time. Digital codes are strings of binary digits — bits. Google is a fantastically large number, so large it is almost beyond comprehension, distributed and replicated across all kinds of hosts. When you click on a link, you are replicating the string of code that it links to. Replication of code sequences isn’t life, any more than replication of nucleotide sequences is, but we know that it sometimes leads to life.
Q [Kevin Kelly]: Are we in that digital universe right now, as we talk on the phone?
George Dyson: Sure. You’re recording this conversation using a digital recorder — into an empty matrix of addresses on a microchip that is being filled up at 44 kilobytes per second. That address space full of numbers is the digital universe.
Q: How fast is this universe expanding?
G.D.: Like our own universe at the beginning, it’s more exploding than expanding. We’re all so immersed in it that it’s hard to perceive. Last time I checked, the digital universe was expanding at the rate of five trillion bits per second in storage and two trillion transistors per second on the processing side. (…)
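For scale, the storage figure Dyson cites converts (my arithmetic, not his) to roughly:

```python
bits_per_second = 5e12                 # Dyson's storage-growth figure
bytes_per_second = bits_per_second / 8
per_day = bytes_per_second * 86_400    # seconds in a day

print(f"{bytes_per_second / 1e9:.0f} GB/s, "
      f"about {per_day / 1e15:.0f} PB/day")
```

That is, the digital universe he describes was growing by hundreds of gigabytes every second, tens of petabytes every day.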
Q: Where is this digital universe heading?
G.D.: This universe is open to the evolution of all kinds of things. It’s cycling faster and faster. Even with Google and YouTube and Facebook, we can’t consume it all. And we aren’t aware what this space is filling up with. From a human perspective, computers are idle 99 per cent of the time. While they’re waiting for us to come up with instructions, computation is happening without us, as computers write instructions for each other. As Turing showed, this space can’t be supervised. As the digital universe expands, so does this wild, undomesticated side.”
“Just as we later worried about recombinant DNA, what if these things escaped? What would they do to the world? Could this be the end of the world as we know it if these self-replicating numerical creatures got loose?
But, we now live in a world where they did get loose—a world increasingly run by self-replicating strings of code. Everything we love and use today is, in a lot of ways, self-reproducing exactly as Turing, von Neumann, and Barricelli prescribed. It’s a very symbiotic relationship: the same way life found a way to use the self-replicating qualities of these polynucleotide molecules to the great benefit of life as a whole, there’s no reason life won’t use the self-replicating abilities of digital code, and that’s what’s happening. If you look at what people like Craig Venter and the thousand less-known companies are doing, we’re doing exactly that, from the bottom up. (…)
What’s, in a way, missing in today’s world is more biology of the Internet. More people like Nils Barricelli to go out and look at what’s going on, not from a business or what’s legal point of view, but just to observe what’s going on.
Many of these things we read about in the front page of the newspaper every day, about what’s proper or improper, or ethical or unethical, really concern this issue of autonomous self-replicating codes. What happens if you subscribe to a service and then as part of that service, unbeknownst to you, a piece of self-replicating code inhabits your machine, and it goes out and does something else? Who is responsible for that? And we’re in an increasingly gray zone as to where that’s going. (…)
Why is Apple one of the world’s most valuable companies? It’s not only because their machines are so beautifully designed, which is great and wonderful, but because those machines represent a closed numerical system. And they’re making great strides in expanding that system. It’s no longer at all odd to have a Mac laptop. It’s almost the normal thing.
But I’d like to take this to a different level, if I can change the subject… Ten or 20 years ago I was preaching that we should look at digital code as biologists: the Darwin Among the Machines stuff. People thought that was crazy, and now it’s firmly the accepted metaphor for what’s going on. And Kevin Kelly quoted me in Wired, he asked me for my last word on what companies should do about this. And I said, “Well, they should hire more biologists.”
But what we’re missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products. The universe of Apple, the universe of Google, the universe of Facebook, that these collectively constitute the digital universe, and we can only see it in human terms and what does this do for us?
We’re missing a tremendous opportunity. We’re asleep at the switch because it’s not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that’s not just a metaphor for something else. It actually is. It’s a physical reality.
We’re still here at the big bang of this thing, and we’re not studying it enough. Who’s the cosmologist really looking at this in terms of what it might become in 10,000 years? What’s it going to be in 100 years? Here we are at the very beginning and we just may simply not be asking the right questions about what’s going on. Try looking at it from the other side, not from our side as human beings. Scientists are the people who can do that kind of thing. You can look at viruses from the point of view of a virus, not from the point of view of someone getting sick.
Very few people are looking at this digital universe in an objective way. Danny Hillis is one of the few people who is. His comment, made exactly 30 years ago in 1982, was that “memory locations are just wires turned sideways in time”. That’s just so profound. That should be engraved on the wall. Because we don’t realize that there is this very different universe that does not have the same physics as our universe. It’s completely different physics. Yet, from the perspective of that universe, there is physics, and we have almost no physicists looking at it, as to what it’s like. And if we want to understand the sort of organisms that would evolve in that totally different universe, you have to understand the physics of the world in which they are in. It’s like looking for life on another planet. Danny has that perspective. Most people say just, “well, a wire is a wire. It’s not a memory location turned sideways in time.” You have to have that sort of relativistic view of things.
We are still so close to the beginning of this explosion that we are still immersed in the initial fireball. Yet, in that short period of time, for instance, it was not long ago that to transfer money electronically you had to fill out paper forms on both ends and then wait a day for your money to be transferred. And now, a very few years later, a dozen years or so, most of the money in the world is moving electronically all the time.
The best example of this is what we call the flash crash of May 6th, two years ago, when suddenly, the whole system started behaving unpredictably. Large amounts of money were lost in milliseconds, and then the money came back, and we quietly (although the SEC held an investigation) swept it under the rug and just said, “well, it recovered. Things are okay.” But nobody knows what happened, or most of us don’t know.
There was a great Dutch documentary—Money and Speed: Inside the Black Box—where they spoke to someone named Eric Scott Hunsader who actually had captured the data on a much finer time scale, and there was all sorts of very interesting stuff going on. But it’s happening so quickly that it’s below what our normal trading programs are able to observe; they just aren’t accounting for those very fast things. And this could be happening all around us—not just in the world of finance. We would not necessarily even perceive it, that there’s a whole world of communication that’s not human communication. It’s machines communicating with machines. And they may be communicating money, or information that has other meaning—but if it is money, we eventually notice it. It’s just the small warm pond sitting there waiting for the spark.
It’s an unbelievably interesting time to be a digital biologist or a digital physicist, or a digital chemist. A good metaphor is chemistry. We’re starting to address code by template, rather than by numerical location—the way biological molecules do.
We’re living in a completely different world. The flash crash was an example: you could have gone out for a cup of coffee and missed the whole thing, and come back and your company lost a billion dollars and got back 999 million, while you were taking your lunch break. It just happened so fast, and it spread so quickly.
So, yes, the fear scenario is there, that some malevolent digital virus could bring down the financial system. But on the other hand, the miracle of this flash crash was not that it happened, but that it recovered so quickly. Yet, in those milliseconds, somebody made off with a lot of money. We still don’t know who that was, and maybe we don’t want to know.
The reason we’re here today (surrounded by this expanding digital universe) is because in 1936, or 1935, this oddball 23-year-old undergraduate student, Alan Turing, developed this theoretical framework to understand a problem in mathematical logic, and the way he solved that problem turned out to establish the model for all this computation. And I believe we would have arrived here, sooner or later, without Alan Turing or John von Neumann, but it was Turing who developed the one-dimensional model, and von Neumann who developed the two-dimensional implementation, for this increasingly three-dimensional digital universe in which everything we do is immersed. And so, the next breakthrough in understanding will also I think come from some oddball. It won’t be one of our great, known scientists. It’ll be some 22-year-old kid somewhere who makes more sense of this.
But, we’re going back to biology, and of course, it’s impossible not to talk about money, and all these other ways that this impacts our life as human beings. What I was trying to say is that this digital universe really is so different that the physics itself is different. If you want to understand what types of life-like or self-reproducing forms would develop in a universe like that, you actually want to look at the sort of physics and chemistry of how that universe is completely different from ours. An example is how not just the time scale but the way time itself operates is completely different, so that things can be going on in that world in microseconds that suddenly have a real effect on ours.
Again, money is a very good example, because money really is a sort of a gentlemen’s agreement to agree on where the money is at a given time. Banks decide, well, this money is here today and it’s there tomorrow. And when it’s being moved around in microseconds, you can have a collapse, where suddenly you hit the bell and you don’t know where the money is. And then everybody’s saying, “Where’s the money? What happened to it?” And I think that’s what happened. And there are other recent cases where it looks like a huge amount of money just suddenly disappeared, because we lost the common agreement on where it is at an exact point in time. We can’t account for those time periods as accurately as the computers can.
One number that’s interesting, and easy to remember, was in the year 1953, there were 53 kilobytes of high-speed memory on planet earth. This is random access high-speed memory. Now you can buy those 53 kilobytes for an immeasurably small amount, a thousandth of one cent or something. If you draw the graph, it’s a very nice, clean graph. That’s sort of Moore’s Law: it’s doubling. It has a doubling time that’s surprisingly short, and no end in sight, no matter what the technology does. We’re doubling the number of bits in an extraordinarily short time.
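The arithmetic behind that clean graph can be sketched in a few lines. The 1953 figure is from the text; the present-day total is a made-up round number purely for illustration, not a measured quantity:

```python
import math

# Starting point from the text: 53 kilobytes of high-speed
# random-access memory on planet earth in 1953.
start_year, start_bits = 1953, 53 * 1024 * 8

# Hypothetical round figure for illustration only: suppose there
# were on the order of 10**21 bits of RAM in the world by 2012.
end_year, end_bits = 2012, 1e21

# Exponential growth: bits(t) = start_bits * 2**((t - start_year) / T).
# Solve for the doubling time T.
doublings = math.log2(end_bits / start_bits)
doubling_time = (end_year - start_year) / doublings

print(f"{doublings:.1f} doublings, one every {doubling_time:.2f} years")
```

Under those assumed numbers the implied doubling time comes out close to a year, which is the sense in which the graph is "surprisingly short, with no end in sight."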
And we have never seen that. Or I mean, we have seen numbers like that, in epidemics or chain reactions, and there’s no question it’s a very interesting phenomenon. But still, it’s very hard not to just look at it from our point of view. What does it mean to us? What does it mean to my investments? What does it mean to my ability to have all the music I want on my iPhone? That kind of thing. But there’s something else going on. We’re seeing a fraction of one percent of it, and there’s this other 99.99 percent that people just aren’t looking at.
The beginning of this was driven by two problems. The problem of nuclear weapons design, and the problem of code breaking were the two drivers of the dawn of this computational universe. There were others, but those were the main ones.
What’s the driver today? You want one word? It’s advertising. And, you may think advertising is very trivial, and of no real importance, but I think it’s the driver. If you look at what most of these codes are doing, they’re trying to get the audience, trying to deliver the audience. The money is flowing as advertising.
And it is interesting that Samuel Butler imagined all this in 1863, and then in his book Erewhon. And then in 1901, before he died, he wrote a draft for Erewhon Revisited. In there, he called out advertising, saying that advertising would be the driving force of these machines evolving and taking over the world. Even then, at the close of nineteenth-century England, he saw advertising as the way we would grant power to the machines.
If you had to say what’s the most powerful algorithm set loose on planet earth right now? Originally, yes, it was the Monte Carlo code for doing neutron calculations. Now it’s probably the AdWords algorithm. And the two are related: if you look at the way AdWords works, it is a Monte Carlo process. It’s a sort of statistical sampling of the entire search space, and a monetizing of it, which, as we know, is a brilliant piece of work. And that’s not to diminish all the other great codes out there.
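For readers unfamiliar with the term, a Monte Carlo process at its simplest means estimating something by random sampling rather than by exact calculation. A toy sketch (estimating pi, not a neutron or ad-auction computation) shows the idea:

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by statistically sampling the unit square:
    the fraction of random points that land inside the quarter
    circle approaches pi/4 as the number of samples grows."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(monte_carlo_pi(100_000))
```

The same pattern, sampling a huge space you cannot enumerate and letting the statistics converge, is what connects the neutron codes of the 1940s to modern large-scale ad-serving systems.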
We live in a world where we measure numbers of computers in billions, and numbers of what we call servers, which are the equivalent of what, in the old days, would be called mainframes. Those are in the millions, hundreds of millions.
Two of the pioneers of this—to single out only two pioneers—were John von Neumann and Alan Turing. If they were here today Turing would be 100. Von Neumann would be 109. I think they would understand what’s going on immediately—it would take them a few minutes, if not a day, to figure out what was going on. And, they both died working on biology, and I think they would be immediately fascinated by the way biological code and digital code are now intertwined. Von Neumann’s consuming passion at the end was self-reproducing automata. And Alan Turing was interested in the question of how molecules could self-organize to produce organisms.
They would be, on the other hand, astonished that we’re still running their machines, that we don’t have different computers. We’re still just running a straight von Neumann/Turing machine with no real modification. So they might not find our computers all that interesting, but they would be diving into the architecture of the Internet, and looking at it.
In both cases, they would be amazed by the direct connection between the code running on computers and the code running in biology—that all these biotech companies are directly reading and writing nucleotide sequences in and out of electronic memory, with almost no human intervention. That’s more or less completely mechanized now, so there’s direct translation, and once you translate to nucleotides, it’s a small step, a difficult step, but, an inevitable step to translate directly to proteins. And that’s Craig Venter’s world, and it’s a very, very different world when we get there.
The question of how and when humans are going to expand into the universe, the space travel question, is, in my view, almost rendered obsolete by this growth of a digitally-coded biology, because those digital organisms—maybe they don’t exist now, but as long as the system keeps going, they’re inevitable—can travel at the speed of light. They can propagate. They’re going to be so immeasurably far ahead that maybe humans will be dragged along with it.
But while our digital footprint is propagating at the speed of light, we’re having very big trouble even getting to the eleven kilometers per second it takes to get into low Earth orbit. The digital world is clearly winning on that front. And that’s for the distant future. But it changes the game of launching things, if you no longer have to launch physical objects, in order to transmit life.”
— George Dyson, author and historian of technology whose publications broadly cover the evolution of technology in relation to the physical environment and the direction of society, “A Universe of Self-Replicating Code,” Edge, Mar 26, 2012.
“Plato was the one who made the divide between the world of ideas and the world of the senses explicit. In his famous Allegory of the Cave, he imagined a group of prisoners who had been chained in a cave all their lives; all they could see were shadows projected on a wall, which they conceived as their reality. Unbeknownst to them, a fire behind them illuminated objects and created the shadows they saw, which could be manipulated to deceive them. In contrast, the philosopher could see reality as it truly is, a manifestation of ideas freed from the deception of the senses. In other words, if we want to understand the true nature of reality, we shouldn’t rely on our senses; only ideas are truly pure, freed from the distortions caused by our limited perception of reality.
Plato thus elevated the human mind to a god-like status, given that it can find truth through reason, in particular through the rational construction of ideal “Forms,” which are the essence of all objects we see in reality. For example, all tables share the Form of “tableness,” even if every table is different. The Form is an ideal and, thus, a blueprint of perfection. If I ask you to imagine a circle, the image of a circle you hold in your head is the only perfect circle: any representation of that circle, on paper or on a blackboard, will be imperfect. To Plato, intelligence was the ability to grasp the world of Forms and thus come closer to truth.
Due to its connection with the search for truth, it’s no surprise that Plato’s ideas influenced both scientists and theologians. If the world is made out of Forms, say geometrical forms, reality may be described mathematically, combining the essential forms and their relations to describe the change we see in the world. Thus, by focusing on the essential elements of reality as mathematical objects and their relations we could, perhaps, grasp the ultimate nature of reality and so come closer to timeless truths.
The notion that mathematics is a portal to final truths holds tremendous intellectual appeal and has influenced some of the greatest names in the history of science, from Copernicus, Kepler, Newton, and Einstein to many present-day physicists searching for a final theory of nature based upon a geometrical scaffolding, such as superstring theories. (…)
Taken in context, we can see where modern scientific ideas that relate the ultimate nature of reality to geometry come from. If it’s not God the Geometer anymore, Man the Geometer persists. That this vision offers a major drive to human creativity is undeniable.
We do imagine the universe in our minds, with our minds, and many scientific successes are a byproduct of this vision. Perhaps we should take Nicholas of Cusa’s advice to heart and remember that whatever we achieve with our minds will be an expression of our own creativity, having little or nothing to do with ultimate truths.”
Zoom from the edge of the universe to the quantum foam of spacetime and learn the scale of things.
The Powers of Ten (1977)
“Powers of Ten takes us on an adventure in magnitudes. Starting at a picnic by the lakeside in Chicago, this famous film transports us to the outer edges of the universe. Every ten seconds we view the starting point from ten times farther out until our own galaxy is visible only as a speck of light among many others. Returning to Earth with breathtaking speed, we move inward, into the hand of the sleeping picnicker, with ten times more magnification every ten seconds. Our journey ends inside a proton of a carbon atom within a DNA molecule in a white blood cell.”
What Happened Before the Big Bang? The New Philosophy of Cosmology
Tim Maudlin: “There are problems that are fairly specific to cosmology. Standard cosmology, or what was considered standard cosmology twenty years ago, led people to conclude that the universe that we see around us began in a big bang, or put another way, in some very hot, very dense state. And if you think about the characteristics of that state, in order to explain the evolution of the universe, that state had to be a very low entropy state, and there’s a line of thought that says that anything that is very low entropy is in some sense very improbable or unlikely. And if you carry that line of thought forward, you then say “Well gee, you’re telling me the universe began in some extremely unlikely or improbable state” and you wonder is there any explanation for that. Is there any principle that you can use to account for the big bang state?
This question of accounting for what we call the “big bang state” — the search for a physical explanation of it — is probably the most important question within the philosophy of cosmology, and there are a couple different lines of thought about it. One that’s becoming more and more prevalent in the physics community is the idea that the big bang state itself arose out of some previous condition, and that therefore there might be an explanation of it in terms of the previously existing dynamics by which it came about. There are other ideas, for instance that maybe there might be special sorts of laws, or special sorts of explanatory principles, that would apply uniquely to the initial state of the universe.
One common strategy for thinking about this is to suggest that what we used to call the whole universe is just a small part of everything there is, and that we live in a kind of bubble universe, a small region of something much larger. And the beginning of this region, what we call the big bang, came about by some physical process, from something before it, and that we happen to find ourselves in this region because this is a region that can support life. The idea being that there are lots of these bubble universes, maybe an infinite number of bubble universes, all very different from one another. Part of the explanation, what’s called the anthropic principle, says, “Well now, if that’s the case, we as living beings will certainly find ourselves in one of those bubbles that happens to support living beings.” That gives you a kind of account for why the universe we see around us has certain properties. (…)
Newton would call what he was doing natural philosophy, that’s actually the name of his book: “Mathematical Principles of Natural Philosophy.” Philosophy, traditionally, is what everybody thought they were doing. It’s what Aristotle thought he was doing when he wrote his book called Physics. So it’s not as if there’s this big gap between physical inquiry and philosophical inquiry. They’re both interested in the world on a very general scale, and the community that works on the foundations of physics is about equally divided between people who live in philosophy departments, people who live in physics departments, and people who live in mathematics departments.
Q: In May of last year Stephen Hawking gave a talk for Google in which he said that philosophy was dead, and that it was dead because it had failed to keep up with science, and in particular physics. Is he wrong or is he describing a failure of philosophy that your project hopes to address?
Maudlin: Hawking is a brilliant man, but he’s not an expert in what’s going on in philosophy, evidently. Over the past thirty years the philosophy of physics has become seamlessly integrated with the foundations of physics work done by actual physicists, so the situation is actually the exact opposite of what he describes. I think he just doesn’t know what he’s talking about. I mean there’s no reason why he should. Why should he spend a lot of time reading the philosophy of physics? I’m sure it’s very difficult for him to do. But I think he’s just … uninformed. (…)
Q: Do you think that physics has neglected some of these foundational questions as it has become, increasingly, a kind of engine for the applied sciences, focusing on the manipulation, rather than say, the explanation, of the physical world?
Maudlin: Look, physics has definitely avoided what were traditionally considered to be foundational physical questions, but the reason for that goes back to the foundation of quantum mechanics. The problem is that quantum mechanics was developed as a mathematical tool. Physicists understood how to use it as a tool for making predictions, but without an agreement or understanding about what it was telling us about the physical world. And that’s very clear when you look at any of the foundational discussions. This is what Einstein was upset about; this is what Schrödinger was upset about.
Quantum mechanics was merely a calculational technique that was not well understood as a physical theory. Bohr and Heisenberg tried to argue that asking for a clear physical theory was something you shouldn’t do anymore. That it was something outmoded. And they were wrong, Bohr and Heisenberg were wrong about that. But the effect of it was to shut down perfectly legitimate physics questions within the physics community for about half a century. And now we’re coming out of that, fortunately.
Q And what’s driving the renaissance?
Maudlin: Well, the questions never went away. There were always people who were willing to ask them. Probably the greatest physicist in the last half of the twentieth century, who pressed very hard on these questions, was John Stewart Bell. So you can’t suppress it forever, it will always bubble up. It came back because people became less and less willing to simply say, “Well, Bohr told us not to ask those questions,” which is sort of a ridiculous thing to say.
Q: Are the topics that have scientists completely flustered especially fertile ground for philosophers? For example I’ve been doing a ton of research for a piece about the James Webb Space Telescope, the successor to the Hubble Space Telescope, and none of the astronomers I’ve talked to seem to have a clue as to how to use it to solve the mystery of dark energy. Is there, or will there be, a philosophy of dark energy in the same way that a body of philosophy seems to have flowered around the mysteries of quantum mechanics?
Maudlin: There will be. There can be a philosophy of anything really, but it’s perhaps not as fancy as you’re making it out. The basic philosophical question, going back to Plato, is “What is x?” What is virtue? What is justice? What is matter? What is time? You can ask that about dark energy - what is it? And it’s a perfectly good question.
There are different ways of thinking about the phenomena which we attribute to dark energy. Some ways of thinking about it say that what you’re really doing is adjusting the laws of nature themselves. Some other ways of thinking about it suggest that you’ve discovered a component or constituent of nature that we need to understand better, and seek the source of. So, the question — What is this thing fundamentally? — is a philosophical question, and is a fundamental physical question, and will lead to interesting avenues of inquiry.
Q: One example of philosophy of cosmology that seems to have trickled out to the layman is the idea of fine tuning - the notion that in the set of all possible physics, the subset that permits the evolution of life is very small, and that from this it is possible to conclude that the universe is either one of a large number of universes, a multiverse, or that perhaps some agent has fine tuned the universe with the expectation that it generate life. Do you expect that idea to have staying power, and if not what are some of the compelling arguments against it?
Maudlin: A lot of attention has been given to the fine tuning argument. Let me just say first of all, that the fine tuning argument as you state it, which is a perfectly correct statement of it, depends upon making judgments about the likelihood, or probability of something. Like, “how likely is it that the mass of the electron would be related to the mass of the proton in a certain way?” Now, one can first be a little puzzled by what you mean by “how likely” or “probable” something like that is. You can ask how likely it is that I’ll roll double sixes when I throw dice, but we understand the way you get a handle on the use of probabilities in that instance. It’s not as clear how you even make judgments like that about the likelihood of the various constants of nature (and so on) that are usually referred to in the fine tuning argument.
Now let me say one more thing about fine tuning. I talk to physicists a lot, and none of the physicists I talk to want to rely on the fine tuning argument to argue for a cosmology that has lots of bubble universes, or lots of worlds. What they want to argue is that this arises naturally from an analysis of the fundamental physics, that the fundamental physics, quite apart from any cosmological considerations, will give you a mechanism by which these worlds will be produced, and a mechanism by which different worlds will have different constants, or different laws, and so on. If that’s true, then if there are enough of these worlds, it will be likely that some of them have the right combination of constants to permit life. But their arguments tend not to be “we have to believe in these many worlds to solve the fine tuning problem,” they tend to be “these many worlds are generated by physics we have other reasons for believing in.”
If we give up on that, and it turns out there aren’t these many worlds, that physics is unable to generate them, then it’s not that the only option is that there was some intelligent designer. It would be a terrible mistake to think that those are the only two ways things could go. You would have to again think hard about what you mean by probability, and about what sorts of explanations there might be. Part of the problem is that right now there are just way too many freely adjustable parameters in physics. Everybody agrees about that. There seem to be many things we call constants of nature that you could imagine setting at different values, and most physicists think there shouldn’t be that many, that many of them are related to one another.
Physicists think that at the end of the day there should be one complete equation to describe all physics, because any two physical systems interact and physics has to tell them what to do. And physicists generally like to have only a few constants, or parameters of nature. This is what Einstein meant when he famously said he wanted to understand what kind of choices God had (using his metaphor), how free his choices were in creating the universe, which is just asking how many freely adjustable parameters there are. Physicists tend to prefer theories that reduce that number, and as you reduce it, the problem of fine tuning tends to go away. But, again, this is just stuff we don’t understand well enough yet.
Q: I know that the nature of time is considered to be an especially tricky problem for physics, one that physicists seem prepared, or even eager, to hand over to philosophers. Why is that?
Maudlin: That’s a very interesting question, and we could have a long conversation about that. I’m not sure it’s accurate to say that physicists want to hand time over to philosophers. Some physicists are very adamant about wanting to say things about it; Sean Carroll for example is very adamant about saying that time is real. You have others saying that time is just an illusion, that there isn’t really a direction of time, and so forth. I myself think that all of the reasons that lead people to say things like that have very little merit, and that people have just been misled, largely by mistaking the mathematics they use to describe reality for reality itself. If you think that mathematical objects are not in time, and mathematical objects don’t change — which is perfectly true — and then you’re always using mathematical objects to describe the world, you could easily fall into the idea that the world itself doesn’t change, because your representations of it don’t.
There are other, technical reasons that people have thought that you don’t need a direction of time, or that physics doesn’t postulate a direction of time. My own view is that none of those arguments are very good. To the question as to why a physicist would want to hand time over to philosophers, the answer would be that physicists for almost a hundred years have been dissuaded from trying to think about fundamental questions. I think most physicists would quite rightly say “I don’t have the tools to answer a question like ‘what is time?’ - I have the tools to solve a differential equation.” The asking of fundamental physical questions is just not part of the training of a physicist anymore.
Q: I recently came across a paper about Fermi’s Paradox and Self-Replicating Probes, and while it had kind of a science fiction tone to it, it occurred to me as I was reading it that philosophers might be uniquely suited to speculating about, or at least evaluating the probabilistic arguments for the existence of life elsewhere in the universe. Do you expect philosophers of cosmology to enter into those debates, or will the discipline confine itself to issues that emerge directly from physics?
Maudlin: This is really a physical question. If you think of life, of intelligent life, it is, among other things, a physical phenomenon — it occurs when the physical conditions are right. And so the question of how likely it is that life will emerge, and how frequently it will emerge, does connect up to physics, and does connect up to cosmology, because when you’re asking how likely it is that somewhere there’s life, you’re talking about the broad scope of the physical universe. And philosophers do tend to be pretty well schooled in certain kinds of probabilistic analysis, and so it may come up. I wouldn’t rule it in or rule it out.
I will make one comment about these kinds of arguments which seems to me to somehow have eluded everyone. When people make these probabilistic equations, like the Drake Equation, which you’re familiar with — they introduce variables for the frequency of earth-like planets, for the evolution of life on those planets, and so on. The question remains as to how often, after life evolves, you’ll have intelligent life capable of making technology.
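The Drake equation Maudlin refers to multiplies a chain of factors to estimate the number of communicating civilizations. A minimal sketch follows; every parameter value is a hypothetical placeholder, and Maudlin's point is precisely that several of them, especially the intelligence factor, are not constrained by any empirical data:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L,
    the expected number of communicating civilizations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# All values below are illustrative guesses, not measurements.
N = drake(R_star=1.0,   # star formation rate per year
          f_p=0.5,      # fraction of stars with planets
          n_e=2.0,      # habitable planets per such system
          f_l=0.5,      # fraction where life evolves
          f_i=1e-4,     # fraction evolving technological intelligence
          f_c=0.1,      # fraction that develop communication
          L=10_000)     # years a civilization broadcasts
print(N)
```

Because the factors multiply, a single poorly known term like f_i can swing the result by many orders of magnitude, which is what makes the equation more a framework for the argument than a prediction.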
What people haven’t seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It’s not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that’s not true.
Obviously it doesn’t matter that much, if you’re a beetle, that you be really smart. If it did, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there’s a high probability that evolution on another planet would lead to technological intelligence. There is just too much we don’t know.”
Raphael Bousso: Thinking About the Universe on the Larger Scales
“The far-reaching questions are things like how do we unify all the laws of nature, how do you do quantum gravity, how do you understand how gravitation and quantum mechanics fit together, how does that fit in with all the other matter and forces that we know? That’s a really far-reaching and important question.
Another far-reaching question is “what does the world look like on the largest scales?” What does the universe look like on the largest scales? How special is the part of the universe that we see? Are there other possibilities? Those questions are connected with each other, but in order to try to answer them, we have to try to come up with specific models, with specific ways to think about these questions, with ways to break them down into pieces, and of course, most importantly, with ways to relate them to observation and experiment.
One important hint that came along on the theoretical side a long time ago was string theory, which wasn’t invented for these sorts of deep-sounding questions. It was invented to understand something about the strong force, but then it took on a life of its own and became this amazing structure that could be explored and which started spitting out answers to questions that you hadn’t even thought of asking yet, such as quantum gravity. It started doing quantum gravity for you. (…)
Another hint that helps us break things up and bring the questions down to accessible levels is, of course, observational: what do we see when we look out the window? The one thing we see that’s really remarkable is remarkable in the way that the question of why the sky is not bright at night is remarkable. It sounds stupid, but when you really think about it, it’s a profound question, and it needs an explanation: why isn’t there a star everywhere you look? A similar kind of question is: “Why is the universe so large?” It’s actually extremely remarkable that the universe is so large, from the viewpoint of fundamental physics. A lot of amazing things have to happen for the universe to not be incredibly small, and I can go into that.
One of the things that has to happen is that the energy of empty space has to be very, very small for the universe to be large, and in fact, just by looking out the window and seeing that you can see a few miles out, it’s an experiment that already tells you that the energy of empty space is a ridiculously small number, 0.000 and then dozens of zeros and then a 1. Just by looking out the window you learn that.
The funny thing is that when you calculate what the energy of empty space should be using the theories you have available, really well-tested stuff that has been tested in accelerators, like particle theory, the standard model, things that we know work, you can’t calculate it exactly on the dot. But you can calculate the size of the different contributions, and they’re absolutely huge. They should be much larger than what you already know the total can possibly be, not just by a factor of 10 or 100, but by a factor of billions of billions of billions of billions.
This requires an explanation. It’s only one of the things that has to go right for the universe to become as large as we see it, but it is one of the most mysterious properties to have turned out right, and it needs an explanation.
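The size of this mismatch can be sketched with a back-of-the-envelope estimate (this is an illustration, not the speaker’s own calculation): take the Planck-scale energy density as the natural size of quantum-field-theory contributions to the vacuum energy, and compare it with the observed dark-energy density of roughly 6×10⁻¹⁰ J/m³.

```python
import math

# Physical constants (SI units)
hbar = 1.055e-34   # reduced Planck constant, J*s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s

# Planck energy density: the naive size of quantum-field-theory
# contributions to the vacuum energy, rho_P = c^7 / (hbar * G^2)
rho_planck = c**7 / (hbar * G**2)   # J/m^3

# Observed dark-energy density, roughly 6e-10 J/m^3
rho_observed = 6e-10                # J/m^3

discrepancy = rho_planck / rho_observed
print(f"naive estimate : {rho_planck:.2e} J/m^3")
print(f"observed value : {rho_observed:.2e} J/m^3")
print(f"mismatch       : ~10^{math.log10(discrepancy):.0f}")
```

The mismatch comes out around 120-odd orders of magnitude, which is the “factor of billions of billions of billions of billions” referred to above.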
Funnily enough, because we knew that that number, the energy of empty space, the weight of empty space, had to be so small, it became the lore within at least a large part of the physics community that it was probably zero for some unknown reason, and that one day we’d wake up and discover why it’s exactly zero. But instead, one day in ‘98 we woke up and discovered that cosmologists had done experiments that looked at how fast the universe has been expanding at different stages of its life, and that its expansion had started to accelerate. We used to think that the universe would explode at the Big Bang and then get slower and slower, with galaxies expanding away from each other ever more sluggishly. Instead, it’s as if someone took their foot off the brakes and stepped on the gas pedal a few billion years ago; the universe is accelerating. That’s exactly what a universe does if the energy of empty space is non-zero and positive, and you can look at how fast the acceleration is happening and deduce the actual value of this number. In the last 13 years a lot of independent observations have come together to corroborate this conclusion.
It’s still true that the main thing that we needed to explain is why the cosmological constant, or the energy of empty space, isn’t huge. But now we also know that the explanation was definitely not going to be that for some symmetry reason that number is exactly zero. And so we needed an explanation that would tell us why that number is not huge, but also not exactly zero.
The amazing thing is that string theory, which wasn’t invented for this purpose, managed to provide such an explanation, and in my mind this is the first serious contact between observation, experiment on the one side, and string theory on the other. It was always interesting to have a consistent theory of quantum gravity, it’s very hard to write that down in the first place, but it turned out that string theory has exactly the kind of ingredients that make it possible to explain why the energy of empty space has this bizarre, very small, but non-zero value.
I thought I was going to become a mathematician, and then decided to study physics instead, at the last minute, because I realized that I actually cared about understanding Nature, and not just some abstract, perhaps beautiful, but abstract construct. I went to Cambridge, the one in England, for my PhD. I worked with Stephen Hawking on questions of quantum properties of black holes, and how they might interplay with early universe cosmology. (…)
Another topic that I started thinking about was trying to understand the small but non-zero value of the cosmological constant, energy of empty space, or as people like to call it, dark energy. I worked on that subject with Joe Polchinski, at KITP, in Santa Barbara, and we realized that string theory offers a way of understanding this, and I would argue that that is the leading explanation currently of this mysterious problem. (…)
I don’t do experiments in the sense that I would walk into a lab and start connecting wires to something. But it matters tremendously to me that the theory that I work on is supposed to actually explain something about Nature. The problem is that the more highly developed physics becomes, the more we find ourselves asking questions which, for technological reasons, are not in the realm of day-to-day experimental feedback. We can’t ask about quantum gravity and expect at the same time to get some analog of the spectroscopic data that in the late 19th century fed the quest for quantum mechanics. And I think it is a perfectly reasonable reaction to say, “Well, in that case I think that the subject is too risky to work on.” But I think it’s also a reasonable reaction to say, “Well, but the question is obviously a sensible one.” It’s clearly important to understand how to reconcile quantum mechanics and general relativity. They’re both great theories, but they totally contradict each other, and there are many reasons to believe that by understanding how they fit together we will learn very profound things about how Nature works. Now, it could be that we are not smart enough to do this, particularly without constant feedback from experiment, but we could have been pessimistic at so many junctures in the past, and we always found a way around.
I don’t think that we’re going to understand a lot about quantum gravity by building more particle accelerators. We’ll understand a lot of other things, even a few things about quantum gravity, but rather indirectly. Instead, we’ll look elsewhere: we’ll look at cosmological experiments, we’ll use the universe to tell us about very high energies. We’ll come up with ideas that we can’t even dream about right now. I’m always in awe of the inventiveness of my experimental colleagues, and I don’t doubt that they will deliver for us eventually.
It has been said that the last 15 years or so have been a golden age for cosmology, and it’s true. I was very lucky with timing. When I was a graduate student, the COBE satellite was launched and started returning data, and that really marked the beginning of an era in which cosmology was no longer the sort of subject with maybe one or two numbers to measure, each with large uncertainties; people couldn’t even agree on how fast galaxies are moving away from each other.
And from this, we moved to a data-rich age in which you have unbelievably detailed information about how matter is distributed in the universe, and not just how fast the universe is expanding right now, but its whole expansion history, how fast it was expanding at earlier times, and so on. Things were measured that seemed out of reach just a few years earlier, and so it is indeed no longer possible to look down on cosmology as a hand-waving subject where you can say almost anything and never be in conflict with the data. In fact, a lot of theories have gone down the road of being eliminated by data in the past 15 years or so, and several more are probably going to go down that road pretty soon. (…)
Inflation looks really good. It’s not that we have a smoking-gun confirmation of it, but it has passed so many tests, and could have been ruled out quite a few times by now, that I would say it is looking really interesting right now.
Inflation comes in many detailed varieties, but it does make a number of rather generic predictions, and unless you work very hard to avoid them, they come with pretty much every inflation model you grab off the shelf. One of those predictions is that the spatial geometry of the universe should be flat: the kind of geometry you learn about in high school, as opposed to the weird kinds of geometry that mathematicians study at university. That has turned out to be the case; to within one percent precision, we now know that the universe is spatially flat. Inflation also predicts a particular pattern of perturbations in the sky, and again, to the extent that we have data, and we have very precise data by now, there was plenty of opportunity to rule out that prediction, but inflation still stands. So there are a number of general predictions that inflation makes which have held up very well, but we’re not yet at a point where we can say it’s this particular make and model of inflation that is the right one, and not this other one. We’re zooming in. Some types of inflation have been ruled out, large classes of models have been ruled out, but we haven’t zoomed in on the one right answer, and I would expect that might still take a while.
I was saying that string theory has in a way surprised us by being able to solve a problem that other theories, including some that were invented for that purpose alone, had not been able to address: the problem of why empty space weighs so little, why there is so little dark energy. The way that string theory does this is very similar to the way we can explain the enormous variety that we see when we look at the chair, the table, and the sofa in this room. What are these things?
They’re built from a few basic ingredients: electrons, quarks, and photons. You’ve got five different particles, you put them together, and now you’ve got lots and lots of these particles. There are very few fundamental ingredients, but you have many copies of them. You have many quarks, you have many electrons, and when you put them together you have a huge number of possibilities for what you can make. It’s just like a big box of Legos: there are lots of different things you can build out of it. With a big box of quarks and electrons you can build a table if you want, or you can build a chair if you want. It’s your choice. And strictly speaking, if I take one atom and move it over here to a slightly different place on this chair, I’ve built a different object. In technical lingo, these objects would be called solutions of a certain theory, the standard model. If I have a block of iron and I move an atom over there, it’s a different solution of the standard model.
The fact that there are innumerably many different solutions of the standard model does not, of course, mean that the standard model of particle physics (this triumph of human thinking) is somehow unbelievably complicated, or that it’s a “theory of anything” with no predictive power; it just means that it is rich enough to accommodate the rich phenomenology we actually see in nature while starting from a very simple setup. There are only certain quarks. There is only one kind of electron. There are only certain ways you can put them together, and you cannot make arbitrary materials with them. There are statistical laws that govern how very large numbers of atoms behave, so even though things look like they get incredibly complicated, they actually start simplifying again when you get to really large numbers.
In string theory we’re doing a different kind of building of iron blocks. String theory is a theory that wants to live in ten dimensions, nine spatial dimensions and one time. We live in three spatial dimensions and one time, or at least so it seems to us. And this used to be viewed as a little bit of an embarrassment for string theory, not fatal, because it’s actually fairly easy to imagine how some of those spatial dimensions could be curled up into circles so small that they wouldn’t be visible even under our best microscopes. But it might have seemed nicer if the theory had just matched up directly with observation.
It matches up with observation very nicely when you start realizing that there are many different ways to curl up the six unwanted dimensions. How do you curl them up? Well, it’s not like they just bend themselves into some random shape. They get shaped into a small bunch of circles, whatever shape they want to take, depending on what matter there is around.
Similarly to how the shape of your Lego car depends on how you put the pieces together, and the shape of this chair depends on how you put its atoms together, the shape of the extra dimensions depends on how you put certain fundamental string theory objects together. Now, string theory is actually even more rigid than the Lego Company or the standard model about what kinds of fundamental ingredients it allows you to play with. It allows you to play with fluxes, D-branes, and strings, and these are objects that we didn’t put into the theory; the theory gives them to us and says, “This is what you get to play with.” Depending on how you wrap these strings, D-branes, and fluxes in the extra six dimensions, those six dimensions take on a different shape. In effect, this means that there are many different ways of making a three-dimensional world: just as there are many ways of building a block of iron or a Lego car, there are many different ways of making a three-plus-one-dimensional-seeming world.
Of course, none of these worlds are truly three-plus-one dimensional. If you could build a strong enough accelerator, you could see all these extra dimensions. If you could build an even better accelerator, you might be able to even manipulate them and make a different three-plus-one dimensional world in your lab. But naturally you would expect that this happens at energy scales that are currently and probably for a long time inaccessible to us. But you have to take into account the fact that string theory has this enormous richness in how many different three-plus-one dimensional worlds it can make.
Joe Polchinski and I did an estimate, and we figured that there should be not millions or billions of different ways of making a three-plus-one dimensional world, but ten to the hundreds, maybe ten to the five hundred different ways of doing this. This is interesting for a number of reasons, but the reason that seemed most important to us is that it implies that string theory can help us understand why the energy of the vacuum is so small. Because, after all, what we call “the vacuum” is simply a particular three-plus-one dimensional world as it looks when it’s empty. And even when it’s empty, it still carries all the effects of the stuff you have in the extra dimensions, all the choices you made there about what to put in.
For every three-plus-one dimensional world, you expect the energy of the vacuum, the amount of dark energy, or the cosmological constant, to be different. And so if you have ten to the five hundred ways of making a three-plus-one dimensional world, then in some of them, just by accident, the energy of the vacuum is going to be incredibly close to zero.
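The counting behind this can be sketched numerically. As a toy model (not the actual string-theory computation), suppose each of the ~10⁵⁰⁰ vacua gets a vacuum energy drawn roughly uniformly from [-1, 1] in Planck units; the expected number landing within the observed ~10⁻¹²³ of zero is then still enormous. Because 10⁵⁰⁰ overflows any float, the sketch works in base-10 exponents:

```python
# Hypothetical landscape size (the ten-to-the-five-hundred estimate above)
log_num_vacua = 500        # N = 10^500 vacua

# Toy assumption: each vacuum's energy is an independent, roughly uniform
# draw on [-1, 1] in Planck units, so the fraction landing within epsilon
# of zero is ~epsilon.
log_epsilon = -123         # |Lambda| < 10^-123, the observed magnitude

# Expected number of vacua with an accidentally tiny cosmological constant
log_expected = log_num_vacua + log_epsilon
print(f"~10^{log_expected} vacua with |Lambda| < 10^{log_epsilon}")
```

Even after demanding a 123-decimal-place accident, roughly 10³⁷⁷ candidate vacua survive in this toy count, which is why the rarity of a small cosmological constant is no obstacle for the landscape.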
The other thing that is going to happen is that in about half of these three-plus-one dimensional worlds, the vacuum is going to have positive energy. So even if you don’t start the universe out in the right one, where by “right one” I mean the one that later develops beings like us to observe it, you could start it out in a pretty much random state, some other way of making a three-dimensional world. What would happen is that it would grow very fast, because positive vacuum energy drives acceleration, as we observe today in the sky. It would grow very fast, and then by quantum mechanical processes it would decay, and you would see changes in the way that matter is put into the extra dimensions, and locally you would have different three-plus-one dimensional worlds appearing. (…)
What happens is the universe gets very, very large; all these different vacua, three-dimensional worlds that have positive weight, grow unboundedly and decay locally, and new vacua appear that try to eat them up, but they don’t eat them up fast enough. The parents grow faster than the children can eat them, and so you make everything: you fill the universe with these different vacua, these different kinds of regions in which empty space has all sorts of different weights. Then you can ask, “Well, in such a theory, where are the observers going to be?” To give even the most primitive answer to this question, it’s actually very useful to remember the story about the holographic principle. (…)
If you have a lot of vacuum energy, then even though the universe globally grows and grows and grows, if you sit somewhere and look around, there is a horizon around you. The size of the region that’s causally connected, where particles can interact and form structure, is inversely related to the amount of vacuum energy you have. This is why I said earlier that just by looking out the window and seeing that the universe is large, we know that there has to be very little vacuum energy. If there’s a lot of vacuum energy, the universe is a tiny little box from the viewpoint of anybody sitting in it. The holographic principle tells you that the amount of information in the tiny little box is proportional to the area of its surface. If the vacuum energy has the sort of typical value it has in most of the vacua, that surface allows for only a few bits of information. So whatever you think observers look like, they are probably a little more complicated than a few bits.
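This scaling can be made quantitative with the standard de Sitter horizon entropy. In Planck units the horizon radius is r = sqrt(3/Λ), and the Bekenstein-Hawking formula S = A/4 gives S = 3π/Λ. The sketch below uses Λ = 1 as a stand-in for a “typical” Planck-scale vacuum (an assumption for illustration):

```python
from math import pi, log10

def de_sitter_entropy(lambda_planck):
    """Horizon entropy of de Sitter space, S = 3*pi/Lambda.

    lambda_planck: cosmological constant in Planck units.
    Returns the Bekenstein-Hawking entropy in nats
    (bits up to a factor of ln 2, ignored here).
    """
    return 3 * pi / lambda_planck

# A "typical" landscape vacuum with Planck-scale vacuum energy:
# only a handful of bits fit inside the horizon.
print(f"typical vacuum : ~{de_sitter_entropy(1.0):.0f} bits of room")

# Our vacuum, with Lambda ~ 10^-122 in Planck units:
# room for ~10^123 bits, enough for galaxies and observers.
print(f"our vacuum     : ~10^{log10(de_sitter_entropy(1e-122)):.0f} bits")
```

The contrast between a handful of bits and ~10¹²³ bits is the quantitative content of “observers probably are a little more complicated than a few bits.”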
And so you can immediately understand that you don’t expect observers to exist in the typical regions. They will exist in places where the vacuum energy happens to be unusually small due to accidental cancellations between different ingredients in these extra dimensions, and where, therefore, there is room for a lot of complexity. And so you have a way of understanding both the existence of regions in the universe somewhere with very small vacuum energy, and also of understanding why we live in those particular rather atypical regions.
What’s interesting is that the idea that the universe may be a very large multiverse with different kinds of vacua in it was actually thrown around independently of string theory for some time, in the context of trying to solve this famous cosmological constant problem. But it’s not actually that easy to get it all right. If you just imagine that the vacuum energy got smaller and smaller as the universe went on, with the vacua nicely lined up so that each one you decay into has slightly smaller vacuum energy than the previous one, you cannot solve this problem. You can make the vacuum energy small, but you also empty out the universe. You won’t have any matter in it. (…)
I think the things that haven’t hit Oprah yet, and which are up and coming, are questions like: if the universe is really accelerating its expansion, then we know that it’s going to get infinitely large, and that things will happen over and over and over. That is simply because if you have infinitely many tries at something, then every possible outcome is going to happen infinitely many times, no matter how unlikely it is.
This is actually something that predates this string theory multiverse that I was talking about. It’s a very robust question in the sense that even if you believe string theory is a bunch of crap, you still have to worry about this problem because it’s based on just observation. You see that the universe is currently expanding in an accelerated way, and unless there’s some kind of conspiracy that’s going to make this go away very quickly, it means that you have to address this problem of infinities. But the problem becomes even more important in the context of the string landscape because it’s very difficult to make any predictions in the landscape if you don’t tame those infinities.
Why? Because you want to say that seeing this thing in your experiment is more likely than that thing, so that if you see the unlikely thing, you can rule out your theory the way we always like to do physics. But if both things happen infinitely many times, then on what basis are you going to say that one is more likely than the other? You need to get rid of these infinities. This is called, at least among cosmologists, the measure problem. It’s probably a really bad name for it, but it stuck.
That’s where a lot of the action is right now. That’s where a lot of the technical work is happening, that’s where people are, I think, making progress. I think we’re ready for Oprah, almost, and I think that’s a question where we’re going to come full circle, we’re going to learn something about the really deep questions, about what is the universe like on the largest scales, how does quantum gravity work in cosmology? I don’t think we can fully solve this measure problem without getting to those questions, but at the same time, the measure problem allows us a very specific way in. It’s a very concrete problem. If you have a proposal, you can test it, you can rule it out, or you can keep testing it if it still works, and by looking at what works, by looking at what doesn’t conflict with observation, by looking at what makes predictions that seem to be in agreement with what we see, we’re actually learning something about the structure of quantum gravity.
So I think that it’s currently a very fruitful direction. It’s a hard problem, because you don’t have a lot to go by. It’s not like it’s an incremental, tiny little step. Conceptually it’s a very new and difficult problem. But at the same time it’s not that hard to state, and it’s remarkably difficult to come up with simple guesses for how to solve it that you can’t immediately rule out. And so we’re at least in the lucky situation that there’s a pretty fast filter. You don’t have a lot of proposals out there that have even a chance of working.
The thing that’s really amazing, at least to me, is that in the beginning we all came at this problem from different directions, and we all had our different prejudices. Andrei Linde had some ideas, Alan Guth had some ideas, Alex Vilenkin had some ideas. I thought I was coming in with this radically new idea that we shouldn’t think of the universe as existing on this global scale that no one observer can actually see, that it’s actually important to think about what can happen in the causally connected region to one observer. What can you do in any experiment that doesn’t conflict with the laws of physics and doesn’t require superluminal propagation? We have to ask questions in a way that conforms to the laws of physics if we want to get sensible answers. (…)
A lot of things have now happened that didn’t have to happen, a lot of things have happened that give us some confidence that we’re on to something, and at the same time we’re learning something about how to think about the universe on the larger scales.”
“A time-lapse taken from the front of the International Space Station as it orbits our planet at night. This movie begins over the Pacific Ocean and continues over North and South America before entering daylight near Antarctica. Visible cities, countries and landmarks include (in order): Vancouver Island, Victoria, Vancouver, Seattle, Portland, San Francisco, Los Angeles, Phoenix, multiple cities in Texas, New Mexico and Mexico, Mexico City, the Gulf of Mexico, the Yucatán Peninsula, El Salvador, lightning in the Pacific Ocean, Guatemala, Panama, Colombia, Ecuador, Peru, Chile, Lake Titicaca, and the Amazon. Also visible are the Earth’s ionosphere (thin yellow line), a satellite (55 sec) and the stars of our galaxy.”
The Known Universe as mapped through astronomical observations
The Known Universe takes viewers from the Himalayas through our atmosphere and the inky black of space to the afterglow of the Big Bang. Every satellite, moon, planet, star and galaxy is represented to scale and in its correct, measured location according to the best scientific research to date.
“What we call reality,” [physicist] John Archibald Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer—a cosmic information-processing machine. — James Gleick, The Information: A History, a Theory, a Flood, Pantheon, 2011
“Reality is an intelligent conversation with the universe.” - cited in What Is Reality?, BBC Horizon documentary, 2011
The illusion of reality | BBC
“Professor Jim Al-Khalili explores how studying the atom forced us to rethink the nature of reality itself. He discovers that there might be parallel universes in which different versions of us exist, finds out that empty space isn’t empty at all, and investigates the differences between our perception of the universe and its underlying reality.” — BBC Four, 2010
What Is Reality? | BBC Horizon
“There is a strange and mysterious world that surrounds us, a world largely hidden from our senses. The quest to explain the true nature of reality is one of the great scientific detective stories.
Clues have been pieced together from deep within the atom, from the event horizon of black holes, and from the far reaches of the cosmos. It may be that we are part of a cosmic hologram, projected from the edge of the universe. Or that we exist in an infinity of parallel worlds. Your reality may never look quite the same again.” — BBC Horizon, 2011 (Full playlist)
“The night sky is a time machine. Look out and you look back in time. But this “time travel by eyesight” is not just the province of astronomy. It’s as close as the machine on which you are reading these words. Your present exists at the mercy of many overlapping pasts. So where, then, is “now”?
As almost everyone knows, when you stare into the depths of space you are also looking back in time. Catch a glimpse of a relatively nearby star and you see it as it existed when, perhaps, Lincoln was president (if it’s 150 light-years away). Stars near the edge of our own galaxy are only seen as they appeared when the last ice age was in full bloom (30,000 light-years away). And those giant pinwheel assemblies of stars called galaxies are glimpsed as they existed millions, hundreds of millions or even billions of years in the past. (…)
Stranger still, the sky we see at any moment defines not a single past but multiple overlapping pasts of different depths. The star’s image from 100 years ago and the galaxy image from 100 million years ago reach us at the same time. All of those “thens” define the same “now” for us.
The multiple, foliated pasts comprising our present would be weird enough if it was just a matter of astronomy. But the simple truth is that every aspect of our personal “now” is a layered impression of a world already lost to the past.
To understand how this works, consider the simple fact (…) all we know about the world comes to us via signals: light waves, sound waves and electrical impulses running along our nerves. These signals move at a finite speed. It always takes some finite amount of time for the signal to travel from the world to your body’s sensors (and on to your brain).
A distant galaxy, a distant mountain peak, the not very distant light fixture on the ceiling and even the intimacy of a loved one’s face all live in the past. Those overlapping pasts are times that you — in your “now” — are no longer a part of.
Signal travel time constitutes a delay and all those overlapping delays constitute an essential separation. The inner world of your experience is, in a temporal sense, cut off from the outer world you inhabit.
Let’s take a few examples. Light travels faster than any other entity in the physical universe, propagating with the tremendous velocity of c = 300,000,000 m/s. From high school physics you know that the time it takes a light signal moving at c to cross some distance D is simply t = D/c.
When you look at a mountain peak 30 kilometers away you see it not as it exists now but as it existed 1/10,000 of a second ago. The light fixture three meters above your head is seen not as it exists now but as it was a hundred-millionth of a second ago. Gazing into your partner’s eyes, you see her (or him) not for who they are but for who they were 10⁻¹⁰ of a second in the past. Yes, these numbers are small. Their implication, however, is vast.
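These delays all come from the same formula, t = D/c. A small sketch (the distances are the article’s examples; the 3 cm face-to-face distance is chosen to reproduce its 10⁻¹⁰ s figure):

```python
c = 2.998e8  # speed of light, m/s

def lookback(distance_m):
    """Time delay t = D/c for a light signal crossing distance D (meters)."""
    return distance_m / c

# The article's examples, as distances in meters
examples = {
    "mountain peak (30 km)": 30_000,
    "ceiling light (3 m)":   3,
    "partner's face (3 cm)": 0.03,
}
for label, d in examples.items():
    print(f"{label:>22}: {lookback(d):.1e} s in the past")
```

Running it reproduces the three delays quoted above: about 10⁻⁴ s for the mountain, 10⁻⁸ s for the light fixture, and 10⁻¹⁰ s for the face.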
We live, each of us, trapped in our own now.
The simple conclusions described above derive, in their way, from relativity theory and they seem to spell the death knell for a philosophical stance called Presentism. According to Presentism only the present moment has ontological validity. In other words: only the present truly exists; only the present is real.
Presentism holds an intuitive sway for many people. It just feels right. For myself, when I try to explore the texture of my own experience, I can’t help but feel a sense of the present’s dominance. Buddhism, with its emphasis on contemplative introspection, has developed a sophisticated presentist stance concerning the nature of reality. “Anyone who has ever meditated for any length of time,” the abbot of a Zen monastery once told me, “finds that the past and future are illusions.”
Yes, but …
The reality that even light travels at a finite speed forces us to confront the strange fact that, at best, the present exists at the fractured center of many overlapping pasts.
So where, then, are we in time? Where is our “now” and how does it live in the midst of a universe comprised of so many “thens”?”
The path light travels determines the image you see.
“You never experience the world as it is. You only experience it in the way light brings it to you.
And light can be taught to lie.
Last week researchers at Cornell University announced they had created a time cloaking device. Using their machine they could hide an event from detection, even if it occurred in plain view of very capable detectors. (…)
Both experiments rely on the complex realization of a simple truth about our experience of the world. We have no “direct” knowledge of the world-in-of-itself but, instead, are forced to rely on signals carried to us from external objects. If the properties of the signals are somehow changed while they are traveling to us then our experience of the world is changed as well. (…)
Nature and light can, however, be manipulated in ways that can make illusions impossible to detect. This is the new physics of cloaking. (…)”
Making sense of a visible quantum object. How can an object that is visible to the naked eye be in two places at the same time?
“Can an object that is visible to the naked eye be in two places at the same time? Common sense and experience told us that the answer is “no” — until recently. In this presentation, physicist Aaron O’Connell tells us a little about the bizarre rules of quantum mechanics, which were thought to be completely different for human-scale objects — but are they really? In a breakthrough experiment, Dr O’Connell blurs that distinction by creating an object that is visible to the unaided eye, but provably in two places at the same time. In this talk, he suggests an intriguing way of thinking about the result:
(…) While there, in an experiment remarkable both for its conceptual simplicity and technical difficulty, Dr O’Connell was the first person to measure quantum effects in an object large enough to see with the naked eye [dissertation: PDF]. Named “Breakthrough of the year” by Science Magazine, the experiment shattered the previous record for the largest quantum object, showing decisively that there is no hard line between the quantum and everyday worlds.”
“Until now, all machines have moved according to the not-surprising laws of classical mechanics, which govern the motion of everyday objects. In contrast, a tiny machine unveiled this year jiggles in ways explicable only by the weird rules of quantum mechanics, which ordinarily govern molecules, atoms, and subatomic particles. The proto-quantum machine opens the way to myriad experimental devices and perhaps tests of our sense of reality.” — Science Magazine
Vlatko Vedral: Decoding Reality: the universe as quantum information
“Everything in our reality is made up of information. From the evolution of life to the dynamics of social ordering to the functioning of quantum computers, they can all be understood in terms of bits of information. We saw that in order to capture all the latest elements of reality we needed to extend Claude Shannon’s original notion of information, and upgrade his notion from bits to quantum bits, or qubits. Qubits incorporate the fact that in quantum theory outcomes to our measurements are intrinsically random.
But where do these qubits come from? Quantum theory allows us to answer this question; but the answer is not quite what we expected. It suggests that these qubits come from nowhere! There is no prior information required in order for information to exist. Information can be created from emptiness. In presenting a solution to the sticky question of ‘law without law’ we find that information breaks the infinite chain of regression in which we always seem to need a more fundamental law to explain the current one. This feature of information, ultimately coming from our understanding of quantum theory, is what distinguishes information from any other concept that could potentially unify our view of reality, such as matter or energy. Information is, in fact, unique in this respect. (…) p. 215
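Vedral's point that measurement outcomes on a qubit are intrinsically random can be sketched with the Born rule. A minimal simulation (not from the book; the amplitudes and sample count are illustrative assumptions):

```python
import random

def measure(amp0, amp1):
    """Measure a qubit amp0|0> + amp1|1>.

    The individual outcome is intrinsically random; the Born rule only
    fixes the probabilities |amp0|^2 and |amp1|^2."""
    return 0 if random.random() < abs(amp0) ** 2 else 1

# An equal superposition, (|0> + |1>)/sqrt(2): each outcome has probability 1/2.
amp = 2 ** -0.5
random.seed(0)
outcomes = [measure(amp, amp) for _ in range(10_000)]
fraction_ones = sum(outcomes) / len(outcomes)
print(fraction_ones)  # close to 0.5, yet each single outcome is unpredictable
```

The statistics are reproducible even though no individual bit is, which is exactly the sense in which a qubit carries "intrinsically random" information.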
This book will argue that information (and not matter or energy or love) is the building block on which everything is constructed. Information is far more fundamental than matter or energy because it can be successfully applied to both macroscopic interactions, such as economic and social phenomena, and, as I will argue, information can also be used to explain the origin and behaviour of microscopic interactions such as energy and matter.
The question of everything from nothing, creation ex nihilo
As pointed out by David Deutsch and John Archibald Wheeler, however, whatever candidate is proposed for the fundamental building block of the Universe, it still needs to explain its ‘own’ ultimate origin too. In other words, the question of everything from nothing, creation ex nihilo, is key. So if, as I claim, information is this common thread, the question of creation ex nihilo reduces to explaining how some information arises out of no information. Not only will I show how this is possible, I will also argue that information, in contrast to matter and energy, is the only concept that we currently have that can explain its own origin. (…) p.10
This desire to compress information and the natural increase of information in the Universe may initially seem like independent processes, but as we will explore in much more detail later there may be a connection. As we compress and find all-encompassing principles describing our reality, it is these principles that then indicate how much more information there is in our Universe to find. In the same way that Feuerbach states that ‘Man first creates God, and then God creates Man’, we can say that we compress information into laws from which we construct our reality, and this reality then tells us how to further compress information. (…)
I believe this view of reality being defined through information compression is closer to the spirit of science as well as its practice. (…) It is also closer to the scientific meaning of information in that information reflects the degree of uncertainty in our knowledge of a system. (…)
Information is the underlying thread that connects all phenomena we see around us as well as explaining their origin. Our reality is ultimately made up of information. (…) p. 12-13
Information is the language Nature uses to convey its messages and this information comes in discrete units. We use these units to construct our reality. (…) p. 23
Do we define information as a quantity which we can use to do something useful or could we still call it information even if it wasn’t of any use to us? Is information objective or is it subjective? For example, would the same message or piece of news carry the same information for two different people? Is information inherently human or can animals also process information? Going even beyond this, is it a good thing to have a lot of information and to be able to process it quickly or can too much information drown you? These questions all add some colour and vigour to the challenge of achieving an agreed and acceptable definition of information.
The second trouble with information is that, once defined in a rigorous manner, it is measured in a way that is not easy to convey without mathematics. You may be very surprised to hear that even scientists balk at the thought of yet another equation. (…) p. 26-27
By stripping away all irrelevant details we can distil the essence of what information means. (…) Unsurprisingly, we find the basis of our modern concept of information in Ancient Greece. The Ancient Greeks laid the groundwork for its definition when they suggested that the information content of an event somehow depends only on how probable this event really is. Philosophers like Aristotle reasoned that the more surprised we are by an event the more information the event carries. By this logic, having a clear sunny autumn day in England would be a very surprising event, whilst experiencing drizzle randomly throughout this period would not shock anyone. This is because it is very likely, that is, the probability is high, that it will rain in England at any given instant of time. From this we can conclude that less likely events, the ones for which the probability of happening is very small, are those that surprise us more and therefore are the ones that carry more information.
Following this logic, we conclude that information has to be inversely proportional to probability, i.e. events with smaller probability carry more information. In this way, information is reduced to only probabilities and in turn probabilities can be given objective meaning independent of human interpretation or anything else (meaning that whilst you may not like the fact that it rains a lot in England, there is simply nothing you can do to change its probability of occurrence). (…) p. 29
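This inverse relation between probability and information is Shannon's surprisal, $I(p) = -\log_2 p$. A quick sketch (the rain and sunshine probabilities are made-up illustrations, not measured values):

```python
import math

def surprisal(p: float) -> float:
    """Shannon information content, in bits, of an event with probability p."""
    return -math.log2(p)

# A likely event carries little information...
rain = surprisal(0.8)    # drizzle in England, illustrative p = 0.8
# ...while an unlikely one carries much more.
sunny = surprisal(0.05)  # clear sunny autumn day, illustrative p = 0.05

print(f"rain:  {rain:.2f} bits")
print(f"sunny: {sunny:.2f} bits")
```

Note the logarithm: it makes information from independent events add up, which is why it, rather than a plain reciprocal of probability, became the standard measure.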
As we saw in the initial chapter on creation ex nihilo, the fundamental question is why there is any information in the first place. For the replication of life we saw that we needed four main components, the protein synthesizer machine [a universal constructing machine], M, the DNA Xerox copier X, the enzymes which act as controllers, C, and the DNA information set [the set of instructions required to construct these three], I. (…) With these it is possible to then create an entity that self-replicates indefinitely.
A macromolecule responsible for storing the instructions, I, in living systems is called DNA. DNA has four bases: A, C, T, and G. When DNA replicates inside our cells, each base has a specific pairing partner. There is huge redundancy in how bases are combined to form amino acid chains. This is a form of error correction. The digital encoding mechanism of DNA ensures that the message gets propagated with high fidelity. Random mutations aided by natural selection necessarily lead to an increase in complexity of life.
The process of creating biological information from no prior biological information is another example of the question of creation ex nihilo. Natural selection does not tell us where biological information comes from – it just gives us a framework of how it propagates. (…) p. 54-55
My argument is that life paradoxically ends not when it underdoses on fuel, but, more fundamentally, when it overdoses on ‘information’ (i.e. when it reaches a saturation point and can no longer process any further information). We have all experienced instances where we feel we cannot absorb any more information. (…)
The Second Law of thermodynamics tells us that in physical terms, a system reaches its death when it reaches its maximum disorder (i.e. it contains as much information as it can handle). This is sometimes (cheerfully) referred to as thermal death, which could really more appropriately be called information overload. This state of maximum disorder is when life effectively becomes a part of the rest of the lifeless Universe. Life no longer has any capacity to evolve and remains entirely at the mercy of the environment. (…) p. 58-59
Physical entropy, which describes how disordered a system is, tends to increase with time. This is known as the Second Law of thermodynamics. The increasing complexity of life is driven by the overall increase in disorder in the Universe. (…) p. 76
This concept is very important in understanding a diverse number of phenomena in Nature and will be the key when we explain the origin of structure in any society.
Mutual information is the formal word used to describe the situation when two (or more) events share information about one another. Having mutual information between events means that they are no longer independent; one event has something to tell you about the other. For example, when someone asks if you’d like a drink in a bar, how many times have you replied ‘I’ll have one if you have one’? This statement means that you are immediately correlating your actions with the actions of the person offering you a drink. If they have a drink, so will you; if they don’t, neither will you. Your choice to drink-or-not-to-drink is completely tied to theirs and hence, in information theory parlance, you both have maximum mutual information.
A little more formally, the whole presence of mutual information can be phrased as an inference indicator. Two things have mutual information if by looking at just one of them you can infer something about one of the properties of the other one. So, in the above example, if I see that you have a drink in front of you that means logically that the person offering you a drink also has a glass in front of them (given that you only drink when the person next to you drinks). (…)
Whenever we discuss mutual information we are really asking how much information an object/person/idea has about another object/person/idea. (…)
When it comes to DNA, its molecules share information about the protein they encode. Different strands of DNA share information about each other as well (we know that A only binds to T and C only binds to G). Furthermore the DNA molecules of different people also share information about one another (a father and a son, for example, share half of their genetic material) and the DNA is itself sharing information with the environment – in that the environment determines through natural selection how the DNA evolves. (…)
One of the phenomena we will try to understand here, using mutual information, is what we call ‘globalization’, or the increasing interconnectedness of disparate societies. (…)
Before we delve further into social phenomena, I need to explain an important concept in physics called a phase transition. Stated somewhat loosely, phase transitions occur in a system when the information shared between the individual constituents becomes large (so for a gas in a box, for an iron rod in a magnetic field, and for a copper wire connected into an electric circuit, all their constituents share some degree of mutual information).
A high degree of mutual information often leads to a fundamentally different behaviour, although the individual constituents are still the same. To elaborate this point, the individual constituents are not affected on an individual basis, but as a group they exhibit entirely different behaviour. The key is how the individual constituents relate to one another and create a group dynamic. This is captured by the phrase ‘more is different’, by the physicist Philip Anderson, who contributed a great deal to the subject, culminating in his Nobel Prize in 1977.
A common example of a group dynamic is the effect we observe when boiling or freezing water (i.e. conversion of a liquid to a gas or conversion of a liquid to a solid). These extreme and visible changes of structure and behaviour are known as phase transitions. When water freezes, the phase transition occurs as the water molecules become more tightly correlated and these correlations manifest themselves in stronger molecular bonds and a more solid structure. The formation of societies and significant changes in every society – such as a revolution or a civil war or the attainment of democracy – can, in fact, be better understood using the language of phase transitions.
I now present one particular example that will explain phase transitions in more detail. This example will then act as our model to explain various social phenomena that we will tackle later in the chapter. Let us imagine a simple solid, made up of a myriad of atoms (billions and billions of them). Atoms usually interact with each other, although these interactions hardly ever stretch beyond their nearest neighbours. So, only atoms next to each other will feel each other's presence, while those further apart will typically never directly exchange any information.
It would now be expected that as a result of the ‘nearest neighbour’ interaction, only the atoms next to each other share information while this is not possible where there is no interaction. Though this may sound logical, it is in fact entirely incorrect. Think of a whip: you shake one end and this directly influences the speed and range at which the other end moves. You are transferring movement using the interconnectedness of atoms in the whip. Information can be shared between distant atoms because one atom interacts with its neighbours, but the neighbours also interact with their neighbours, and so on. This concept can be explained more elegantly through the concept of ‘six degrees of separation’. You often see it claimed that each person on this planet is at most six people away from any other person. (…) p. 94-97
Why is this networking between people important? You might argue that decisions made by society are to a high degree controlled by individuals – who ultimately think for themselves. It is clear, however, that this thinking is based on the common information shared between individuals. It is this interaction between individuals that is responsible for the different structures within society as well as society itself. (…) In this case, the information shared between individuals becomes much more important. So how do all people agree to make a decision, if they only interact locally, i.e. with a very limited number of neighbours?
In order to understand how local correlations can lead to the establishment of structures within society, let us return to the example of a solid. Solids are regular arrays of atoms. This time, however, rather than talking about how water becomes ice, let’s consider how a solid becomes a magnet. Every atom in a solid can be thought of as a little magnet on its own. Initially these magnets are completely independent of one another and there is no common north/south alignment – meaning that they are all pointing in random directions. The whole solid – the whole collection of atoms – would then be a random collection of magnets and would not be magnetized as a whole (this is known as a paramagnet). All the random little atomic magnets would simply cancel each other out in effect and there would be no net magnetic field.
However, if the atoms interact, then they can affect each other’s state, i.e. they can cause their neighbours to line up with them. Now through the same principle as six degrees of separation, each atom affects the other atoms it is connected to, and in turn these affect their own neighbours, eventually correlating all the atoms in the solid. If the interaction is stronger than the noise due to the external temperature, then all magnets will eventually align in the same direction and the solid as a whole generates a net magnetic field and hence becomes magnetic! All atoms now behave coherently in tune, just like one big magnet. The point at which all atoms ‘spontaneously’ align is known as the point of phase transition, i.e. the point at which a solid becomes a magnet. (…)
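The alignment transition can be sketched in the simplest mean-field approximation (a standard textbook model, not Vedral's own calculation): each atom feels only the average magnetization m of its z neighbours, giving the self-consistency condition m = tanh(zJm/T), which has a nonzero solution only below the critical temperature Tc = zJ:

```python
import math

def magnetization(T, J=1.0, z=4, m0=0.5, iters=1000):
    """Iterate the mean-field self-consistency m = tanh(z*J*m / T)."""
    m = m0
    for _ in range(iters):
        m = math.tanh(z * J * m / T)
    return m

# With z = 4 and J = 1, the mean-field critical temperature is Tc = 4.
print(magnetization(T=2.0))  # below Tc: atoms align, net magnetization appears
print(magnetization(T=6.0))  # above Tc: thermal noise wins, magnetization -> 0
```

Below Tc the iteration settles on a nonzero magnetization (the solid is a magnet); above Tc it collapses to zero (a paramagnet). The abrupt appearance of the nonzero solution as T crosses Tc is the phase transition.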
You may object that atoms are simple systems compared to humans. After all humans can think, feel, get angry, while atoms are not alive and their range of behaviour is far simpler. But this is not the point! The point is that we are only focusing on one relevant property of humans (or atoms) here. Atoms are not all that simple either, but we are choosing to make them so by looking only at their magnetic properties. Humans are much more complicated still, but now we only want to know about their political preference, and this can be quite simple in practice. (…)
This unevenness in the number of contacts leads to a very important model where there is a great deal of interaction with people close by and then, every once in a while, there is a long-distance interaction with someone far away. This is called a ‘small world network’ and is an excellent model for how and why disease propagates rapidly in our world. When we get ill, disease usually spreads quickly to our closest neighbours. Then it is enough that only one of the neighbours takes a long-distance flight and this can then make the virus spread in distant places. And this is why we are very worried about swine flu and all sorts of other potential viruses that can kill humans.
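A deterministic toy simulation makes the small-world effect concrete (the node count and number of shortcuts are arbitrary choices): infection spreads to all neighbours at each step, and a handful of long-range links collapses the spreading time compared with a ring of purely local contacts:

```python
import random
from collections import deque

def spread_time(n, edges, source=0):
    """Steps for an infection to reach every node, in a deterministic SI
    model where each step infects every neighbour of an infected node.
    (This equals the maximum BFS distance from the source.)"""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

n = 100
ring = [(i, (i + 1) % n) for i in range(n)]  # purely local contacts
random.seed(1)
shortcuts = [tuple(random.sample(range(n), 2)) for _ in range(10)]  # "flights"

t_ring = spread_time(n, ring)                      # exactly n/2 = 50 steps
t_small_world = spread_time(n, ring + shortcuts)   # far fewer steps

print(t_ring, t_small_world)
```

Adding edges can never lengthen a shortest path, so the small-world time is at most the ring time; in practice even a few random shortcuts cut it drastically, which is the mechanism behind both rapid epidemics and "six degrees of separation".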
Let us now consider why some people believe – rightly or wrongly – that the information revolution has transformed and will continue to transform our society more than any other revolution in the past – such as the industrial revolution discussed in earlier chapters. Some sociologists, such as Manuel Castells, believe that the Internet will bring about much more profound transformations in our society than ever previously seen in history. His logic is based on the above idea of phase transitions, though, being a sociologist, he may not be interpreting them in quite the same way as a physicist does mathematically.
To explain, we can think of early societies as very ‘local’ in nature. One tribe exists here, another over there, but with very little communication between them. Even towards the end of the nineteenth century, transfer of ideas and communication in general were still very slow. So for a long time humans have lived in societies where communication was very short range. And, in physics, this would mean that abrupt changes are impossible. Societies have other complexities, so I would say that ‘fundamental change is unlikely’ rather than ‘impossible’. Very recently, through the increasing availability of technology we can travel far and wide, and through the Internet we can learn from and communicate with virtually anyone in the world.
Early societies were like the Ising model, while later ones are more like the small world networks. Increasingly, however, we are approaching the stage where everyone can and does interact with everyone else. And this is exactly when phase transitions become increasingly more likely. Money (and even labour) can travel from one end of the globe to another in a matter of seconds or even faster. This, of course, has an effect on all elements of our society.
Analysing social structures in terms of information theory can frequently reveal very counterintuitive features. This is why it is important to be familiar with the language of information theory: without a formalized framework, some of the most startling and beautiful effects are much harder to understand in terms of root causes. (…) p. 98
Universe as a quantum computer
Konrad Zuse, the German engineer who built some of the first programmable computers during World War II, was the first to view the Universe as a computer. (…) The problem, however, is that all these models assume that the Universe is a classical computer. By now we know that the Universe should instead be understood as a quantum computer.
Our reality evolves because every once in a while we find that we need to edit part of the program that describes reality. We may find that this piece of the program, based on a certain model, is refuted (the underlying model is found to be inaccurate), and hence the program needs to be updated. Refuting a model and changing a part of the program is, as we saw, crucial to changing reality itself because refutations carry much more information than simply confirming a model. (…) p. 192
We can construct our whole reality in this way by looking at it in terms of two distinct but inter-related arrows of knowledge. We have the spontaneous creation of mutual information in the Universe as events unfold, without any prior cause. This kicks off the interplay between the two arrows. On the one hand, through our observations and a series of conjectures and refutations, we compress the information in the Universe into a set of natural laws. These laws are the shortest programs to represent all our observations. On the other hand, we run these programs to generate our picture of reality. It is this picture that then tells us what is, and isn’t, possible to accomplish, in other words, what our limitations are.
The Universe starts empty but potentially with a huge amount of information. The key event that gives the Universe some direction is the first act of ‘symmetry breaking’, the first cut of the sculptor. This act, which we consider as completely random, i.e. without any prior cause, just decides on why one tiny aspect in the Universe is one way rather than another. This first event sets in motion a chain reaction in which, once one rule has been decided, the rest of the Universe needs to proceed in a consistent manner. (…)
This is where the first arrow of knowledge begins. We compress the spontaneous, yet consistent information in the Universe, into a set of natural laws that continuously evolve as we test and discard the erroneous ones. Just as man evolved through a compression of biological information (a series of optimizations for the changing environment), our understanding of the Universe (our reality) has also evolved as we better synthesize and compress the information that we are presented with into more and more accurate laws of Nature. This is how the laws of Nature emerge, and these are the physical, biological, and social principles that our knowledge is based on.
The second arrow of knowledge is the flip-side to the first arrow. Once we have the laws of Nature, we explore their meaning in order to define our reality, in terms of what is and isn’t possible within it. It is a necessary truth that whatever our reality, it is based exclusively on our understanding of these laws. For example, if we have no knowledge of natural selection, all of the species look independently created and without any obvious connection. Of course this is all dynamic in that when we find an event that doesn’t fit our description of reality, then we go back and change the laws, so that the subsequently generated reality also explains this event.
The basis for these two arrows is the darkness of reality, a void from which they were created and within which they operate. Following the first arrow, we ultimately arrive at nothing (ultimately there is no reality, no law without law). The second arrow then lifts us from this nothingness and generates a picture of reality as an interconnected whole.
So our two arrows seem to point in opposite directions to one another. The first compresses the information available into succinct knowledge and the second decompresses the resulting laws into a colourful picture of reality. In this sense our whole reality is encoded into the set of natural laws. We already said that there was an overall direction for information flow in the Universe, i.e. that entropy (disorder) in the Universe can only increase. This gives us a well defined directionality to the Universe, commonly known as the ‘arrow of time’. (…)
The first arrow of knowledge clearly acts like a Maxwell’s demon. It constantly combats the arrow of time and tirelessly compresses disorder into something more meaningful. It connects seemingly random and causeless events into a string of mutually inter-related facts. The second arrow of knowledge, however, acts in the opposite direction of increasing the disorder. By changing our view of reality it instructs us that there are more actions we can take within the new reality than we could with the previous, more limited view.
Within us, within all objects in the Universe, lie these two opposing tendencies. So, is this a constant struggle between new information and hence disorder being created in the Universe, and our efforts to order this into a small set of rules? If so, is this a losing battle? (…)
Scientific knowledge proceeds via a dialogue with Nature. We ask ‘yes-no’ questions through our observations of various phenomena.
Information in this way is created out of no information. By taking a stab in the dark we set a marker which we can then use to refine our understanding by asking such ‘yes-no’ questions. (…)
The whole of our reality emerges by first using the conjectures and refutations to compress observations and then from this compression we deduce what is and isn’t possible. (…) p. 211-214
Viewing reality as information leads us to recognize two competing trends in its evolution. These trends, or let’s call them arrows, work hand in hand, but point in opposite directions. The first arrow orders the world against the Second Law of thermodynamics and compresses all the spontaneously generated information in the Universe into a set of well-defined principles. The second arrow then generates our view of reality from these principles.
It is clear that the more efficient we are in compressing all the spontaneously generated information, the faster we can expand our reality of what is and isn’t possible. But without the second arrow, without an elementary view of our reality, we cannot even begin to describe the Universe. We cannot access parts of the Universe that have no corresponding basis in our reality. After all, whatever is outside our reality is unknown to us. (…)
By exploring our reality we better understand how to look for and compress the information that the Universe produces. This in turn then affects our reality. Everything that we have understood, every piece of knowledge, has been acquired by feeding these two arrows into one another. Whether it is biological propagation of life, astrophysics, economics, or quantum mechanics, these are all a consequence of our constant re-evaluation of reality. So it’s clear that not only does the second arrow depend on the first, it is natural that the first arrow also depends on the second. (…)
We compress information to generate our laws of Nature, and then use these laws of Nature to generate more information, which then gets compressed back into upgraded laws of Nature.
The dynamics of the two arrows is driven by our desire to understand the Universe. As we drill deeper and deeper into our reality we expect to find a better understanding of the Universe. We believe that the Universe to some degree behaves independently of us and the Second Law tells us that the amount of information in the Universe is increasing. But what if with the second arrow, which generates our view of reality, we can affect parts of the Universe and create new information? In other words, through our existence could we affect the Universe within which we exist? This would make the information generated by us a part of the new information the Second Law talks about.
A scenario like this presents no conceptual problem within our picture. This new information can also be captured by the first arrow, as it fights, through conjectures and refutations, to incorporate any new information into the basic laws of Nature. However, could it be that there is no other information in the Universe than that generated by us as we create our own reality?
This leads us to a startling possibility. If indeed the randomness in the Universe, as demonstrated by quantum mechanics, is a consequence of our generation of reality then it is as if we create our own destiny. It is as if we exist within a simulation, where there is a program that is generating us and everything that we see around us. Think back to the movie The Matrix, where Keanu Reeves’s character, Neo, lives in a simulation until he is offered a way out, a way back into reality. If the randomness in the Universe is due to our own creation of reality, then there is no way out for us. This is because, in the end, we are creators of our own simulation. In such a scenario, Neo would wake up in his reality only to find himself sitting at the desk programming his own simulation. This closed loop was echoed by John Wheeler who said: ‘physics gives rise to observer-participancy; observer-participancy gives rise to information; information gives rise to physics.’
But whether reality is self-simulating (and hence there is no Universe required outside of it) is, by definition, something that we will never know. What we can say, following the logic presented in this book, is that outside of our reality there is no additional description of the Universe that we can understand, there is just emptiness. This means that there is no scope for the ultimate law or supernatural being – given that both of these would exist outside of our reality and in the darkness. Within our reality everything exists through an interconnected web of relationships and the building blocks of this web are bits of information. We process, synthesize, and observe this information in order to construct the reality around us. As information spontaneously emerges from the emptiness we take this into account to update our view of reality. The laws of Nature are information about information and outside of it there is just darkness. This is the gateway to understanding reality.
And I finish with a quote from the Tao Te Ching which, some 2,500 years earlier, seems to have beaten me to the punch-line:
The Tao that can be told is not the eternal Tao. The name that can be named is not the eternal name. The nameless is the beginning of heaven and earth. The named is the mother of the ten thousand things. Ever desireless, one can see the mystery. Ever desiring, one sees the manifestations. These two spring from the same source but differ in name; this appears as darkness. Darkness within darkness. The gate to all mystery.
Physicist Vlatko Vedral explains to Aleks Krotoski why he believes the fundamental stuff of the universe is information and how he hopes that one day everything will be explained in this way.
“In Decoding Reality, Vedral argues that we should regard the entire universe as a gigantic quantum computer. Wacky as that may sound, it is backed up by hard science. The laws of physics show that it is not only possible for electrons to store and flip bits: it is mandatory. For more than a decade, quantum-information scientists have been working to determine just how the universe processes information at the most microscopic scale.” — The universe is a quantum computer, New Scientist, 22 March 2010
Interactive 3D model of Solar System Planets and Night Sky
(Click image to see interactive 3D model)
“Solar System Scope space traveller will show you real-time celestial positions, with planets and constellations moving across the night sky. You can actively change parameters for a better understanding of events in our Solar System and the Universe.”