Vlatko Vedral: Decoding Reality: the universe as quantum information
“Everything in our reality is made up of information. From the evolution of life to the dynamics of social ordering to the functioning of quantum computers, all of these can be understood in terms of bits of information. We saw that in order to capture all the latest elements of reality we needed to extend Claude Shannon’s original notion of information, and upgrade his notion from bits to quantum bits, or qubits. Qubits incorporate the fact that in quantum theory the outcomes of our measurements are intrinsically random.
But where do these qubits come from? Quantum theory allows us to answer this question; but the answer is not quite what we expected. It suggests that these qubits come from nowhere! There is no prior information required in order for information to exist. Information can be created from emptiness. In presenting a solution to the sticky question of ‘law without law’ we find that information breaks the infinite chain of regression in which we always seem to need a more fundamental law to explain the current one. This feature of information, ultimately coming from our understanding of quantum theory, is what distinguishes information from any other concept that could potentially unify our view of reality, such as matter or energy. Information is, in fact, unique in this respect. (…) p. 215
This book will argue that information (and not matter or energy or love) is the building block on which everything is constructed. Information is far more fundamental than matter or energy because it can be successfully applied to both macroscopic interactions, such as economic and social phenomena, and, as I will argue, information can also be used to explain the origin and behaviour of microscopic interactions such as energy and matter.
The question of everything from nothing, creation ex nihilo
As pointed out by David Deutsch and John Archibald Wheeler, however, whatever candidate is proposed for the fundamental building block of the Universe, it still needs to explain its ‘own’ ultimate origin too. In other words, the question of everything from nothing, creation ex nihilo, is key. So if, as I claim, information is this common thread, the question of creation ex nihilo reduces to explaining how some information arises out of no information. Not only will I show how this is possible, I will also argue that information, in contrast to matter and energy, is the only concept that we currently have that can explain its own origin. (…) p.10
This desire to compress information and the natural increase of information in the Universe may initially seem like independent processes, but as we will explore in much more detail later there may be a connection. As we compress and find all-encompassing principles describing our reality, it is these principles that then indicate how much more information there is in our Universe to find. In the same way that Feuerbach states that ‘Man first creates God, and then God creates Man’, we can say that we compress information into laws from which we construct our reality, and this reality then tells us how to further compress information. (…)
I believe this view of reality being defined through information compression is closer to the spirit of science as well as its practice. (…) It is also closer to the scientific meaning of information in that information reflects the degree of uncertainty in our knowledge of a system. (…)
Information is the underlying thread that connects all phenomena we see around us as well as explaining their origin. Our reality is ultimately made up of information. (…) p. 12-13
Information is the language Nature uses to convey its messages and this information comes in discrete units. We use these units to construct our reality. (…) p. 23
Do we define information as a quantity which we can use to do something useful or could we still call it information even if it wasn’t of any use to us? Is information objective or is it subjective? For example, would the same message or piece of news carry the same information for two different people? Is information inherently human or can animals also process information? Going even beyond this, is it a good thing to have a lot of information and to be able to process it quickly or can too much information drown you? These questions all add some colour and vigour to the challenge of achieving an agreed and acceptable definition of information.
The second trouble with information is that, once defined in a rigorous manner, it is measured in a way that is not easy to convey without mathematics. You may be very surprised to hear that even scientists balk at the thought of yet another equation. (…) p. 26-27
By stripping away all irrelevant details we can distil the essence of what information means. (…) Unsurprisingly, we find the basis of our modern concept of information in Ancient Greece. The Ancient Greeks laid the groundwork for its definition when they suggested that the information content of an event somehow depends only on how probable this event really is. Philosophers like Aristotle reasoned that the more surprised we are by an event the more information the event carries. By this logic, having a clear sunny autumn day in England would be a very surprising event, whilst experiencing drizzle randomly throughout this period would not shock anyone. This is because it is very likely, that is, the probability is high, that it will rain in England at any given instant of time. From this we can conclude that less likely events, the ones for which the probability of happening is very small, are those that surprise us more and therefore are the ones that carry more information.
Following this logic, we conclude that information has to be inversely proportional to probability, i.e. events with smaller probability carry more information. In this way, information is reduced to only probabilities and in turn probabilities can be given objective meaning independent of human interpretation or anything else (meaning that whilst you may not like the fact that it rains a lot in England, there is simply nothing you can do to change its probability of occurrence). (…) p. 29
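The inverse relationship between probability and information is made precise by Shannon's "surprisal": the information content of an event is the negative logarithm of its probability. A minimal sketch in Python (the weather probabilities are illustrative guesses, not figures from the book):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon's self-information: rarer events carry more bits."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

# Drizzle in England on a given day: very likely, little information.
print(surprisal_bits(0.8))   # ~0.32 bits
# A clear sunny autumn day: unlikely, much more information.
print(surprisal_bits(0.1))   # ~3.32 bits
```

The logarithm, rather than a bare reciprocal, makes information additive: the surprisal of two independent events is the sum of their surprisals.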
As we saw in the initial chapter on creation ex nihilo, the fundamental question is why there is any information in the first place. For the replication of life we saw that we needed four main components, the protein synthesizer machine [a universal constructing machine], M, the DNA Xerox copier, X, the enzymes which act as controllers, C, and the DNA information set [the set of instructions required to construct these three], I. (…) With these it is possible to then create an entity that self-replicates indefinitely.
A macromolecule responsible for storing the instructions, I, in living systems is called DNA. DNA has four bases: A, C, T, and G. When DNA replicates inside our cells, each base has a specific pairing partner. There is huge redundancy in how bases are combined to form amino acid chains. This is a form of error correction. The digital encoding mechanism of DNA ensures that the message gets propagated with high fidelity. Random mutations aided by natural selection necessarily lead to an increase in complexity of life.
The process of creating biological information from no prior biological information is another example of the question of creation ex nihilo. Natural selection does not tell us where biological information comes from – it just gives us a framework of how it propagates. (…) p. 54-55
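The base pairing mentioned in the passage above follows the standard Watson–Crick rule (A with T, C with G), which means one strand fully determines the other. A minimal sketch of this redundancy:

```python
# Standard Watson-Crick base pairing: each base has exactly one partner.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """One strand fully determines the other: the two strands of the
    double helix share maximal mutual information."""
    return "".join(COMPLEMENT[b] for b in reversed(strand))

print(complementary_strand("ATCG"))  # CGAT
```

Because the mapping is a bijection, applying it twice recovers the original strand, which is exactly why a damaged strand can be repaired from its partner.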
My argument is that life paradoxically ends not when it underdoses on fuel, but, more fundamentally, when it overdoses on ‘information’ (i.e. when it reaches a saturation point and can no longer process any further information). We have all experienced instances where we feel we cannot absorb any more information. (…)
The Second Law of thermodynamics tells us that in physical terms, a system reaches its death when it reaches its maximum disorder (i.e. it contains as much information as it can handle). This is sometimes (cheerfully) referred to as thermal death, which could really more appropriately be called information overload. This state of maximum disorder is when life effectively becomes a part of the rest of the lifeless Universe. Life no longer has any capacity to evolve and remains entirely at the mercy of the environment. (…) p. 58-59
Physical entropy, which describes how disordered a system is, tends to increase with time. This is known as the Second Law of thermodynamics. The increasing complexity of life is driven by the overall increase in disorder in the Universe. (…) p. 76
This concept is very important in understanding a diverse number of phenomena in Nature and will be the key when we explain the origin of structure in any society.
Mutual information is the formal word used to describe the situation when two (or more) events share information about one another. Having mutual information between events means that they are no longer independent; one event has something to tell you about the other. For example, when someone asks if you’d like a drink in a bar, how many times have you replied ‘I’ll have one if you have one’? This statement means that you are immediately correlating your actions with the actions of the person offering you a drink. If they have a drink, so will you; if they don’t, neither will you. Your choice to drink-or-not-to-drink is completely tied to theirs and hence, in information theory parlance, you both have maximum mutual information.
A little more formally, the whole presence of mutual information can be phrased as an inference indicator. Two things have mutual information if by looking at just one of them you can infer something about one of the properties of the other one. So, in the above example, if I see that you have a drink in front of you that means logically that the person offering you a drink also has a glass in front of them (given that you only drink when the person next to you drinks). (…)
Whenever we discuss mutual information we are really asking how much information an object/person/idea has about another object/person/idea. (…)
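The drink-for-a-drink example can be made quantitative with the standard definition of mutual information over a joint probability distribution. A small sketch, with the 50/50 probabilities assumed purely for illustration:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# 'I'll have one if you have one': the two choices are perfectly locked.
locked = {("drink", "drink"): 0.5, ("none", "none"): 0.5}
print(mutual_information(locked))       # 1.0 bit: maximum for a yes/no choice

# Independent drinkers: knowing one choice tells you nothing about the other.
independent = {(a, b): 0.25 for a in ("drink", "none")
               for b in ("drink", "none")}
print(mutual_information(independent))  # 0.0 bits
```

One bit is the most two binary choices can share, matching the book's phrase "maximum mutual information".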
When it comes to DNA, its molecules share information about the proteins they encode. Different strands of DNA share information about each other as well (we know that A only binds to T and C only binds to G). Furthermore, the DNA molecules of different people also share information about one another (a father and a son, for example, share half of their genetic material) and the DNA itself shares information with the environment – in that the environment determines through natural selection how the DNA evolves. (…)
One of the phenomena we will try to understand here, using mutual information, is what we call ‘globalization’, or the increasing interconnectedness of disparate societies. (…)
Before we delve further into social phenomena, I need to explain an important concept in physics called a phase transition. Stated somewhat loosely, phase transitions occur in a system when the information shared between the individual constituents becomes large (so for a gas in a box, an iron rod in a magnetic field, or a copper wire connected into an electric circuit, all the constituents share some degree of mutual information).
A high degree of mutual information often leads to fundamentally different behaviour, even though the individual constituents remain the same. To elaborate: the individual constituents are not changed on an individual basis, but as a group they exhibit entirely different behaviour. The key is how the individual constituents relate to one another and create a group dynamic. This is captured by the phrase ‘more is different’, coined by the physicist Philip Anderson, who contributed a great deal to the subject, culminating in his Nobel Prize in 1977.
A common example of a group dynamic is the effect we observe when boiling or freezing water (i.e. conversion of a liquid to a gas or conversion of a liquid to a solid). These extreme and visible changes of structure and behaviour are known as phase transitions. When water freezes, the phase transition occurs as the water molecules become more tightly correlated, and these correlations manifest themselves in stronger molecular bonds and a more rigid structure. The formation of societies and significant changes in every society – such as a revolution or a civil war or the attainment of democracy – can, in fact, be better understood using the language of phase transitions.
I now present one particular example that will explain phase transitions in more detail. This example will then act as our model to explain various social phenomena that we will tackle later in the chapter. Let us imagine a simple solid, made up of a myriad of atoms (billions and billions of them). Atoms usually interact with each other, although these interactions hardly ever stretch beyond their nearest neighbours. So, only atoms next to each other will feel each other’s presence, while the ones that are further apart will typically never directly exchange any information.
It would now be expected that as a result of the ‘nearest neighbour’ interaction, only the atoms next to each other share information while this is not possible where there is no interaction. Though this may sound logical, it is in fact entirely incorrect. Think of a whip: you shake one end and this directly influences the speed and range at which the other end moves. You are transferring movement using the interconnectedness of atoms in the whip. Information can be shared between distant atoms because one atom interacts with its neighbours, but the neighbours also interact with their neighbours, and so on. This concept can be explained more elegantly through the concept of ‘six degrees of separation’. You often see it claimed that each person on this planet is at most six people away from any other person. (…) p. 94-97
Why is this networking between people important? You might argue that decisions made by society are to a high degree controlled by individuals – who ultimately think for themselves. It is clear, however, that this thinking is based on the common information shared between individuals. It is this interaction between individuals that is responsible for the different structures within society as well as society itself. (…) In this case, the information shared between individuals becomes much more important. So how do all people agree to make a decision, if they only interact locally, i.e. with a very limited number of neighbours?
In order to understand how local correlations can lead to the establishment of structures within society, let us return to the example of a solid. Solids are regular arrays of atoms. This time, however, rather than talking about how water becomes ice, let’s consider how a solid becomes a magnet. Every atom in a solid can be thought of as a little magnet on its own. Initially these magnets are completely independent of one another and there is no common north/south alignment – meaning that they are all pointing in random directions. The whole solid – the whole collection of atoms – would then be a random collection of magnets and would not be magnetized as a whole (this is known as a paramagnet). All the random little atomic magnets would simply cancel each other out in effect and there would be no net magnetic field.
However, if the atoms interact, then they can affect each other’s state, i.e. they can cause their neighbours to line up with them. Now through the same principle as six degrees of separation, each atom affects the other atoms it is connected to, and in turn these affect their own neighbours, eventually correlating all the atoms in the solid. If the interaction is stronger than the noise due to the external temperature, then all magnets will eventually align in the same direction and the solid as a whole generates a net magnetic field and hence becomes magnetic! All atoms now behave coherently in tune, just like one big magnet. The point at which all atoms ‘spontaneously’ align is known as the point of phase transition, i.e. the point at which a solid becomes a magnet. (…)
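The alignment story above corresponds to the textbook Ising model of magnetism. The following Metropolis simulation is a rough sketch (lattice size, temperatures, and sweep count are arbitrary choices): it starts from a fully aligned "magnet" and checks whether thermal noise destroys the order.

```python
import math
import random

def ising_magnetization(L=16, T=1.0, sweeps=400, seed=1):
    """Metropolis dynamics on an L x L grid of 'atomic magnets' (spins +/-1).
    Start fully aligned; when the neighbour interaction beats thermal noise
    (T below roughly 2.27 in these units), the alignment survives."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Energy change from flipping spin (i, j): only neighbours matter.
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] *= -1
    # Net magnetization per atom: 1 = all aligned, near 0 = random.
    return abs(sum(map(sum, spin))) / (L * L)

print(ising_magnetization(T=1.0))  # interaction beats noise: near 1
print(ising_magnetization(T=5.0))  # noise beats interaction: near 0
```

The abrupt change in net magnetization as the temperature crosses the critical value is precisely the phase transition the text describes.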
You may object that atoms are simple systems compared to humans. After all humans can think, feel, get angry, while atoms are not alive and their range of behaviour is far simpler. But this is not the point! The point is that we are only focusing on one relevant property of humans (or atoms) here. Atoms are not all that simple either, but we are choosing to make them so by looking only at their magnetic properties. Humans are much more complicated still, but now we only want to know about their political preferences, and these can be quite simple in practice. (…)
This unevenness in the number of contacts leads to a very important model where there is a great deal of interaction with people close by and then, every once in a while, there is a long-distance interaction with someone far away. This is called a ‘small world network’ and is an excellent model for how and why disease propagates rapidly in our world. When we get ill, disease usually spreads quickly to our closest neighbours. Then it is enough that only one of the neighbours takes a long-distance flight and this can then make the virus spread in distant places. And this is why we are very worried about swine flu and all sorts of other potential viruses that can kill humans.
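The "mostly local contacts plus an occasional long-distance flight" picture is the Watts–Strogatz small-world idea. A rough sketch (network size and shortcut count are arbitrary) showing how a handful of long-range links collapses the average distance between people:

```python
import random
from collections import deque

def ring_with_shortcuts(n=200, k=2, shortcuts=0, seed=0):
    """Ring lattice: each node linked to its k nearest neighbours on each
    side, plus a few random long-range 'flights' (small-world shortcuts)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for _ in range(shortcuts):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length from node 0, by breadth-first search."""
    dist = {0: 0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values()) / len(dist)

local = avg_path_length(ring_with_shortcuts())              # purely local: long paths
mixed = avg_path_length(ring_with_shortcuts(shortcuts=20))  # a few flights shrink them
print(local, mixed)
```

This is why one infected passenger on a long-haul flight matters so much: the shortcut does not add many links, but it drastically shortens the paths a virus can travel.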
Let us now consider why some people believe – rightly or wrongly – that the information revolution has transformed, and will continue to transform, our society more than any revolution of the past – such as the industrial revolution discussed in earlier chapters. Some sociologists, such as Manuel Castells, believe that the Internet will bring about much more profound transformations in our society than anything previously seen in history. His logic is based on the above idea of phase transitions, though, being a sociologist, he may not interpret them in quite the same mathematical way a physicist does.
To explain, we can think of early societies as very ‘local’ in nature. One tribe exists here, another over there, but with very little communication between them. Even towards the end of the nineteenth century, the transfer of ideas and communication in general were still very slow. So for a long time humans have lived in societies where communication was very short range. And, in physics, this would mean that abrupt changes are impossible. Societies have other complexities, so I would say that ‘fundamental change is unlikely’ rather than ‘impossible’. Very recently, through the increasing availability of technology, we can travel far and wide, and through the Internet we can learn from and communicate with virtually anyone in the world.
Early societies were like the nearest-neighbour magnet model (the Ising model), while later ones are more like small world networks. Increasingly, however, we are approaching the stage where everyone can and does interact with everyone else. And this is exactly when phase transitions become ever more likely. Money (and even labour) can travel from one end of the globe to the other in a matter of seconds or even faster. This, of course, has an effect on all elements of our society.
Analysing social structures in terms of information theory can frequently reveal very counterintuitive features. This is why it is important to be familiar with the language of information theory: without a formalized framework, some of the most startling and beautiful effects are much harder to understand in terms of their root causes. (…) p. 98
Universe as a quantum computer
Konrad Zuse, the German engineer who built the first programmable computer during World War II, was the first to view the Universe as a computer. (…) The problem, however, is that all these models assume that the Universe is a classical computer. By now we know that the Universe is better understood as a quantum computer.
Our reality evolves because every once in a while we find that we need to edit part of the program that describes reality. We may find that this piece of the program, based on a certain model, is refuted (the underlying model is found to be inaccurate), and hence the program needs to be updated. Refuting a model and changing a part of the program is, as we saw, crucial to changing reality itself because refutations carry much more information than simply confirming a model. (…) p. 192
We can construct our whole reality in this way by looking at it in terms of two distinct but inter-related arrows of knowledge. We have the spontaneous creation of mutual information in the Universe as events unfold, without any prior cause. This kicks off the interplay between the two arrows. On the one hand, through our observations and a series of conjectures and refutations, we compress the information in the Universe into a set of natural laws. These laws are the shortest programs to represent all our observations. On the other hand, we run these programs to generate our picture of reality. It is this picture that then tells us what is, and isn’t, possible to accomplish, in other words, what our limitations are.
The Universe starts empty but potentially with a huge amount of information. The key event that gives the Universe some direction is the first act of ‘symmetry breaking’, the first cut of the sculptor. This act, which we consider as completely random, i.e. without any prior cause, just decides on why one tiny aspect in the Universe is one way rather than another. This first event sets in motion a chain reaction in which, once one rule has been decided, the rest of the Universe needs to proceed in a consistent manner. (…)
This is where the first arrow of knowledge begins. We compress the spontaneous, yet consistent information in the Universe, into a set of natural laws that continuously evolve as we test and discard the erroneous ones. Just as man evolved through a compression of biological information (a series of optimizations for the changing environment), our understanding of the Universe (our reality) has also evolved as we better synthesize and compress the information that we are presented with into more and more accurate laws of Nature. This is how the laws of Nature emerge, and these are the physical, biological, and social principles that our knowledge is based on.
The second arrow of knowledge is the flip-side to the first arrow. Once we have the laws of Nature, we explore their meaning in order to define our reality, in terms of what is and isn’t possible within it. It is a necessary truth that whatever our reality, it is based exclusively on our understanding of these laws. For example, if we have no knowledge of natural selection, all of the species look independently created and without any obvious connection. Of course this is all dynamic in that when we find an event that doesn’t fit our description of reality, then we go back and change the laws, so that the subsequently generated reality also explains this event.
The basis for these two arrows is the darkness of reality, a void from which they were created and within which they operate. Following the first arrow, we ultimately arrive at nothing (ultimately there is no reality, no law without law). The second arrow then lifts us from this nothingness and generates a picture of reality as an interconnected whole.
So our two arrows seem to point in opposite directions to one another. The first compresses the information available into succinct knowledge and the second decompresses the resulting laws into a colourful picture of reality. In this sense our whole reality is encoded into the set of natural laws. We already said that there was an overall direction for information flow in the Universe, i.e. that entropy (disorder) in the Universe can only increase. This gives us a well defined directionality to the Universe, commonly known as the ‘arrow of time’. (…)
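The "laws as shortest programs" idea can be illustrated with an ordinary compressor standing in for the (uncomputable) shortest program: regular, law-governed data compresses enormously, while spontaneous random data does not. A sketch using zlib (the sequence lengths and the modulus are arbitrary choices):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable stand-in
    for the length of the shortest program producing the data."""
    return len(zlib.compress(data, level=9))

# A 'lawful' sequence: fully described by a short rule (a tiny program).
lawful = bytes(i % 7 for i in range(10_000))

# A 'lawless' sequence: spontaneous, patternless information.
random.seed(0)
lawless = bytes(random.randrange(256) for _ in range(10_000))

print(compressed_size(lawful))   # tiny: the regularity is the 'law'
print(compressed_size(lawless))  # about 10,000: nothing to compress
</```

The first arrow corresponds to finding the short description; the second to running it to regenerate, and extend, the picture of reality.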
The first arrow of knowledge clearly acts like a Maxwell’s demon. It constantly combats the arrow of time and tirelessly compresses disorder into something more meaningful. It connects seemingly random and causeless events into a string of mutually inter-related facts. The second arrow of knowledge, however, acts in the opposite direction of increasing the disorder. By changing our view of reality it instructs us that there are more actions we can take within the new reality than we could with the previous, more limited view.
Within us, within all objects in the Universe, lie these two opposing tendencies. So, is this a constant struggle between new information and hence disorder being created in the Universe, and our efforts to order this into a small set of rules? If so, is this a losing battle? (…)
Scientific knowledge proceeds via a dialogue with Nature. We ask ‘yes-no’ questions through our observations of various phenomena.
Information in this way is created out of no information. By taking a stab in the dark we set a marker which we can then use to refine our understanding by asking such ‘yes-no’ questions. (…)
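The "yes-no questions" dialogue is exactly binary search: each answer yields one bit, so pinning down one possibility out of N takes about log2(N) questions. A sketch (the secret number and the range are arbitrary illustrations):

```python
def guess_number(low, high, oracle):
    """Pin down an unknown number in [low, high] with yes/no questions:
    each answer is worth one bit, so ~log2(range) questions suffice."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if oracle(mid):          # the question: 'Is the number <= mid?'
            high = mid
        else:
            low = mid + 1
    return low, questions

secret = 693
value, asked = guess_number(0, 1023, lambda m: secret <= m)
print(value, asked)  # 693 found in 10 questions: log2(1024) = 10
```

Each question that halves the remaining possibilities extracts the maximum one bit; less incisive questions would need more of them.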
The whole of our reality emerges by first using the conjectures and refutations to compress observations and then from this compression we deduce what is and isn’t possible. (…) p. 211-214
Viewing reality as information leads us to recognize two competing trends in its evolution. These trends, or let’s call them arrows, work hand in hand, but point in opposite directions. The first arrow orders the world against the Second Law of thermodynamics and compresses all the spontaneously generated information in the Universe into a set of well-defined principles. The second arrow then generates our view of reality from these principles.
It is clear that the more efficient we are in compressing all the spontaneously generated information, the faster we can expand our reality of what is and isn’t possible. But without the second arrow, without an elementary view of our reality, we cannot even begin to describe the Universe. We cannot access parts of the Universe that have no corresponding basis in our reality. After all, whatever is outside our reality is unknown to us. (…)
By exploring our reality we better understand how to look for and compress the information that the Universe produces. This in turn then affects our reality. Everything that we have understood, every piece of knowledge, has been acquired by feeding these two arrows into one another. Whether it is biological propagation of life, astrophysics, economics, or quantum mechanics, these are all a consequence of our constant re-evaluation of reality. So it’s clear that not only does the second arrow depend on the first, it is natural that the first arrow also depends on the second. (…)
We compress information to generate our laws of Nature, and then use these laws of Nature to generate more information, which then gets compressed back into upgraded laws of Nature.
The dynamics of the two arrows is driven by our desire to understand the Universe. As we drill deeper and deeper into our reality we expect to find a better understanding of the Universe. We believe that the Universe to some degree behaves independently of us and the Second Law tells us that the amount of information in the Universe is increasing. But what if with the second arrow, which generates our view of reality, we can affect parts of the Universe and create new information? In other words, through our existence could we affect the Universe within which we exist? This would make the information generated by us a part of the new information the Second Law talks about.
A scenario like this presents no conceptual problem within our picture. This new information can also be captured by the first arrow, as it fights, through conjectures and refutations, to incorporate any new information into the basic laws of Nature. However, could it be that there is no other information in the Universe than that generated by us as we create our own reality?
This leads us to a startling possibility. If indeed the randomness in the Universe, as demonstrated by quantum mechanics, is a consequence of our generation of reality then it is as if we create our own destiny. It is as if we exist within a simulation, where there is a program that is generating us and everything that we see around us. Think back to the movie The Matrix, where Keanu Reeves lives in a simulation until he is offered a way out, a way back into reality. If the randomness in the Universe is due to our own creation of reality, then there is no way out for us. This is because, in the end, we are creators of our own simulation. In such a scenario, Reeves would wake up in his reality only to find himself sitting at the desk programming his own simulation. This closed loop was echoed by John Wheeler, who said: ‘physics gives rise to observer-participancy; observer-participancy gives rise to information; information gives rise to physics.’
But whether reality is self-simulating (and hence there is no Universe required outside of it) is, by definition, something that we will never know. What we can say, following the logic presented in this book, is that outside of our reality there is no additional description of the Universe that we can understand, there is just emptiness. This means that there is no scope for the ultimate law or supernatural being – given that both of these would exist outside of our reality and in the darkness. Within our reality everything exists through an interconnected web of relationships and the building blocks of this web are bits of information. We process, synthesize, and observe this information in order to construct the reality around us. As information spontaneously emerges from the emptiness we take this into account to update our view of reality. The laws of Nature are information about information and outside of it there is just darkness. This is the gateway to understanding reality.
And I finish with a quote from the Tao Te Ching, which, some 2,500 years earlier, seems to have beaten me to the punch-line:
The Tao that can be told is not the eternal Tao.
The name that can be named is not the eternal name.
The nameless is the beginning of heaven and earth.
The named is the mother of the ten thousand things.
Ever desireless, one can see the mystery.
Ever desiring, one sees the manifestations.
These two spring from the same source but differ in name; this appears as darkness.
Darkness within darkness.
The gate to all mystery.
— Vlatko Vedral, Professor of Physics at the University of Oxford and at the Centre for Quantum Technologies (CQT), National University of Singapore, Decoding Reality: the universe as quantum information, Oxford University Press, 2010
Vlatko Vedral: Everything is information
Physicist Vlatko Vedral explains to Aleks Krotoski why he believes the fundamental stuff of the universe is information and how he hopes that one day everything will be explained in this way.
“In Decoding Reality, Vedral argues that we should regard the entire universe as a gigantic quantum computer. Wacky as that may sound, it is backed up by hard science. The laws of physics show that it is not only possible for electrons to store and flip bits: it is mandatory. For more than a decade, quantum-information scientists have been working to determine just how the universe processes information at the most microscopic scale.” — The universe is a quantum computer, New Scientist, 22 March 2010
☞ Vlatko Vedral, Living in a Quantum World (pdf), Scientific American, 2011
☞ Mark Buchanan, Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, New Scientist, 05 Sep 2011
☞ The Concept of Laws. The special status of the laws of mathematics and physics
☞ David Deutsch: A new way to explain explanation, TED
☞ Stephen Hawking on the universe’s origin
☞ The Relativity of Truth - a brief résumé
☞ Vlatko Vedral, Information and Physics, University of Oxford, National University of Singapore (2012)