Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Archive

Feb 3rd, Sun

'Elegance,' 'Symmetry,' and 'Unity': Is Scientific Truth Always Beautiful?


"Today the grandest quest of physics is to render compatible the laws of quantum physics—how particles in the subatomic world behave—with the rules that govern stars and planets. That’s because, at present, the formulas that work on one level implode into meaninglessness at the other level. This is deeply ungainly, and significant when the two worlds collide, as occurs in black holes. The quest to unify quantum physics (micro) and general relativity (macro) has spawned heroic efforts, the best-known candidate for a grand unifying concept presently being string theory. String theory proposes that subatomic particles are not particles at all but closed or open vibrating strings, so tiny, a hundred billion billion times shorter than an atomic nucleus’s diameter, that no human instrument can detect them. It’s the “music of the spheres”—think vibrating harp strings—made literal.

A concept related to string theory is “supersymmetry.” Physicists have shown that at extremely high energy levels, similar to those that existed a micro-blink after the big bang, the strengths of the electromagnetic force and the strong and weak nuclear forces (which work only on subatomic levels) come tantalizingly close to converging. Physicists have conceived of scenarios in which the three come together precisely, an immensely pleasing accomplishment, both intellectually and aesthetically. But those scenarios imply the existence of as-yet-undiscovered “partners” for existing particles: The electron would be joined by a “selectron,” quarks by “squarks,” and so on. There was great hope that the $8-billion Large Hadron Collider would provide indirect evidence for these theories, but so far it hasn’t. (…)

[Marcelo Gleiser]: “We look out in the world and we see a very complicated pattern of stuff, and the notion of symmetry is an important way to make sense of the mess. The sun and moon are not perfect spheres, but that kind of approximation works incredibly well to simulate the behavior of these bodies.”

But the idea that what’s beautiful is true and that “symmetry rules,” as Gleiser puts it, “has been catapulted to an almost religious notion in the sciences,” he says. In his own book A Tear at the Edge of Creation (Free Press), Gleiser made a case for the beauty inherent in asymmetry—in the fact that neutrinos, the most common particles in the universe, spin only in one direction, for example, or that amino acids can be produced in laboratories in “left-handed” or “right-handed” forms, but only the “left-handed” form appears in nature. These are nature’s equivalent of Marilyn Monroe’s mole, attractive because of their lopsidedness, and Orrell also makes use of those examples.

But Weinberg, the Nobel-winning physicist at the University of Texas at Austin, counters: “Betting on beauty works remarkably well.” The Large Hadron Collider’s failure to produce evidence of supersymmetry is “disappointing,” he concedes, but he notes that plenty of elegant theories have waited years, even decades, for confirmation. Copernicus’s theory of a Sun-centered universe was developed entirely without experiment—he relied on Ptolemy’s data—and it was eventually embraced precisely because his description of planetary motion was simply more economical and elegant than those of his predecessors; it turned out to be true.

Closer to home, Weinberg says his own work on the weak nuclear force and electromagnetism had its roots in remarkably elegant, purely abstract theories of researchers who came before him, theories that, at first, seemed to be disproved by evidence but were too elegant to stop thinking about. (…)

To Orrell, it’s not just that many scientists are too enamored of beauty; it’s that their notion of beauty is ossified. It is “kind of clichéd,” Orrell says. “I find things like perfect symmetry uninspiring.” (In fairness, the Harvard theoretical physicist Lisa Randall has used the early unbalanced sculptures of Richard Serra as an example of how the asymmetrical can be as fascinating as the symmetrical, in art as in physics. She finds this yin-yang tension perfectly compatible with modern theorizing.)

Orrell also thinks it is more useful to study the behavior of complex systems rather than their constituent elements. (…)

Outside of physics, Orrell reframes complaints about “perfect-model syndrome” in aesthetic terms. Classical economists, for instance, treat humans as symmetrical in terms of what motivates decision-making. In contrast, behavioral economists are introducing asymmetry into that field by replacing Homo economicus with a quirkier, more idiosyncratic and human figure—an aesthetic revision, if you like. (…)

The broader issue, though, is whether science’s search for beautiful, enlightening patterns has reached a point of diminishing returns. If science hasn’t yet hit that point, might it be approaching it? The search for symmetry in nature has had so many successes, observes Stephon Alexander, a Dartmouth physicist, that “there is a danger of forgetting that nature is the one that decides where that game ends.”

Christopher Shea, American writer and editor, Is Scientific Truth Always Beautiful?, The Chronicle of Higher Education, Jan 28, 2013.

The Asymmetry of Life

                                     Image courtesy of Ben Lansky

"Look into a mirror and you’ll simultaneously see the familiar and the alien: an image of you, but with left and right reversed.

Left-right inequality has significance far beyond that of mirror images, touching on the heart of existence itself. From subatomic physics to life, nature prefers asymmetry to symmetry. There are no equal liberties where neutrinos and proteins are concerned. In the case of neutrinos, particles that spill out of the sun’s nuclear furnace and pass through you by the trillions every second, only leftward-spinning ones exist. Why? No one really knows.

Proteins are long chains of amino acids that can be either left- or right-handed. Here, handedness has to do with how these molecules interact with polarized light, rotating it either to the left or to the right. When synthesized in the lab, amino acids come out fifty-fifty. In living beings, however, all proteins are made of left-handed amino acids. And all sugars in RNA and DNA are right-handed. Life is fundamentally asymmetric.

Is the handedness of life, its chirality (think chiromancer, which means “palm reader”), linked to its origins some 3.5 billion years ago, or did it develop after life was well on its way? If one traces life’s origins from its earliest stages, it’s hard to see how life began without molecular building blocks that were “chirally pure,” consisting solely of left- or right-handed molecules. Indeed, many models show how chirally pure amino acids may link to form precursors of the first protein-like chains. But what could have selected left-handed over right-handed amino acids?

My group’s research suggests that early Earth’s violent environmental upheavals caused many episodes of chiral flip-flopping. The observed left-handedness of terrestrial amino acids is probably a local fluke. Elsewhere in the universe, perhaps even on other planets and moons of our solar system, amino acids may be right-handed. But only sampling such material from many different planetary platforms will determine whether, on balance, biology is left-handed, right-handed, or ambidextrous.”

Marcelo Gleiser, The Asymmetry of Life, Seed Magazine, Sep 7, 2010.

"One of the deepest consequences of symmetries of any kind is their relationship with conservation laws. Every symmetry in a physical system, be it balls rolling down planes, cars moving on roads, planets orbiting the Sun, a photon hitting an electron, or the expanding Universe, is related to a conserved quantity, a quantity that remains unchanged in the course of time. In particular, external (spatial and temporal) symmetries are related to the conservation of momentum and energy, respectively: the total energy and momentum of a system that is temporally and spatially symmetric remains unchanged.

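The link between symmetries and conserved quantities invoked here is Noether’s theorem. As a minimal sketch of the two external cases mentioned, in standard Lagrangian notation (textbook statements added for illustration, not Gleiser’s own formulas), for a system with Lagrangian L(q_i, \dot{q}_i, t):

\[
  \frac{\partial L}{\partial t} = 0
  \;\Longrightarrow\;
  E = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L
  \ \text{ is conserved (energy)},
\]
\[
  L(q_i + \epsilon, \dot{q}_i, t) = L(q_i, \dot{q}_i, t)
  \;\Longrightarrow\;
  P = \sum_i \frac{\partial L}{\partial \dot{q}_i}
  \ \text{ is conserved (momentum)}.
\]

Rotational symmetry yields conservation of angular momentum in the same way.
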
The elementary particles of matter live in a reality very different from ours. The signature property of their world is change: particles can morph into one another, changing their identities. […] One of the greatest triumphs of twentieth-century particle physics was the discovery of the rules dictating the many metamorphoses of matter particles and the symmetry principles behind them. One of its greatest surprises was the realization that some of the symmetries are violated and that these violations have very deep consequences. (…) p.27

Even though matter and antimatter appear on an equal footing in the equations describing relativistic particles, antimatter occurs only rarely. […] Somehow, during its infancy, the cosmos selected matter over antimatter. This imperfection is the single most important factor dictating our existence. (…)

Back to the early cosmos: had there been an equal quantity of antimatter particles around, they would have annihilated the corresponding particles of matter and all that would be left would be lots of gamma-ray radiation and some leftover protons and antiprotons in equal amounts. Definitely not our Universe. The tiny initial excess of matter particles is enough to explain the overwhelming excess of matter over antimatter in today’s Universe. The existence of matter, the stuff we and everything else are made of, depends on a primordial imperfection, the matter-antimatter asymmetry. (…) p.29.

We have seen how the weak interactions violate a series of internal symmetries: charge conjugation, parity, and even the combination of the two. The consequences of these violations are deeply related to our existence: they set the arrow of time at the microscopic level, providing a viable mechanism to generate the excess of matter over antimatter. […] The message from modern particle physics and cosmology is clear: we are the products of imperfections in Nature. (…)

It is not symmetry and perfection that should be our guiding principle, as it has been for millennia. We don’t have to look for the mind of God in Nature and try to express it through our equations. The science we create is just that, our creation. Wonderful as it is, it is always limited, it is always constrained by what we know of the world. […] The notion that there is a well-defined hypermathematical structure that determines all there is in the cosmos is a Platonic delusion with no relationship to physical reality. (…) p. 35.

The critics of this idea miss the fact that a meaningless cosmos that produced humans (and possibly other intelligences) will never be meaningless to them (or to the other intelligences). To exist in a purposeless Universe is even more meaningful than to exist as the result of some kind of mysterious cosmic plan. Why? Because it elevates the emergence of life and mind to a rare event, as opposed to a ubiquitous and premeditated one. For millennia, we believed that God (or gods) protected us from extinction, that we were chosen to be here and thus safe from ultimate destruction. […]

When science proposes that the cosmos has a sense of purpose wherein life is a premeditated outcome of natural events, a similar safety-blanket mechanism is at play: if life fails here, it will succeed elsewhere. We don’t really need to preserve it. To the contrary, I will argue that unless we accept our fragility and cosmic loneliness, we will never act to protect what we have. (…)

The laws of physics and the laws of chemistry as presently understood have nothing to say about the emergence of life. As Paul Davies remarked in Cosmic Jackpot, notions of a life principle suffer from being teleological, explaining life as the end goal, a purposeful cosmic strategy. The human mind, of course, would be the crown jewel of such creative drive. Once again we are “chosen” ones, a dangerous proposal. […] Arguments shifting the “mind of God” to the “mind of the cosmos” perpetuate our obsession with the notion of Oneness. Our existence need not be planned to be meaningful.” (…) p.49.

Unified theories, life principles, and self-aware universes are all expressions of our need to find a connection between who we are and the world we live in. I do not question the extreme importance of understanding the connection between man and the cosmos. But I do question that it has to derive from unifying principles. (…) p.50.

My point is that there is no Final Truth to be discovered, no grand plan behind creation. Science advances as new theories engulf or displace old ones. The growth is largely incremental, punctuated by unexpected, worldview-shattering discoveries about the workings of Nature. […]

Once we understand that science is the creation of human minds and not the pursuit of some divine plan (even if metaphorically) we shift the focus of our search for knowledge from the metaphysical to the concrete. (…) p.51.

For a clever fish, water is “just right” for it to swim in. Had it been too cold, it would freeze; too hot, it would boil. Surely the water temperature had to be just right for the fish to exist. “I’m very important. My existence cannot be an accident,” the proud fish would conclude. Well, he is not very important. He is just a clever fish. The ocean temperature is not being controlled with the purpose of making it possible for it to exist. Quite the opposite: the fish is fragile. A sudden or gradual temperature swing would kill it, as any trout fisherman knows. We so crave meaningful connections that we see them even when they are not there.

We are soulful creatures in a harsh cosmos. This, to me, is the essence of the human predicament. The gravest mistake we can make is to think that the cosmos has plans for us, that we are somehow special from a cosmic perspective. (…) p.52

We are witnessing the greatest mass extinction since the demise of the dinosaurs 65 million years ago. The difference is that for the first time in history, humans, and not physical causes, are the perpetrators. […] Life recovered from the previous five mass extinctions because the physical causes eventually ceased to act. Unless we understand what is happening and start acting together as a species, we may end up carving the path toward our own destruction. (…)” p.56

Marcelo Gleiser is the Appleton Professor of Natural Philosophy at Dartmouth College, A Tear at the Edge of Creation, Free Press, 2010.

See also:

Symmetry in Physics - Bibliography - PhilPapers
The Concept of Laws. The special status of the laws of mathematics and physics, Lapidarium notes
Universe tag on Lapidarium notes

Jul 23rd, Mon

S. Hawking, L. Mlodinow on why there is something rather than nothing and why the fundamental laws are as we have described them


"According to the idea of model-dependent realism, our brains interpret the input from our sensory organs by making a model of the outside world. We form mental concepts of our home, trees, other people, the electricity that flows from wall sockets, atoms, molecules, and other universes. These mental concepts are the only reality we can know. There is no model-independent test of reality. It follows that a well-constructed model creates a reality of its own. An example that can help us think about issues of reality and creation is the Game of Life, invented in 1970 by a young mathematician at Cambridge named John Conway.

The word “game” in the Game of Life is a misleading term. There are no winners and losers; in fact, there are no players. The Game of Life is not really a game but a set of laws that govern a two-dimensional universe. It is a deterministic universe: Once you set up a starting configuration, or initial condition, the laws determine what happens in the future.

The world Conway envisioned is a square array, like a chessboard, but extending infinitely in all directions. Each square can be in one of two states: alive (shown in green) or dead (shown in black). Each square has eight neighbors: the up, down, left, and right neighbors and four diagonal neighbors. Time in this world is not continuous but moves forward in discrete steps. Given any arrangement of dead and live squares, the number of live neighbors determines what happens next according to the following laws:

1. A live square with two or three live neighbors survives (survival).
2. A dead square with exactly three live neighbors becomes a live cell (birth).
3. In all other cases a cell dies or remains dead. In the case that a live square has zero or one neighbor, it is said to die of loneliness; if it has more than three neighbors, it is said to die of overcrowding.

That’s all there is to it: Given any initial condition, these laws generate generation after generation. An isolated living square or two adjacent live squares die in the next generation because they don’t have enough neighbors. Three live squares along a diagonal live a bit longer. After the first time step the end squares die, leaving just the middle square, which dies in the following generation. Any diagonal line of squares “evaporates” in just this manner. But if three live squares are placed horizontally in a row, again the center has two neighbors and survives while the two end squares die, but in this case the cells just above and below the center cell experience a birth. The row therefore turns into a column. Similarly, in the next generation the column turns back into a row, and so forth. Such oscillating configurations are called blinkers.
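
The rules above are simple enough to run directly. Here is a minimal sketch in Python, assuming a sparse-set representation of live cells (this illustration and its names are not from the book), reproducing the blinker just described:

from collections import Counter

def step(live):
    """Apply one generation of Conway's laws to a set of live (x, y) cells."""
    # For every cell in the plane, count how many of its eight neighbors are live.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly three live neighbors; survival with two or three.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}   # three live squares in a horizontal row
column = step(blinker)               # the row turns into a column...
print(sorted(column))                # [(1, 0), (1, 1), (1, 2)]
print(step(column) == blinker)       # ...and back into a row: True

Because live cells are stored as a sparse set, the board is effectively unbounded, matching Conway’s infinite array.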

If three live squares are placed in the shape of an L, a new behavior occurs. In the next generation the square cradled by the L will give birth, leading to a 2 × 2 block. The block belongs to a pattern type called the still life because it will pass from generation to generation unaltered. Many types of patterns exist that morph in the early generations but soon turn into a still life, or die, or return to their original form and then repeat the process. There are also patterns called gliders, which morph into other shapes and, after a few generations, return to their original form, but in a position one square down along the diagonal. If you watch these develop over time, they appear to crawl along the array. When these gliders collide, curious behaviors can occur, depending on each glider’s shape at the moment of collision.

What makes this universe interesting is that although the fundamental “physics” of this universe is simple, the “chemistry” can be complicated. That is, composite objects exist on different scales. At the smallest scale, the fundamental physics tells us that there are just live and dead squares. On a larger scale, there are gliders, blinkers, and still-life blocks. At a still larger scale there are even more complex objects, such as glider guns: stationary patterns that periodically give birth to new gliders that leave the nest and stream down the diagonal. (…)

If you observed the Game of Life universe for a while on any particular scale, you could deduce laws governing the objects on that scale. For example, on the scale of objects just a few squares across you might have laws such as “Blocks never move,” “Gliders move diagonally,” and various laws for what happens when objects collide. You could create an entire physics on any level of composite objects. The laws would entail entities and concepts that have no place among the original laws. For example, there are no concepts such as “collide” or “move” in the original laws. Those describe merely the life and death of individual stationary squares. As in our universe, in the Game of Life your reality depends on the model you employ.

Conway and his students created this world because they wanted to know if a universe with fundamental rules as simple as the ones they defined could contain objects complex enough to replicate. In the Game of Life world, do composite objects exist that, after merely following the laws of that world for some generations, will spawn others of their kind? Not only were Conway and his students able to demonstrate that this is possible, but they even showed that such an object would be, in a sense, intelligent! What do we mean by that? To be precise, they showed that the huge conglomerations of squares that self-replicate are “universal Turing machines.” For our purposes that means that for any calculation a computer in our physical world can in principle carry out, if the machine were fed the appropriate input—that is, supplied the appropriate Game of Life world environment—then some generations later the machine would be in a state from which an output could be read that would correspond to the result of that computer calculation. (…)

In the Game of Life, as in our world, self-reproducing patterns are complex objects. One estimate, based on the earlier work of mathematician John von Neumann, places the minimum size of a self-replicating pattern in the Game of Life at ten trillion squares—roughly the number of molecules in a single human cell. One can define living beings as complex systems of limited size that are stable and that reproduce themselves.

The objects described above satisfy the reproduction condition but are probably not stable: A small disturbance from outside would probably wreck the delicate mechanism. However, it is easy to imagine that slightly more complicated laws would allow complex systems with all the attributes of life. Imagine an entity of that type, an object in a Conway-type world. Such an object would respond to environmental stimuli, and hence appear to make decisions. Would such life be aware of itself? Would it be self-conscious? This is a question on which opinion is sharply divided. Some people claim that self-awareness is something unique to humans. It gives them free will, the ability to choose between different courses of action.

How can one tell if a being has free will?

If one encounters an alien, how can one tell whether it is just a robot or has a mind of its own? The behavior of a robot would be completely determined, unlike that of a being with free will. Thus one could in principle detect a robot as a being whose actions can be predicted. (…) This may be impossibly difficult if the being is large and complex. We cannot even solve exactly the equations for three or more particles interacting with each other. Since an alien the size of a human would contain about a thousand trillion trillion particles, even if the alien were a robot, it would be impossible to solve the equations and predict what it would do. We would therefore have to say that any complex being has free will—not as a fundamental feature, but as an effective theory, an admission of our inability to do the calculations that would enable us to predict its actions.

The example of Conway’s Game of Life shows that even a very simple set of laws can produce complex features similar to those of intelligent life. There must be many sets of laws with this property. What picks out the fundamental laws (as opposed to the apparent laws) that govern our universe? As in Conway’s universe, the laws of our universe determine the evolution of the system, given the state at any one time. In Conway’s world we are the creators—we choose the initial state of the universe by specifying objects and their positions at the start of the game. (…)

If the total energy of the universe must always remain zero, and it costs energy to create a body, how can a whole universe be created from nothing? That is why there must be a law like gravity. Because gravity is attractive, gravitational energy is negative: One has to do work to separate a gravitationally bound system, such as the earth and moon. This negative energy can balance the positive energy needed to create matter, but it’s not quite that simple. The negative gravitational energy of the earth, for example, is less than a billionth of the positive energy of the matter particles the earth is made of. A body such as a star will have more negative gravitational energy, and the smaller it is (the closer the different parts of it are to each other), the greater this negative gravitational energy will be. But before it can become greater than the positive energy of the matter, the star will collapse to a black hole, and black holes have positive energy. That’s why empty space is stable. Bodies such as stars or black holes cannot just appear out of nothing. But a whole universe can.
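
The “less than a billionth” figure can be checked to order of magnitude. A rough sketch in Python, assuming a uniform-density Earth and the standard Newtonian binding-energy formula 3GM^2/5R (an illustration, not a calculation from the book):

# Order-of-magnitude check: Earth's gravitational binding energy vs. rest-mass energy.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.97e24     # mass of Earth, kg
R = 6.37e6      # radius of Earth, m

E_grav = -3 * G * M**2 / (5 * R)   # binding energy of a uniform sphere, ~ -2.2e32 J
E_mass = M * c**2                  # rest-mass energy of Earth's matter, ~ 5.4e41 J

print(abs(E_grav) / E_mass)        # ~ 4e-10, indeed less than a billionth

The ratio comes out around 4e-10, consistent with the claim in the text.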

Because gravity shapes space and time, it allows space-time to be locally stable but globally unstable. On the scale of the entire universe, the positive energy of the matter can be balanced by the negative gravitational energy, and so there is no restriction on the creation of whole universes. Because there is a law like gravity, the universe can and will create itself from nothing. (…) Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.

Why are the fundamental laws as we have described them?

The ultimate theory must be consistent and must predict finite results for quantities that we can measure. We’ve seen that there must be a law like gravity, and we saw in Chapter 5 that for a theory of gravity to predict finite quantities, the theory must have what is called supersymmetry between the forces of nature and the matter on which they act. M-theory is the most general supersymmetric theory of gravity. For these reasons M-theory is the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself. We must be part of this universe, because there is no other consistent model.

M-theory is the unified theory Einstein was hoping to find. The fact that we human beings—who are ourselves mere collections of fundamental particles of nature—have been able to come this close to an understanding of the laws governing us and our universe is a great triumph. But perhaps the true miracle is that abstract considerations of logic lead to a unique theory that predicts and describes a vast universe full of the amazing variety that we see. If the theory is confirmed by observation, it will be the successful conclusion of a search going back more than 3,000 years. We will have found the grand design.”

Stephen Hawking, British theoretical physicist and author, Leonard Mlodinow, The Grand Design, Random House, 2010.

See also:

Stephen Hawking on the universe’s origin
☞ Tim Maudlin, What Happened Before the Big Bang? The New Philosophy of Cosmology
Vlatko Vedral: Decoding Reality: the universe as quantum information
The Concept of Laws. The special status of the laws of mathematics and physics
Raphael Bousso: Thinking About the Universe on the Larger Scales
Lisa Randall on the effective theory

May 27th, Sun

Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense


“At the core of all well-founded belief lies belief that is unfounded.”

Ludwig Wittgenstein, On Certainty, #253, J. & J. Harper Editions, New York, 1969. 

"The value of philosophy is, in fact, to be sought largely in its very uncertainty. The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find that even the most everyday things lead to problems to which only very incomplete answers can be given.

Philosophy, though unable to tell us with certainty what is the true answer to the doubts it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never traveled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.”

Bertrand Russell, The Problems of Philosophy (1912), Cosimo, Inc., 2010, pp. 113-114.

"We say that we have some theories about science. Science is about hypothetico-deductive methods: we have observations, we have data, and the data need to be organized into theories. So then we have theories. These theories are suggested or produced from the data somehow, then checked in terms of the data. Then time passes, we have more data, theories evolve, we throw away a theory, and we find another theory which is better, a better understanding of the data, and so on and so forth. This is a standard idea of how science works, which implies that science is about empirical content, that the true, interesting, relevant content of science is its empirical content. Since theories change, the empirical content is the solid part of what science is. Now, there’s something disturbing, for me as a theoretical scientist, in all this. I feel that something is missing. Something of the story is missing. I’ve been asking myself what this missing thing is. (…)

This is particularly relevant today in science, and particularly in physics, because, if I’m allowed to be polemical, in my field, fundamental theoretical physics, we have been failing for 30 years. There hasn’t been a major success in theoretical physics in the last few decades, after the standard model, somehow. Of course there are ideas. These ideas might turn out to be right. Loop quantum gravity might turn out to be right, or not. String theory might turn out to be right, or not. But we don’t know, and for the moment, nature has not said yes in any sense.

I suspect that this might be in part because of the wrong ideas we have about science, and because methodologically we are doing something wrong, at least in theoretical physics, and perhaps also in other sciences.

Anaximander. Changing something in the conceptual structure that we have in grasping reality

Let me tell you a story to explain what I mean. The story is an old story about my latest, greatest passion outside theoretical physics: an ancient scientist, or so I would say, even if he is often called a philosopher: Anaximander. I am fascinated by this character, Anaximander. I went into understanding what he did, and to me he’s a scientist. He did something that is very typical of science, and which shows some aspect of what science is. So what is the story with Anaximander? It’s the following, in brief:

Until him, all the civilizations of the planet, everybody around the world, thought that the structure of the world was: the sky over our heads and the earth under our feet. There’s an up and a down, heavy things fall from the up to the down, and that’s reality. Reality is oriented up and down, heaven’s up and earth is down. Then comes Anaximander and says: no, it is something else. ‘The earth is a finite body that floats in space, without falling, and the sky is not just over our head; it is all around.’

How did he get it? Well, obviously he looks at the sky: you see things going around, the stars, the heavens, the moon, the planets; everything moves around and keeps turning around us. It’s sort of reasonable to think that below us is nothing, so it seems simple to get to this conclusion. Except that nobody else got to this conclusion. In centuries and centuries of ancient civilizations, nobody got there. The Chinese didn’t get there until the 17th century, when Matteo Ricci and the Jesuits went to China and told them, in spite of centuries of an Imperial Astronomical Institute studying the sky. The Indians only learned this when the Greeks arrived to tell them. The Africans, in America, in Australia… nobody else got to this simple realization that the sky is not just over our head, it’s also under our feet. Why?

Because obviously it’s easy to suggest that the earth sort of floats in nothing, but then you have to answer the question: why doesn’t it fall? The genius of Anaximander was to answer this question. We know his answer, from Aristotle, from other people. He doesn’t answer this question, in fact. He questions this question. He says why should it fall? Things fall toward the earth. Why the earth itself should fall? In other words, he realizes that the obvious generalization from every small heavy object falling, to the earth itself falling, might be wrong. He proposes an alternative, which is that objects fall towards the earth, which means that the direction of falling changes around the earth.

This means that up and down become notions relative to the earth, which is rather simple for us to figure out now: we’ve learned this idea. But think of the difficulty we had as children in understanding how people in Sydney could live upside-down: it clearly requires changing something structural in the basic language in terms of which we understand the world. In other words, up and down mean something different before and after Anaximander’s revolution.

He understands something about reality, essentially by changing something in the conceptual structure that we have in grasping reality. In doing so, he is not doing a theory; he understands something which in some precise sense is forever. It’s some uncovered truth, which to a large extent is a negative truth. He frees ourselves from prejudice, a prejudice that was ingrained in the conceptual structure we had for thinking about space.

Why do I think this is interesting? Because I think that this is what happens at every major step, at least in physics; in fact, I think this is what happens at every step, even the ones that are not major. When I give a thesis problem to students, most of the time the problem is not solved. It’s not solved because the solution, most of the time, does not lie in answering the question; it lies in questioning the question itself. It is realizing that in the way the problem was formulated there was some implicit prejudiced assumption, and that was the one to be dropped.

If this is so, the idea that we have data and theories, and then we have a rational agent that constructs theories from the data using his rationality, his mind, his intelligence, his conceptual structure, and juggles theories and data, doesn’t make any sense, because what is being challenged at every step is not the theory, it’s the conceptual structure used in constructing theories and interpreting the data. In other words, it’s not by changing theories that we go ahead, but by changing the way we think about the world.

The prototype of this way of thinking, I think the example that makes it more clear, is Einstein's discovery of special relativity. On the one hand there was Newtonian mechanics, which was extremely successful with its empirical content. On the other hand there was Maxwell’s theory, with its empirical content, which was extremely successful, too. But there was a contradiction between the two.

If Einstein had gone to school to learn what science is, if he had read Kuhn and the philosophers explaining what science is, if he were any one of my colleagues today looking for a solution to the big problems of physics, what would he do?

He would say, okay, the empirical content is the strong part of the theory. The idea in classical mechanics that velocity is relative: forget about it. The Maxwell equations, forget about them. Because this is a volatile part of our knowledge. The theories themselves have to be changed, okay? What we keep solid is the data, and we modify the theory so that it makes sense coherently, and coherently with the data.

That’s not at all what Einstein does. Einstein does the contrary. He takes the theories very seriously. He believes the theory. He says, look, classical mechanics is so successful that when it says that velocity is relative, we should take it seriously, and we should believe it. And the Maxwell equations are so successful that we should believe the Maxwell equations. He has so much trust in the theory itself, in the qualitative content of the theory, that qualitative content that Kuhn says changes all the time, that we learned not to take too seriously, and so much faith in this, confidence in that, that he’s ready to do what? To force coherence between these two, the two theories, by challenging something completely different, which is something that is in our head, which is how we think about time.

He’s changing something in common sense, something about the elementary structure in terms of which we think of the world, on the basis of the trust of the past results in physics. This is exactly the opposite of what is done today in physics. If you read Physical Review today, it’s all about theories that challenge completely and deeply the content of previous theories: so theories in which there is no Lorentz invariance, which are not relativistic, which are not general covariant, quantum mechanics might be wrong…

Every physicist today is immediately ready to say, okay, all of our past knowledge about the world is wrong. Let’s randomly pick some new idea. I suspect that this is not a small component of the long-term lack of success of theoretical physics. You understand something new about the world, either from new data that arrive, or from thinking deeply on what we have already learned about the world. But thinking means also accepting what we’ve learned, challenging what we think, and knowing that in some of the things that we think, there may be something to modify and to change.

Science is not about the data, but about the tools that we use

What, then, are the aspects of doing science that I think are under-valued and should come up front? First, science is about constructing visions of the world, about rearranging our conceptual structure, about creating new concepts which were not there before, and even more, about changing, challenging the a priori that we have. So it has nothing to do with the assembly of data and the way of organizing that assembly. It has everything to do with the way we think, and with our mental vision of the world. Science is a process in which we keep exploring ways of thinking, changing our image of the world, our vision of the world, to find new ones that work a little bit better.

In doing that, what we have learned in the past is our main ingredient, especially the negative things we have learned. If we have learned that the earth is not flat, there will be no theory in the future in which the earth is ‘flat.’ If we have learned that the earth is not at the center of the universe, that’s forever. We’re not going to go back on this. If you have learned that simultaneity is relative, with Einstein, we’re not going back to absolute simultaneity, like many people think. This means that when an experiment measures neutrinos going faster than light, we should be very suspicious, and of course check and see whether there is something very deep that is happening. But it is absurd that everybody jumps and says okay, Einstein was wrong, just for a little anomaly that shows so. It never works like that in science.

The past knowledge is always with us, and it’s our main ingredient for understanding. The theoretical ideas which are based on ‘let’s imagine that this may happen because why not’ are not taking us anywhere.

I seem to be saying two things that contradict each other. On the one hand, we trust the knowledge, and on the other hand, we are always ready to modify, in depth, part of our conceptual structure about the world. There is no contradiction between the two, because the idea of the contradiction comes from what I see as the deepest misunderstanding about science, which is the idea that science is about certainty.

Science is not about certainty. Science is about finding the most reliable way of thinking, at the present level of knowledge. Science is extremely reliable; it’s not certain. In fact, not only is it not certain, but it’s the lack of certainty that grounds it. Scientific ideas are credible not because they are sure, but because they are the ones that have survived all the possible past critiques, and they are the most credible because they were put on the table for everybody’s criticism.

The very expression ‘scientifically proven’ is a contradiction in terms. There is nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, we have prejudices. We have ingrained prejudices. In our conceptual structure for grasping reality there might be something not appropriate, something we may have to revise to understand better. So at any moment, we have a vision of reality that is effective, it’s good, it’s the best we have found so far. It’s the most credible we have found so far; it’s mostly correct.

But at the same time it’s not taken for certain, and any element of it is a priori open for revision. Why do we have this continuous…? On the one hand, we have this brain, and it has evolved for millions of years. It has evolved for us basically to run the savannah, to run after deer and eat them, and to try not to be eaten by lions. We have a brain that is tuned to meters and hours, which is not particularly well-tuned to think about atoms and galaxies. So we have to get out of that.

At the same time I think we have been selected for going out of the forest, perhaps, going out of Africa, for being as smart as possible, as animals that escape lions. This continuous effort that is part of us to change our own way of thinking, to readapt, is very much part of our nature. We are not changing our mind away from nature; it is our natural history that continues to change that.

If I can make a final comment about this way of thinking about science, or two final comments: One is that science is not about the data. The empirical content of a scientific theory is not what is relevant. The data serve to suggest the theory, to confirm the theory, to disconfirm the theory, to prove the theory wrong. But these are the tools that we use. What interests us is the content of the theory. What interests us is what the theory says about the world. General relativity says space-time is curved. The data of general relativity are that Mercury’s perihelion moves 43 arcseconds per century with respect to the value computed with Newtonian mechanics.

Who cares? Who cares about these details? If that was the content of general relativity, general relativity would be boring. General relativity is interesting not because of its data, but because it tells us that as far as we know today, the best way of conceptualizing space-time is as a curved object. It gives us a better way of grasping reality than Newtonian mechanics, because it tells us that there can be black holes, because it tells us there’s a Big Bang. This is the content of the scientific theory.

All living beings on earth have common ancestors. This is a content of scientific theory, not the specific data used to check the theory. So the focus of scientific thinking, I believe, should be on the content of the theories, the past theories, the previous theories, trying to see what they tell us concretely and what they suggest for changing in our conceptual frame itself.

Scientific thinking vs religious thinking

The final consideration regards just one comment about this understanding of science and this long conflict that has crossed the centuries between scientific thinking and religious thinking. I think often it is misunderstood. The question is, why can’t we live happily together, and why can’t people pray to their gods and study the universe without this continuous clash? I think that this continuous clash is a little bit unavoidable, for the opposite reason from the one often presented. It’s unavoidable not because science pretends to know the answers. But it’s the other way around, because if scientific thinking is this, then it is a constant reminder to ourselves that we don’t know the answers.

In religious thinking, often this is unacceptable. What is unacceptable is not a scientist who says ‘I know,’ but a scientist who says ‘I don’t know, and how could you know?’ At least in many religions, or in some ways of being religious, there is an idea that there should be truth that one can hold and that is not to be questioned. This way of thinking is naturally disturbed by a way of thinking based on continuous revision, not only of the theories but even of the core ground of the way in which we think.

The core of science is not certainty, it’s continuous uncertainty

So, summarizing, I think science is not about data; it’s not about the empirical content; it’s about our vision of the world. It’s about overcoming our own ideas, and about continuously going beyond common sense. Science is a continuous challenge of common sense, and the core of science is not certainty, it’s continuous uncertainty. I would even say the joy of taking what we think, being aware that in everything we think there are probably still an enormous number of prejudices and mistakes, and trying to learn to look a little bit larger, knowing that there is always a larger point of view to expect in the future.

We are very far from the final theory of the world, in my field, in physics, I think extremely far. Every hope of saying, well, we are almost there, we’ve solved all the problems, is nonsense. And we are very wrong when we discard the value of theories like quantum mechanics, general relativity or special relativity, for that matter, and throw them away, trying something else randomly. On the basis of what we know, we should learn something more, and at the same time we should somehow take our vision for what it is, the best vision that we have, but then continuously evolve that vision. (…)

String theory's a beautiful theory. It might work, but I suspect it's not going to work. I suspect it's not going to work because it's not sufficiently grounded in everything we know so far about the world, and especially in what I think or perceive as the main physical content of general relativity.  

String theory’s a big guesswork. I think physics has never been guesswork; it has been a way of unlearning how to think about something, and learning how to think a little bit differently by reading the novelty in the details of what we already know. Copernicus didn’t have any new data or any major new idea; he just took Ptolemy and, in the details of Ptolemy, in the fact that the equants, the epicycles, the deferents were in certain proportions between them, he read a way to look at the same construction from a slightly different perspective and discover that the earth is not the center of the universe.

Einstein, as I said, took seriously Maxwell’s theory and classical mechanics to get special relativity. So loop quantum gravity is an attempt to do the same thing: take seriously general relativity, take seriously quantum mechanics, and out of that, bring them together, even if this means a theory where there’s no time, no fundamental time, so we have to rethink the world without basic time. The theory, on the one hand, is very conservative, because it’s based on what we know. But it’s totally radical because it forces us to change something big in our way of thinking.

String theorists think differently. They say well, let’s go out to infinity, where somehow the full covariance of general relativity is not there. There we know what is time, we know what is space, because we’re at asymptotic distances, at large distances. The theory’s wilder, more different, more new, but in my opinion, it’s more based on the old conceptual structure. It’s attached to the old conceptual structure, and not attached to the novel content of the theories that have proven empirically successful. That’s how my way of reading science matches with the specifics of the research work that I do, and specifically of loop quantum gravity.

Of course we don’t know. I want to be very clear. I think that string theory’s a great attempt to go ahead, done by great people. My only polemical attitude toward string theory is when I hear, though I hear it less and less now, ‘oh, we know the solution already; certainly it’s string theory.’ That’s certainly wrong and false. What is true is that that’s a good set of ideas; loop quantum gravity is another good set of ideas. We have to wait and see which one of the theories turns out to work, and ultimately to be empirically confirmed.

Should a scientist think about philosophy, or not?

This may take me to another point: should a scientist think about philosophy, or not? It’s sort of the fashion today to discard philosophy, to say now we have science, we don’t need philosophy. I find this attitude very naïve for two reasons. One is historical. Just look back. Heisenberg would never have done quantum mechanics without being full of philosophy. Einstein would never have done relativity without having read all the philosophers and having a head full of philosophy. Galileo would never have done what he did without having a head full of Plato. Newton thought of himself as a philosopher, and started by discussing this with Descartes, and had strong philosophical ideas.

But even Maxwell, Boltzmann, I mean, all the major steps of science in the past were done by people who were very aware of methodological, fundamental, even metaphysical questions being posed. When Heisenberg does quantum mechanics, he is in a completely philosophical frame of mind. He says that in classical mechanics there’s something philosophically wrong, there’s not enough emphasis on empiricism. It is exactly this philosophical reading of his that allows him to construct this fantastically new physical theory, scientific theory, which is quantum mechanics.

Paul Dirac and Richard Feynman. From The Strangest Man. Photograph by A. John Coleman, courtesy AIP Emilio Segre Visual Archives, Physics Today collection

The divorce that ended this strict dialogue between philosophers and scientists is very recent; somehow it came after the war, in the second half of the 20th century. It has worked because in the first half of the 20th century, people were so smart. Einstein and Heisenberg and Dirac and company put together relativity and quantum theory and did all the conceptual work. The physics of the second half of the century has been, in a sense, a physics of application of the great ideas of the people of the ’30s, of the Einsteins and the Heisenbergs.

When you want to apply these ideas, when you do atomic physics, you need less conceptual thinking. But now we are back to the basics, in a sense. When we do quantum gravity it’s not just application. I think that the scientists who say ‘I don’t care about philosophy’, it’s not true they don’t care about philosophy, because they have a philosophy. They are using a philosophy of science. They are applying a methodology. They have a head full of ideas about what philosophy they’re using; they’re just not aware of it, and they take it for granted, as if this were obvious and clear, when it’s far from obvious and clear. They are just taking a position without knowing that there are many other possibilities around that might work much better, and might be more interesting for them.

I think there is narrow-mindedness, if I might say so, in many of my colleague scientists who don’t want to learn what is being said in the philosophy of science. There is also a narrow-mindedness, probably, in a lot of areas of philosophy and the humanities, in which they don’t want to learn about science, which is even more narrow-minded. Somehow cultures should reach out to one another and enlarge one another. I’m pushing at an open door if I say it here, but restricting our vision of reality today to just the core content of science or the core content of the humanities is just being blind to the complexity of reality, which we can grasp from a number of points of view, points of view which talk to one another enormously, and which I believe can teach one another enormously."

Carlo Rovelli, Italian theoretical physicist, working on quantum gravity and on the foundations of spacetime physics. He is professor of physics at the University of the Mediterranean in Marseille, France and a member of the Institut Universitaire de France. To see the whole video and read the transcript, click Science Is Not About Certainty: A Philosophy Of Physics, Edge, May 24, 2012. (Illustration source)

See also:

Raphael Bousso: Thinking About the Universe on the Larger Scales
David Deutsch: A new way to explain explanation
Galileo and the relationship between the humanities and the sciences
The Relativity of Truth - a brief résumé, Lapidarium notes
Philosophy vs science: which can answer the big questions of life?
☞ ‘Cognition, perception, relativity’ tag on Lapidarium notes

Mar 26th, Mon

Science historian George Dyson: Unravelling the digital code
                                                     George Dyson (Photo: Wired)

"It was not made for those who sell oil or sardines."

— G. W. Leibniz, ca. 1674, on his calculating machine

A universe of self-replicating code

Digital organisms, while not necessarily any more alive than a phone book, are strings of code that replicate and evolve over time. Digital codes are strings of binary digits — bits. Google is a fantastically large number, so large it is almost beyond comprehension, distributed and replicated across all kinds of hosts. When you click on a link, you are replicating the string of code that it links to. Replication of code sequences isn’t life, any more than replication of nucleotide sequences is, but we know that it sometimes leads to life.

Q [Kevin Kelly]: Are we in that digital universe right now, as we talk on the phone?

George Dyson: Sure. You’re recording this conversation using a digital recorder — into an empty matrix of addresses on a microchip that is being filled up at 44 kilobytes per second. That address space full of numbers is the digital universe.

Q: How fast is this universe expanding?

G.D.: Like our own universe at the beginning, it’s more exploding than expanding. We’re all so immersed in it that it’s hard to perceive. Last time I checked, the digital universe was expanding at the rate of five trillion bits per second in storage and two trillion transistors per second on the processing side. (…)

Q: Where is this digital universe heading?

G.D.: This universe is open to the evolution of all kinds of things. It’s cycling faster and faster. Even with Google and YouTube and Facebook, we can’t consume it all. And we aren’t aware what this space is filling up with. From a human perspective, computers are idle 99 per cent of the time. While they’re waiting for us to come up with instructions, computation is happening without us, as computers write instructions for each other. As Turing showed, this space can’t be supervised. As the digital universe expands, so does this wild, undomesticated side.”

— George Dyson interviewed by Kevin Kelly in Science historian George Dyson: Unravelling the digital code, Wired, Mar 5, 2012.

"Just as we later worried about recombinant DNA, what if these things escaped? What would they do to the world? Could this be the end of the world as we know it if these self-replicating numerical creatures got loose?

But, we now live in a world where they did get loose—a world increasingly run by self-replicating strings of code. Everything we love and use today is, in a lot of ways, self-reproducing exactly as Turing, von Neumann, and Barricelli prescribed. It’s a very symbiotic relationship: the same way life found a way to use the self-replicating qualities of these polynucleotide molecules to the great benefit of life as a whole, there’s no reason life won’t use the self-replicating abilities of digital code, and that’s what’s happening. If you look at what people like Craig Venter and the thousand less-known companies are doing, we’re doing exactly that, from the bottom up. (…)

What’s, in a way, missing in today’s world is more biology of the Internet. More people like Nils Barricelli to go out and look at what’s going on, not from a business or legal point of view, but just to observe what’s going on.

Many of these things we read about in the front page of the newspaper every day, about what’s proper or improper, or ethical or unethical, really concern this issue of autonomous self-replicating codes. What happens if you subscribe to a service and then as part of that service, unbeknownst to you, a piece of self-replicating code inhabits your machine, and it goes out and does something else? Who is responsible for that? And we’re in an increasingly gray zone as to where that’s going. (…)

Why is Apple one of the world’s most valuable companies? It’s not only because their machines are so beautifully designed, which is great and wonderful, but because those machines represent a closed numerical system. And they’re making great strides in expanding that system. It’s no longer at all odd to have a Mac laptop. It’s almost the normal thing.

But I’d like to take this to a different level, if I can change the subject… Ten or 20 years ago I was preaching that we should look at digital code as biologists: the Darwin Among the Machines stuff. People thought that was crazy, and now it’s firmly the accepted metaphor for what’s going on. And Kevin Kelly quoted me in Wired; he asked me for my last word on what companies should do about this. And I said, “Well, they should hire more biologists.”

But what we’re missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products: the universe of Apple, the universe of Google, the universe of Facebook. But these collectively constitute the digital universe, and we only see it in human terms, asking what it does for us.

We’re missing a tremendous opportunity. We’re asleep at the switch because it’s not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that’s not just a metaphor for something else. It actually is. It’s a physical reality.

We’re still here at the big bang of this thing, and we’re not studying it enough. Who’s the cosmologist really looking at this in terms of what it might become in 10,000 years? What’s it going to be in 100 years? Here we are at the very beginning and we just may simply not be asking the right questions about what’s going on. Try looking at it from the other side, not from our side as human beings. Scientists are the people who can do that kind of thing. You can look at viruses from the point of view of a virus, not from the point of view of someone getting sick.

Very few people are looking at this digital universe in an objective way. Danny Hillis is one of the few people who is. His comment, made exactly 30 years ago in 1982, was that "memory locations are just wires turned sideways in time". That’s just so profound. That should be engraved on the wall. Because we don’t realize that there is this very different universe that does not have the same physics as our universe. It’s completely different physics. Yet, from the perspective of that universe, there is physics, and we have almost no physicists looking at it, as to what it’s like. And if we want to understand the sort of organisms that would evolve in that totally different universe, you have to understand the physics of the world they are in. It’s like looking for life on another planet. Danny has that perspective. Most people just say, “well, a wire is a wire. It’s not a memory location turned sideways in time.” You have to have that sort of relativistic view of things.

We are still so close to the beginning of this explosion that we are still immersed in the initial fireball. Yet, in that short period of time, for instance, it was not long ago that to transfer money electronically you had to fill out paper forms on both ends and then wait a day for your money to be transferred. And in just a few years, a dozen years or so, most of the money in the world came to be moving electronically all the time.

The best example of this is what we call the flash crash of May 6th, two years ago, when suddenly, the whole system started behaving unpredictably. Large amounts of money were lost in milliseconds, and then the money came back, and we quietly (although the SEC held an investigation) swept it under the rug and just said, “well, it recovered. Things are okay.” But nobody knows what happened, or most of us don’t know.

There was a great Dutch documentary—Money and Speed: Inside the Black Box—where they spoke to someone named Eric Scott Hunsader who actually had captured the data on a much finer time scale, and there was all sorts of very interesting stuff going on. But it’s happening so quickly that it’s below what our normal trading programs are able to observe, they just aren’t accounting for those very fast things. And this could be happening all around us—not just in the world of finance. We would not necessarily even perceive it, that there’s a whole world of communication that’s not human communication. It’s machines communicating with machines. And they may be communicating money, or information that has other meaning—but if it is money, we eventually notice it. It’s just the small warm pond sitting there waiting for the spark.

It’s an unbelievably interesting time to be a digital biologist or a digital physicist, or a digital chemist. A good metaphor is chemistry. We’re starting to address code by template, rather than by numerical location—the way biological molecules do.

We’re living in a completely different world. The flash crash was an example: you could have gone out for a cup of coffee and missed the whole thing, and come back and your company lost a billion dollars and got back 999 million, while you were taking your lunch break. It just happened so fast, and it spread so quickly.

So, yes, the fear scenario is there, that some malevolent digital virus could bring down the financial system. But on the other hand, the miracle of this flash crash was not that it happened, but that it recovered so quickly. Yet, in those milliseconds, somebody made off with a lot of money. We still don’t know who that was, and maybe we don’t want to know.

The reason we’re here today (surrounded by this expanding digital universe) is because in 1936, or 1935, this oddball 23-year-old undergraduate student, Alan Turing, developed this theoretical framework to understand a problem in mathematical logic, and the way he solved that problem turned out to establish the model for all this computation. And I believe we would have arrived here, sooner or later, without Alan Turing or John von Neumann, but it was Turing who developed the one-dimensional model, and von Neumann who developed the two-dimensional implementation, for this increasingly three-dimensional digital universe in which everything we do is immersed. And so the next breakthrough in understanding will also, I think, come from some oddball. It won’t be one of our great, known scientists. It’ll be some 22-year-old kid somewhere who makes more sense of this.

But, we’re going back to biology, and of course, it’s impossible not to talk about money, and all these other ways that this impacts our life as human beings. What I was trying to say is that this digital universe really is so different that the physics itself is different. If you want to understand what types of life-like or self-reproducing forms would develop in a universe like that, you actually want to look at the sort of physics and chemistry of how that universe is completely different from ours. An example is how not only its time scale but how time operates is completely different, so that things can be going on in that world in microseconds that suddenly have a real effect on ours.

Again, money is a very good example, because money really is a sort of a gentlemen’s agreement to agree on where the money is at a given time. Banks decide, well, this money is here today and it’s there tomorrow. And when it’s being moved around in microseconds, you can have a collapse, where suddenly you hit the bell and you don’t know where the money is. And then everybody’s saying, “Where’s the money? What happened to it?” And I think that’s what happened. And there are other recent cases where it looks like a huge amount of money just suddenly disappeared, because we lost the common agreement on where it is at an exact point in time. We can’t account for those time periods as accurately as the computers can.

One number that’s interesting, and easy to remember: in the year 1953, there were 53 kilobytes of high-speed memory on planet earth. This is random-access, high-speed memory. Now you can buy those 53 kilobytes for an immeasurably small amount, a thousandth of one cent or something. If you draw the graph, it’s a very nice, clean graph. That’s sort of Moore’s Law; it keeps doubling. It has a doubling time that’s surprisingly short, and no end in sight, no matter what the technology does. We’re doubling the number of bits in an extraordinarily short time.
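
Dyson’s 1953 figure invites a quick back-of-envelope check. The sketch below starts from his "53 kilobytes on planet earth in 1953" and assumes, purely for illustration, a fixed doubling time of two years (that number is not from the interview); it just shows how a clean exponential turns a few kilobytes into astronomically many bits.

```python
# Back-of-envelope extrapolation from Dyson's "53 kilobytes in 1953".
# The 2-year doubling time is an illustrative assumption, not a quoted figure.

START_YEAR = 1953
START_BITS = 53 * 1024 * 8        # 53 kilobytes expressed in bits
DOUBLING_TIME_YEARS = 2.0         # assumed, for illustration only

def bits_in_year(year: int) -> float:
    """Extrapolate total random-access bits assuming a fixed doubling time."""
    doublings = (year - START_YEAR) / DOUBLING_TIME_YEARS
    return START_BITS * 2 ** doublings

for year in (1953, 1973, 1993, 2013):
    print(f"{year}: ~{bits_in_year(year):.3e} bits")
```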

And we have never seen that. Or I mean, we have seen numbers like that, in epidemics or chain reactions, and there’s no question it’s a very interesting phenomenon. But still, it’s very hard not to just look at it from our point of view. What does it mean to us? What does it mean to my investments? What does it mean to my ability to have all the music I want on my iPhone? That kind of thing. But there’s something else going on. We’re seeing a fraction of one percent of it, and there’s this other 99.99 percent that people just aren’t looking at.

The beginning of this was driven by two problems. The problem of nuclear weapons design, and the problem of code breaking were the two drivers of the dawn of this computational universe. There were others, but those were the main ones.

What’s the driver today? You want one word? It’s advertising. And, you may think advertising is very trivial, and of no real importance, but I think it’s the driver. If you look at what most of these codes are doing, they’re trying to get the audience, trying to deliver the audience. The money is flowing as advertising.

And it is interesting that Samuel Butler imagined all this in 1863, and then in his book Erewhon. And then, in 1901, before he died, he wrote a draft for “Erewhon Revisited.” In there, he called out advertising, saying that advertising would be the driving force of these machines evolving and taking over the world. Even then, at the close of the 19th century in England, he saw advertising as the way we would grant power to the machines.

If you had to say what’s the most powerful algorithm set loose on planet earth right now? Originally, yes, it was the Monte Carlo code for doing neutron calculations. Now it’s probably the AdWords algorithm. And the two are related: if you look at the way AdWords works, it is a Monte Carlo process. It’s a sort of statistical sampling of the entire search space, and a monetizing of it, which as we know, is a brilliant piece of work. And that’s not to diminish all the other great codes out there.
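
The comparison Dyson draws is between Monte Carlo methods (random sampling used to estimate a quantity you cannot compute directly, as in the early neutron-transport codes) and statistical sampling of a search and ad space. The snippet below is only the textbook Monte Carlo idea, estimating pi by random sampling; it is not a description of how AdWords actually works.

```python
import random

# Textbook Monte Carlo: estimate pi by sampling random points in the unit square
# and counting how many fall inside the quarter circle. This only illustrates the
# "statistical sampling of a space" idea; it says nothing about Google's actual
# auction or ranking algorithms.

def estimate_pi(samples: int) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))   # converges toward 3.14159... as samples grow
```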

We live in a world where we measure numbers of computers in billions, and numbers of what we call servers, which are the equivalent of what in the old days would have been called mainframes. Those are in the millions, hundreds of millions.

Two of the pioneers of this, to single out only two, were John von Neumann and Alan Turing. If they were here today, Turing would be 100. Von Neumann would be 109. I think they would understand what’s going on immediately; it would take them a few minutes, if not a day, to figure out what was going on. And they both died working on biology, and I think they would be immediately fascinated by the way biological code and digital code are now intertwined. Von Neumann’s consuming passion at the end was self-reproducing automata. And Alan Turing was interested in the question of how molecules could self-organize to produce organisms.

They would be, on the other hand, astonished that we’re still running their machines, that we don’t have different computers. We’re still just running your straight von Neumann/Turing machine with no real modification. So they might not find our computers all that interesting, but they would be diving into the architecture of the Internet, and looking at it.

In both cases, they would be amazed by the direct connection between the code running on computers and the code running in biology—that all these biotech companies are directly reading and writing nucleotide sequences in and out of electronic memory, with almost no human intervention. That’s more or less completely mechanized now, so there’s direct translation, and once you translate to nucleotides, it’s a small step, a difficult step, but, an inevitable step to translate directly to proteins. And that’s Craig Venter’s world, and it’s a very, very different world when we get there.

The question of how and when humans are going to expand into the universe, the space travel question, is, in my view, almost rendered obsolete by this growth of a digitally-coded biology, because those digital organisms—maybe they don’t exist now, but as long as the system keeps going, they’re inevitable—can travel at the speed of light. They can propagate. They’re going to be so immeasurably far ahead that maybe humans will be dragged along with it.

But while our digital footprint is propagating at the speed of light, we’re having very big trouble even getting to the eleven kilometers per second it takes to get into low Earth orbit. The digital world is clearly winning on that front. And that’s for the distant future. But it changes the game of launching things, if you no longer have to launch physical objects in order to transmit life.”

George Dyson, author and historian of technology whose publications broadly cover the evolution of technology in relation to the physical environment and the direction of society, A universe of self-replicating code, Edge, Mar 26, 2012.

See also:

Jameson Dungan on information and synthetic biology
Vlatko Vedral: Decoding Reality: the universe as quantum information
Rethinking “Out of Africa”: A Conversation with Christopher Stringer (2011)
A Short Course In Synthetic Genomics, The Edge Master Class with George Church & Craig Venter (2009)
Eat Me Before I Eat You! A New Foe For Bad Bugs: A Conversation with Kary Mullis (2010)
Mapping The Neanderthal Genome. A Conversation with Svante Pääbo (2009)
Engineering Biology”: A Conversation with Drew Endy (2008)
☞ “Life: A Gene-Centric View”: A Conversation in Munich with Craig Venter & Richard Dawkins (2008)
Ants Have Algorithms: A Talk with Ian Couzin (2008)
Life: What A Concept, The Edge Seminar, Freeman Dyson, J. Craig Venter, George Church, Dimitar Sasselov, Seth Lloyd, Robert Shapiro (2007)
Code II J. Doyne Farmer v. Charles Simonyi (1998)
Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Feb
23rd
Thu
permalink

The Scale of the Universe ☞ Planck length up to the entire universe (interactive visualisation)

"Space is big. You just won’t believe how vastly, hugely, mind- bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space."

Douglas Adams, The Hitchhiker’s Guide to the Galaxy


Click image to explore

Zoom from the edge of the universe to the quantum foam of spacetime and learn the scale of things.

The Powers of Ten (1977)

"Powers of Ten takes us on an adventure in magnitudes. Starting at a picnic by the lakeside in Chicago, this famous film transports us to the outer edges of the universe. Every ten seconds we view the starting point from ten times farther out until our own galaxy is visible only a s a speck of light among many others. Returning to Earth with breathtaking speed, we move inward- into the hand of the sleeping picnicker- with ten times more magnification every ten seconds. Our journey ends inside a proton of a carbon atom within a DNA molecule in a white blood cell."

See also:

The Scale of the Universe (documentary)
Planck scale, Wiki
Observable universe, Wiki
The Universe By Numbers - The Physics of the Universe
The Scale of the Universe, University of California, San Diego Center for Astrophysics & Space Sciences
☞ Alasdair Wilkins, These are the biggest numbers in the universe
Universe tag on Lapidarium notes
Universe tag on Lapidarium

Jan
22nd
Sun
permalink

What Happened Before the Big Bang? The New Philosophy of Cosmology

    

Tim Maudlin: “There are problems that are fairly specific to cosmology. Standard cosmology, or what was considered standard cosmology twenty years ago, led people to conclude that the universe that we see around us began in a big bang, or put another way, in some very hot, very dense state. And if you think about the characteristics of that state, in order to explain the evolution of the universe, that state had to be a very low entropy state, and there’s a line of thought that says that anything that is very low entropy is in some sense very improbable or unlikely. And if you carry that line of thought forward, you then say “Well gee, you’re telling me the universe began in some extremely unlikely or improbable state,” and you wonder whether there is any explanation for that. Is there any principle that you can use to account for the big bang state?

This question of accounting for what we call the “big bang state” — the search for a physical explanation of it — is probably the most important question within the philosophy of cosmology, and there are a couple different lines of thought about it. One that’s becoming more and more prevalent in the physics community is the idea that the big bang state itself arose out of some previous condition, and that therefore there might be an explanation of it in terms of the previously existing dynamics by which it came about. There are other ideas, for instance that maybe there might be special sorts of laws, or special sorts of explanatory principles, that would apply uniquely to the initial state of the universe.

One common strategy for thinking about this is to suggest that what we used to call the whole universe is just a small part of everything there is, and that we live in a kind of bubble universe, a small region of something much larger. And the beginning of this region, what we call the big bang, came about by some physical process, from something before it, and that we happen to find ourselves in this region because this is a region that can support life. The idea being that there are lots of these bubble universes, maybe an infinite number of bubble universes, all very different from one another. Part of the explanation of what’s called the anthropic principle says, “Well now, if that’s the case, we as living beings will certainly find ourselves in one of those bubbles that happens to support living beings.” That gives you a kind of account for why the universe we see around us has certain properties. (…)

Newton would call what he was doing natural philosophy; that’s actually the name of his book: “Mathematical Principles of Natural Philosophy.” Philosophy, traditionally, is what everybody thought they were doing. It’s what Aristotle thought he was doing when he wrote his book called Physics. So it’s not as if there’s this big gap between physical inquiry and philosophical inquiry. They’re both interested in the world on a very general scale, and the group of people who work on the foundations of physics is about equally divided between people who live in philosophy departments, people who live in physics departments, and people who live in mathematics departments.

Q: In May of last year Stephen Hawking gave a talk for Google in which he said that philosophy was dead, and that it was dead because it had failed to keep up with science, and in particular physics. Is he wrong or is he describing a failure of philosophy that your project hopes to address?

Maudlin: Hawking is a brilliant man, but he’s not an expert in what’s going on in philosophy, evidently. Over the past thirty years the philosophy of physics has become seamlessly integrated with the foundations of physics work done by actual physicists, so the situation is actually the exact opposite of what he describes. I think he just doesn’t know what he’s talking about. I mean there’s no reason why he should. Why should he spend a lot of time reading the philosophy of physics? I’m sure it’s very difficult for him to do. But I think he’s just … uninformed. (…)

Q: Do you think that physics has neglected some of these foundational questions as it has become, increasingly, a kind of engine for the applied sciences, focusing on the manipulation, rather than say, the explanation, of the physical world? 

Maudlin: Look, physics has definitely avoided what were traditionally considered to be foundational physical questions, but the reason for that goes back to the foundation of quantum mechanics. The problem is that quantum mechanics was developed as a mathematical tool. Physicists understood how to use it as a tool for making predictions, but without an agreement or understanding about what it was telling us about the physical world. And that’s very clear when you look at any of the foundational discussions. This is what Einstein was upset about; this is what Schrodinger was upset about.

Quantum mechanics was merely a calculational technique that was not well understood as a physical theory. Bohr and Heisenberg tried to argue that asking for a clear physical theory was something you shouldn’t do anymore. That it was something outmoded. And they were wrong, Bohr and Heisenberg were wrong about that. But the effect of it was to shut down perfectly legitimate physics questions within the physics community for about half a century. And now we’re coming out of that, fortunately.

Q: And what’s driving the renaissance?

Maudlin: Well, the questions never went away. There were always people who were willing to ask them. Probably the greatest physicist in the last half of the twentieth century, who pressed very hard on these questions, was John Stewart Bell. So you can’t suppress it forever, it will always bubble up. It came back because people became less and less willing to simply say, “Well, Bohr told us not to ask those questions,” which is sort of a ridiculous thing to say.

Q: Are the topics that have scientists completely flustered especially fertile ground for philosophers? For example I’ve been doing a ton of research for a piece about the James Webb Space Telescope, the successor to the Hubble Space Telescope, and none of the astronomers I’ve talked to seem to have a clue as to how to use it to solve the mystery of dark energy. Is there, or will there be, a philosophy of dark energy in the same way that a body of philosophy seems to have flowered around the mysteries of quantum mechanics?

Maudlin: There will be. There can be a philosophy of anything really, but it’s perhaps not as fancy as you’re making it out. The basic philosophical question, going back to Plato, is “What is x?” What is virtue? What is justice? What is matter? What is time? You can ask that about dark energy - what is it? And it’s a perfectly good question.

There are different ways of thinking about the phenomena which we attribute to dark energy. Some ways of thinking about it say that what you’re really doing is adjusting the laws of nature themselves. Some other ways of thinking about it suggest that you’ve discovered a component or constituent of nature that we need to understand better, and seek the source of. So, the question — What is this thing fundamentally? — is a philosophical question, and is a fundamental physical question, and will lead to interesting avenues of inquiry.

Q: One example of philosophy of cosmology that seems to have trickled out to the layman is the idea of fine tuning - the notion that in the set of all possible physics, the subset that permits the evolution of life is very small, and that from this it is possible to conclude that the universe is either one of a large number of universes, a multiverse, or that perhaps some agent has fine tuned the universe with the expectation that it generate life. Do you expect that idea to have staying power, and if not what are some of the compelling arguments against it?

Maudlin: A lot of attention has been given to the fine tuning argument. Let me just say first of all that the fine tuning argument as you state it, which is a perfectly correct statement of it, depends upon making judgments about the likelihood, or probability of something. Like, “how likely is it that the mass of the electron would be related to the mass of the proton in a certain way?” Now, one can first be a little puzzled by what you mean by “how likely” or “probable” something like that is. You can ask how likely it is that I’ll roll double sixes when I throw dice, but we understand the way you get a handle on the use of probabilities in that instance. It’s not as clear how you even make judgments like that about the likelihood of the various constants of nature (and so on) that are usually referred to in the fine tuning argument.

Now let me say one more thing about fine tuning. I talk to physicists a lot, and none of the physicists I talk to want to rely on the fine tuning argument to argue for a cosmology that has lots of bubble universes, or lots of worlds. What they want to argue is that this arises naturally from an analysis of the fundamental physics, that the fundamental physics, quite apart from any cosmological considerations, will give you a mechanism by which these worlds will be produced, and a mechanism by which different worlds will have different constants, or different laws, and so on.  If that’s true, then if there are enough of these worlds, it will be likely that some of them have the right combination of constants to permit life. But their arguments tend not to be “we have to believe in these many worlds to solve the fine tuning problem,” they tend to be “these many worlds are generated by physics we have other reasons for believing in.”

If we give up on that, and it turns out there aren’t these many worlds, that physics is unable to generate them, then it’s not that the only option is that there was some intelligent designer. It would be a terrible mistake to think that those are the only two ways things could go. You would have to again think hard about what you mean by probability, and about what sorts of explanations there might be. Part of the problem is that right now there are just way too many freely adjustable parameters in physics. Everybody agrees about that. There seem to be many things we call constants of nature that you could imagine setting at different values, and most physicists think there shouldn’t be that many, that many of them are related to one another.

Physicists think that at the end of the day there should be one complete equation to describe all physics, because any two physical systems interact and physics has to tell them what to do. And physicists generally like to have only a few constants, or parameters of nature. This is what Einstein meant when he famously said he wanted to understand what kind of choices God had (using his metaphor), how free his choices were in creating the universe, which is just asking how many freely adjustable parameters there are. Physicists tend to prefer theories that reduce that number, and as you reduce it, the problem of fine tuning tends to go away. But, again, this is just stuff we don’t understand well enough yet.

Q: I know that the nature of time is considered to be an especially tricky problem for physics, one that physicists seem prepared, or even eager, to hand over to philosophers. Why is that?

Maudlin: That’s a very interesting question, and we could have a long conversation about that. I’m not sure it’s accurate to say that physicists want to hand time over to philosophers. Some physicists are very adamant about wanting to say things about it; Sean Carroll for example is very adamant about saying that time is real. You have others saying that time is just an illusion, that there isn’t really a direction of time, and so forth. I myself think that all of the reasons that lead people to say things like that have very little merit, and that people have just been misled, largely by mistaking the mathematics they use to describe reality for reality itself. If you think that mathematical objects are not in time, and mathematical objects don’t change — which is perfectly true — and then you’re always using mathematical objects to describe the world, you could easily fall into the idea that the world itself doesn’t change, because your representations of it don’t.

There are other, technical reasons that people have thought that you don’t need a direction of time, or that physics doesn’t postulate a direction of time. My own view is that none of those arguments are very good. To the question as to why a physicist would want to hand time over to philosophers, the answer would be that physicists for almost a hundred years have been dissuaded from trying to think about fundamental questions. I think most physicists would quite rightly say “I don’t have the tools to answer a question like ‘what is time?’ - I have the tools to solve a differential equation.” The asking of fundamental physical questions is just not part of the training of a physicist anymore.

Q: I recently came across a paper about Fermi’s Paradox and Self-Replicating Probes, and while it had kind of a science fiction tone to it, it occurred to me as I was reading it that philosophers might be uniquely suited to speculating about, or at least evaluating the probabilistic arguments for the existence of life elsewhere in the universe. Do you expect philosophers of cosmology to enter into those debates, or will the discipline confine itself to issues that emerge directly from physics?

Maudlin: This is really a physical question. If you think of life, of intelligent life, it is, among other things, a physical phenomenon — it occurs when the physical conditions are right. And so the question of how likely it is that life will emerge, and how frequently it will emerge, does connect up to physics, and does connect up to cosmology, because when you’re asking how likely it is that somewhere there’s life, you’re talking about the broad scope of the physical universe. And philosophers do tend to be pretty well schooled in certain kinds of probabilistic analysis, and so it may come up. I wouldn’t rule it in or rule it out.

I will make one comment about these kinds of arguments which seems to me to somehow have eluded everyone. When people make these probabilistic equations, like the Drake Equation, which you’re familiar with — they introduce variables for the frequency of earth-like planets, for the evolution of life on those planets, and so on. The question remains as to how often, after life evolves, you’ll have intelligent life capable of making technology.
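
For reference (this is the standard textbook form of the equation, not something Maudlin spells out in the interview), the Drake equation simply multiplies those factors together:

```latex
% Standard textbook form of the Drake equation (not from the interview itself).
% N: number of detectable civilizations in the galaxy
% R_*: rate of star formation; f_p: fraction of stars with planets;
% n_e: habitable planets per planetary system; f_l: fraction on which life arises;
% f_i: fraction that evolve intelligence; f_c: fraction producing detectable technology;
% L: lifetime over which such a civilization remains detectable.
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Maudlin's point in the next paragraphs bears mainly on the f_i factor: the single data point we have does not obviously justify assigning it a high value.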

What people haven’t seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It’s not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that’s not true.

Obviously it doesn’t matter that much, if you’re a beetle, that you be really smart. If it did, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there’s a high probability that evolution on another planet would lead to technological intelligence. There is just too much we don’t know.”

Tim Maudlin, (B.A. Yale, Physics and Philosophy; Ph.D. Pittsburgh, History and Philosophy of Science), interviewed by Ross Andersen, What Happened Before the Big Bang? The New Philosophy of Cosmology, The Atlantic, Jan 2012.

Illustrations: 1 - Cambridge Digital Gallery Newton Collection, 2 - Aristotle, Ptolemy, and Copernicus discussing astronomy, published in 1632, Library of Congress.

See also:

The Concept of Laws. The special status of the laws of mathematics and physics
Raphael Bousso: Thinking About the Universe on the Larger Scales
Stephen Hawking on the universe’s origin
Universe tag on Lapidarium notes
Universe tag on Lapidarium

Nov
22nd
Tue
permalink

Raphael Bousso: Thinking About the Universe on the Larger Scales

        

“‘The far-reaching questions are things like how do we unify all the laws of nature, how do you do quantum gravity, how do you understand how gravitation and quantum mechanics fit together, how does that fit in with all the other matter and forces that we know?’ That’s a really far-reaching and important question. 

Another far-reaching question is "what does the world look like on the largest scales?" What does the universe look like on the largest scales? How special is the part of the universe that we see?  Are there other possibilities?  Those questions are connected with each other, but in order to try to answer them, we have to try to come up with specific models, with specific ways to think about these questions, with ways to break them down into pieces, and of course, most importantly, with ways to relate them to observation and experiment. 

One important hint that came along on the theoretical side a long time ago was string theory, which wasn’t invented for these sorts of deep-sounding questions. It was invented to understand something about the strong force, but then it took on its own life and became this amazing structure that could be explored and which started spitting out these answers to questions that you hadn’t even thought of asking yet, such as quantum gravity.  It started doing quantum gravity for you. (…)

Another hint that helps us break things up and bring the questions down to accessible levels is, of course, observational: what do we see when we look out the window? The one thing we see that’s really remarkable, remarkable in the way that the question of why the sky is not bright at night is remarkable (it sounds stupid, but when you really think about it, it’s a profound question, and it needs an explanation: “Why isn’t there a star everywhere you look?”), is a similar kind of question: "Why is the universe so large?" It’s actually extremely remarkable that the universe is so large, from the viewpoint of fundamental physics. A lot of amazing things have to happen for the universe to not be incredibly small, and I can go into that.

One of the things that has to happen is that the energy of empty space has to be very, very small for the universe to be large, and in fact, just by looking out the window and seeing that you can see a few miles out, it’s an experiment that already tells you that the energy of empty space is a ridiculously small number, 0.000 and then dozens of zeros and then a 1.  Just by looking out the window you learn that.  

The funny thing is that when you calculate what the energy of empty space should be using theories you have available, really well-tested stuff that’s been tested in accelerators, like particle theory, the standard model, things that we know work, you use that to estimate the energy of empty space, and you can’t calculate it exactly on the dot. But you can calculate what the size of different contributions is, and they’re absolutely huge. They should be much larger than what you already know it can possibly be, again, not just by a factor of 10 or 100, but by a factor of billions, of billions of billions of billions.
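
To put rough numbers on the mismatch Bousso is describing (these are the standard order-of-magnitude figures for the cosmological constant problem, not values quoted in the talk): in Planck units the observed vacuum energy density is of order 10^-122, while a naive quantum-field-theory estimate that cuts the contributions off at the Planck scale is of order one, so the observed value is smaller than the naive expectation by roughly 120 orders of magnitude.

```latex
% Standard order-of-magnitude statement of the cosmological constant problem
% (rounded textbook figures, assuming a Planck-scale cutoff; not numbers from the talk).
\rho_{\Lambda}^{\text{obs}} \sim 10^{-122}\,\rho_{\text{Planck}},
\qquad
\rho_{\Lambda}^{\text{naive}} \sim \rho_{\text{Planck}},
\qquad
\frac{\rho_{\Lambda}^{\text{obs}}}{\rho_{\Lambda}^{\text{naive}}} \sim 10^{-122}.
```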

This requires an explanation. It’s only one of the things that has to go right for the universe to become as large as we see it, but it is one of the most mysterious of the properties that had to turn out right for the universe to become large.

Funnily enough, because we knew that that number, that is the energy of empty space, the weight of empty space, had to be so small, it became the lore within at least a large part of the physics community that it was probably zero for some unknown reason, and that one day we’d wake up and discover why it’s exactly zero. But instead, one day in ‘98 we woke up and discovered that it’s non-zero. Cosmologists had done some experiments that looked at how fast the universe has been expanding at different stages of its life, and they discovered that the universe had started to accelerate its expansion, when we used to think that what it would do is explode at the Big Bang and then kind of get slower and slower in the way that galaxies expand away from each other. Instead, it’s like somebody took their foot off the brakes and stepped on the gas pedal a few billion years ago; the universe is accelerating. That’s exactly what a universe does if the energy of empty space is non-zero and positive, and you can look at how fast the acceleration is happening and deduce the actual value of this number. In the last 13 years a lot of independent observations have come together to corroborate this conclusion.

It’s still true that the main thing that we needed to explain is why the cosmological constant, or the energy of empty space, isn’t huge. But now we also know that the explanation was definitely not going to be that for some symmetry reason that number is exactly zero. And so we needed an explanation that would tell us why that number is not huge, but also not exactly zero.

The amazing thing is that string theory, which wasn’t invented for this purpose, managed to provide such an explanation, and in my mind this is the first serious contact between observation, experiment on the one side, and string theory on the other. It was always interesting to have a consistent theory of quantum gravity, it’s very hard to write that down in the first place, but it turned out that string theory has exactly the kind of ingredients that make it possible to explain why the energy of empty space has this bizarre, very small, but non-zero value. 

I thought I was going to become a mathematician, and then decided to study physics instead, at the last minute, because I realized that I actually cared about understanding Nature, and not just some abstract, perhaps beautiful, but abstract construct. I went to Cambridge, the one in England, for my PhD. I worked with Stephen Hawking on questions of quantum properties of black holes, and how they might interplay with early universe cosmology.  (…)

Another topic that I started thinking about was trying to understand the small but non-zero value of the cosmological constant, energy of empty space, or as people like to call it, dark energy.  I worked on that subject with Joe Polchinski, at KITP, in Santa Barbara, and we realized that string theory offers a way of understanding this, and I would argue that that is the leading explanation currently of this mysterious problem. (…)

I don’t do experiments in the sense that I would walk into a lab and start connecting wires to something. But it matters tremendously to me that the theory that I work on is supposed to actually explain something about Nature. The problem is that as physics becomes more highly developed, we start asking questions which, for technological reasons, are not in the realm of day-to-day experimental feedback. We can’t ask about quantum gravity and expect at the same time to be getting some analog of the spectroscopic data that in the late 19th century fed the quest for quantum mechanics. And I think it is a perfectly reasonable reaction to say, “Well, in that case I think that the subject is too risky to work on.” But I think it’s also a reasonable reaction to say, “Well, but the question, it’s obviously a sensible one.” It’s clearly important to understand how to reconcile quantum mechanics and general relativity. They’re both great theories, but they totally contradict each other, and there are many reasons to believe that by understanding how they fit together we will learn very profound things about how Nature works. Now, it could be that we are not smart enough to do this, in particular without constant feedback from experiments, but we could have been pessimistic at so many junctures in the past, and we found a way around.

I don’t think that we’re going to understand a lot about quantum gravity by building more particle accelerators.  We’ll understand a lot of other things, even a few things about quantum gravity, but rather indirectly. But we’ll look elsewhere, we’ll look at cosmological experiments, we’ll use the universe to tell us about very high energies. We’ll come up with ideas that we can’t even dream about right now.  I’m always in awe of the inventiveness of my experimental colleagues, and I don’t doubt that they will deliver for us eventually. 

It has been said that it’s been a golden age for cosmology in the last 15 years or so, and it’s true.  I was very lucky with timing. When I was a graduate student, the COBE satellite was launched, and started flying and returning data, and that really marked the beginning of an era where cosmology was no longer the sort of subject where there were maybe one or two numbers to measure and people had uncertainties on say how fast the universe expands. They couldn’t even agree on how fast galaxies are moving away from each other.

And from this, we move to a data-rich age where you have unbelievably detailed information about how matter is distributed in the universe, how fast the universe is expanding, not just right now, but the whole expansion history, how fast it was expanding at earlier times, and so on. Things were measured that seemed out of reach just a few years earlier, and so indeed it’s no longer possible to look down on cosmology as this sort of hand-waving subject where you can say almost anything and never be in conflict with the data. In fact, a lot of theories have gone down the road of being eliminated by data in the past 15 years or so, and several more are probably going to go down that road pretty soon. (…)

Inflation looks really good. It’s not like we have a smoking-gun confirmation of it, but it has passed so many tests, and could have been ruled out quite a few times by now, that I would say it is looking really interesting right now.

Inflation comes in many detailed varieties, but it does make a number of rather generic predictions, and unless you work very hard to avoid them, they come with pretty much every inflation model you grab off the shelf. One of those predictions is that the spatial geometry of the universe would be flat.  It should be the kind of geometry that you learn about in high school as opposed to the weird kind of geometry that mathematicians study in university, and that has turned out to be the case. To within a percent precision,  we now know that the universe is spatially flat. Inflation predicts a particular pattern of perturbations in the sky, and again, to the extent that we have the data, and we have very precise data by now, there was plenty of opportunity to rule out that prediction, but inflation still stands.  So there are a number of general predictions that inflation makes which have held up very well, but we’re not yet at a point where we can say, it’s this particular make and model of inflation that is the right one, and not this other one.  We’re zooming in.  Some types of inflation have been ruled out, large classes of models have been ruled out, but we haven’t zoomed in on the one right answer, and that might still take a while, I would expect. 

I was saying that string theory has in a way surprised us by being able to solve a problem that other theories, including some that were invented for that purpose alone, had not been able to address, i.e. the problem of why empty space weighs so little, why there is so little dark energy. The way that string theory does this is very similar to the way that we can explain the enormous variety of things that we see when we look at the chair, the table, and the sofa in this room. What are these things? 

They’re basically a few basic ingredients, electrons, quarks, and photons. You’ve got five different particles, and you put them together, and now you’ve got lots and lots of these particles. There are very few fundamental ingredients, but you have many copies of them. You have many quarks, you have many electrons, and when you put them together you have a huge number of possibilities of what you can make.  It’s just like with a big box of Legos, there are lots of different things you can build out of that.  With a big box of quarks and electrons you can build a table if you want, or you can build a chair if you want.  It’s your choice.  And strictly speaking, if I take one atom and I move it over here to a slightly different place on this chair, I’ve built a different object. These objects in technical lingo will be called solutions of a certain theory called the standard model. If I have a block of iron and I move an atom over there, it’s a different solution of the standard model.

The fact that there are innumerably many different solutions of the standard model does not of course mean that the standard model of particle physics (this triumph of human thinking) is somehow unbelievably complicated, or that it’s a theory of anything, or that it has no predictive power; it just means that it is rich enough to accommodate the rich phenomenology we actually see in nature, while at the same time starting from a very simple setup.  There are only certain quarks. There is only one kind of electron.  There are only certain ways you can put them together, and you cannot make arbitrary materials with them.  There are statistical laws that govern how very large numbers of atoms behave, so even though things look like they get incredibly complicated, actually they start simplifying again when you get to really large numbers.

In string theory we’re doing a different kind of building of iron blocks. String theory is a theory that wants to live in ten dimensions, nine spatial dimensions and one time. We live in three spatial dimensions and one time, or at least so it seems to us. And this used to be viewed as a little bit of an embarrassment for string theory, not fatal, because it’s actually fairly easy to imagine how some of those spatial dimensions could be curled up into circles so small that they wouldn’t be visible even under our best microscopes.  But it might have seemed nicer if the theory had just matched up directly with observation.

It matches up with observation very nicely when you start realizing that there are many different ways to curl up the six unwanted dimensions. How do you curl them up?  Well, it’s not like they just bend themselves into some random shape.  They get shaped into a small bunch of circles, whatever shape they want to take, depending on what matter there is around. 

Just as the shape of your Lego car depends on how you put the pieces together, and the shape of this chair depends on how you put the atoms in it together, the shape of the extra dimensions depends on how you put certain fundamental string theory objects together. Now, string theory actually is even more rigorous about what kinds of fundamental ingredients it allows you to play with than the Lego Company or the standard model. It allows you to play with fluxes, D-branes, and strings, and these are objects that we didn’t put into the theory; the theory gives them to us and says, “This is what you get to play with.” But depending on how strings and other objects called D-branes and fluxes are arranged in the extra six dimensions, these six dimensions take on a different shape. In effect, this means that there are many different ways of making a three-dimensional world: just as there are many ways of building a block of iron, or a Lego car, there are many different ways of making a three-plus-one dimensional-seeming world.

Of course, none of these worlds are truly three-plus-one dimensional.  If you could build a strong enough accelerator, you could see all these extra dimensions.  If you could build an even better accelerator, you might be able to even manipulate them and make a different three-plus-one dimensional world in your lab.  But naturally you would expect that this happens at energy scales that are currently and probably for a long time inaccessible to us.  But you have to take into account the fact that string theory has this enormous richness in how many different three-plus-one dimensional worlds it can make.

Joe Polchinski and I did an estimate, and we figured that there should be not millions or billions of different ways of making a three-plus-one dimensional world, but ten to the hundreds, maybe ten to the five hundred different ways of doing this. This is interesting for a number of reasons, but the reason that seemed the most important to us is that it implies that string theory can help us understand why the energy of the vacuum is so small.  Because, after all, what we call “the vacuum" is simply a particular three-plus-one dimensional world, what that one looks like when it’s empty. And what that one looks like when it’s empty is basically, it still has all the effects from all this stuff that you have in the extra dimensions, all these choices you have there about what to put. 

For every three-plus-one dimensional world, you expect that in particular the energy of the vacuum is going to be different; the amount of dark energy, or the cosmological constant, is going to be different. And so if you have ten to the five hundred ways of making a three-plus-one dimensional world, then in some of them, just by accident, the energy of the vacuum is going to be incredibly close to zero.
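
A back-of-envelope version of why 10^500 vacua help (a standard counting argument, with a roughly even spread of vacuum energies assumed purely for illustration, not a calculation from the talk): even a window as narrow as the observed one still catches an enormous number of vacua.

```latex
% Illustrative counting argument; assumes vacuum energies are spread roughly
% evenly over a Planck-scale range, a simplification made only for this estimate.
N_{\text{vacua}} \sim 10^{500},
\qquad
N\!\left(|\rho_{\Lambda}| \lesssim 10^{-122}\,\rho_{\text{Planck}}\right)
\sim 10^{500} \times 10^{-122} \sim 10^{378}.
```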

The other thing that is going to happen is that in about half of these three-plus-one dimensional worlds, the vacuum is going to have positive energy. So even if you don’t start out the universe in the right one, where by “right one” I mean the one that later develops beings like us to observe it, you could start it out in a pretty much random state, another way of making a three-dimensional world.  What would happen is it would grow very fast, because positive vacuum energy means acceleration, as we observe today in the sky. It would grow very fast, and then by quantum mechanical processes it would decay, and you would see changes in the way that matter is put into these extra dimensions, and locally you would have different three-plus-one dimensional worlds appearing.  (…)

What happens is the universe gets very, very large, all these different vacua, three-dimensional worlds that have positive weight, grow unboundedly, and decay locally, and new vacua appear that try to eat them up, but they don’t eat them up fast enough. So the parents grow faster than the children can eat them up, and so you make everything.  You fill the universe with these different vacua, these different kinds of regions in which empty space has all sorts of different weights. Then you can ask, “Well, in such a theory, where are the observers going to be?" To just give the most primitive answer to this question, it’s actually very useful to remember the story about the holographic principle. (…)

If you have a lot of vacuum energy, then even though the universe globally grows and grows and grows, if you sit somewhere and look around, there is a horizon around you. The region that’s causally connected, where particles can interact and form structure, is inversely related to the amount of vacuum energy you have.  This is why I said earlier that just by looking out the window and seeing that the universe is large, we know that there has to be very little vacuum energy.  If there’s a lot of vacuum energy, the universe is a tiny little box from the viewpoint of anybody sitting in it. The holographic principle tells you that the amount of information in the tiny little box is proportional to the area of its surface.  If the vacuum energy has this sort of typical value that it has in most of the vacua, that surface allows for only a few bits of information. So whatever you think observers look like, they probably are a little bit more complicated than a few bits. 
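
In slightly more quantitative terms (a standard de Sitter estimate in natural units, not a calculation from the talk): the horizon radius shrinks as the vacuum energy density grows, and the holographic bound ties the available information to the horizon area, so a typical, large vacuum energy leaves room for only a handful of bits, while the tiny observed value allows roughly 10^122.

```latex
% de Sitter horizon scale and holographic entropy bound in natural (Planck) units;
% rounded, illustrative relations only. rho_Lambda is the vacuum energy density.
R_{\text{horizon}} \sim \rho_{\Lambda}^{-1/2},
\qquad
S_{\max} = \frac{A}{4} \sim \pi R_{\text{horizon}}^{2} \sim \frac{\pi}{\rho_{\Lambda}}.
```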

And so you can immediately understand that you don’t expect observers to exist in the typical regions.  They will exist in places where the vacuum energy happens to be unusually small due to accidental cancellations between different ingredients in these extra dimensions, and where, therefore, there is room for a lot of complexity.  And so you have a way of understanding both the existence of regions in the universe somewhere with very small vacuum energy, and also of understanding why we live in those particular rather atypical regions. 

What’s interesting about this is that the idea that maybe the universe is a very large multiverse with different kinds of vacua in it was actually thrown around, independently of string theory, for some time, in the context of trying to solve this famous cosmological constant problem. But it’s not actually that easy to get it all right. If you just imagine that the vacuum energy got smaller and smaller and smaller as the universe went on, that the vacua are nicely lined up with each one that you decay into having slightly smaller vacuum energy than the previous one, you cannot solve this problem. You can make the vacuum energy small, but you also empty out the universe. You won’t have any matter in it. (…)

I think that the things that haven’t hit Oprah yet, and which are up and coming are questions like, well, if the universe is really accelerating its expansion, then we know that it’s going to get infinitely large, and that things will happen over and over and over. And just simply because if you have infinitely many tries at something, then every possible outcome is going to happen infinitely many times, no matter how unlikely it is.

This is actually something that predates this string theory multiverse that I was talking about. It’s a very robust question in the sense that even if you believe string theory is a bunch of crap, you still have to worry about this problem because it’s based on just observation. You see that the universe is currently expanding in an accelerated way, and unless there’s some kind of conspiracy that’s going to make this go away very quickly, it means that you have to address this problem of infinities. But the problem becomes even more important in the context of the string landscape because it’s very difficult to make any predictions in the landscape if you don’t tame those infinities.

Why?  Because you want to say that seeing this thing in your experiment is more likely than that thing, so that if you see the unlikely thing, you can rule out your theory the way we always like to do physics. But if both things happen infinitely many times, then on what basis are you going to say that one is more likely than the other? You need to get rid of these infinities.  This is called, at least among cosmologists, the measure problem. It’s probably a really bad name for it, but it stuck.

That’s where a lot of the action is right now.  That’s where a lot of the technical work is happening, that’s where people are, I think, making progress.  I think we’re ready for Oprah, almost, and I think that’s a question where we’re going to come full circle, we’re going to learn something about the really deep questions, about what is the universe like on the largest scales, how does quantum gravity work in cosmology? I don’t think we can fully solve this measure problem without getting to those questions, but at the same time, the measure problem allows us a very specific way in. It’s a very concrete problem. If you have a proposal, you can test it, you can rule it out, or you can keep testing it if it still works, and by looking at what works, by looking at what doesn’t conflict with observation, by looking at what makes predictions that seem to be in agreement with what we see, we’re actually learning something about the structure of quantum gravity.  

So I think that it’s currently a very fruitful direction. It’s a hard problem, because you don’t have a lot to go by. It’s not like it’s an incremental, tiny little step.  Conceptually it’s a very new and difficult problem. But at the same time it’s not that hard to state, and it’s remarkably difficult to come up with simple guesses for how to solve it that you can’t immediately rule out. And so we’re at least in the lucky situation that there’s a pretty fast filter. You don’t have a lot of proposals out there that have even a chance of working.

The thing that’s really amazing, at least to me, is in the beginning we all came from different directions at this problem, we all had our different prejudices. Andrei Linde had some ideas, Alan Guth had some ideas, Alex Vilenkin had some ideas. I thought I was coming in with this radically new idea that we shouldn’t think of the universe as existing on this global scale that no one observer can actually see, that it’s actually important to think about what can happen in the causally connected region to one observer. What can you do in any experiment that doesn’t actually conflict with the laws of physics or require superluminal propagation? We have to ask questions in a way that conforms to the laws of physics if we want to get sensible answers. (…)

A lot of things have now happened that didn’t have to happen, a lot of things have happened that give us some confidence that we’re on to something, and at the same time we’re learning something about how to think about the universe on the larger scales.”

Raphael Bousso, theoretical physicist and string theorist, currently a professor in the Department of Physics at UC Berkeley, Thinking About the Universe on the Larger Scales, Edge, Oct 24, 2011. (Illustration source)

See also:

The Concept of Laws. The special status of the laws of mathematics and physics
☞ Tim Maudlin, What Happened Before the Big Bang? The New Philosophy of Cosmology, The Atlantic, Jan 2012.
Vlatko Vedral: Decoding Reality: the universe as quantum information
Universe tag on Lapidarium notes
Universe tag on Lapidarium

Sep
5th
Mon
permalink

Quantum minds: Why we think like quarks - ‘To be human is to be quantum’


"The quantum world defies the rules of ordinary logic. Particles routinely occupy two or more places at the same time and don’t even have well-defined properties until they are measured. It’s all strange, yet true - quantum theory is the most accurate scientific theory ever tested and its mathematics is perfectly suited to the weirdness of the atomic world. (…)

Human thinking, as many of us know, often fails to respect the principles of classical logic. We make systematic errors when reasoning with probabilities, for example. Physicist Diederik Aerts of the Free University of Brussels, Belgium, has shown that these errors actually make sense within a wider logic based on quantum mathematics. The same logic also seems to fit naturally with how people link concepts together, often on the basis of loose associations and blurred boundaries. That means search algorithms based on quantum logic could uncover meanings in masses of text more efficiently than classical algorithms.

It may sound preposterous to imagine that the mathematics of quantum theory has something to say about the nature of human thinking. This is not to say there is anything quantum going on in the brain, only that "quantum" mathematics really isn’t owned by physics at all, and turns out to be better than classical mathematics in capturing the fuzzy and flexible ways that humans use ideas. "People often follow a different way of thinking than the one dictated by classical logic," says Aerts. “The mathematics of quantum theory turns out to describe this quite well.”

It’s a finding that has kicked off a burgeoning field known as “quantum interaction”, which explores how quantum theory can be useful in areas having nothing to do with physics, ranging from human language and cognition to biology and economics. (…)

One thing that distinguishes quantum from classical physics is how probabilities work. Suppose, for example, that you spray some particles towards a screen with two slits in it, and study the results on the wall behind. Close slit B, and particles going through A will make a pattern behind it. Close A instead, and a similar pattern will form behind slit B. Keep both A and B open and the pattern you get - ordinary physics and logic would suggest - should be the sum of these two component patterns.

But the quantum world doesn’t obey. When electrons or photons in a beam pass through the two slits, they act as waves and produce an interference pattern on the wall. The pattern with A and B open just isn’t the sum of the two patterns with either A or B open alone, but something entirely different - one that varies as light and dark stripes. (…)

The phenomenon may go well beyond physics, and one example of this is the violation of what logicians call the “sure thing” principle. This is the idea that if you prefer one action over another in one situation - coffee over tea in situation A, say, when it’s before noon - and you prefer the same thing in the opposite situation - coffee over tea in situation B, when it’s after noon - then you should have the same preference when you don’t know the situation: that is, coffee over tea when you don’t know what time it is.

Remarkably, people don’t respect this rule. (…)

Flexible logic

Suppose you ask people to put various objects, such as an ashtray, a painting and a sink, into one of two categories: “home furnishings” and “furniture”. Next, you ask if these objects belong to the combined category “home furnishings or furniture”. Obviously, if “ashtray” or “painting” belongs in home furnishings, then it certainly belongs in the bigger, more inclusive combined category too. But many experiments over the past two decades document what psychologists call the disjunction effect - that people often place things in the first category, but not in the broader one. Again, two possibilities listed simultaneously lead to strange results.

These experiments demonstrate that people aren’t logical, at least by classical standards. But quantum theory, Aerts argues, offers richer logical possibilities. For example, two quantum events, A and B, are described by so-called probability amplitudes, alpha and beta. To calculate the probability of A happening, you square the amplitude alpha; likewise, to work out the probability of B happening, you square beta. For A or B to happen, the probability amplitude is alpha plus beta. When you square this to work out the probability, you get the probability of A (alpha squared) plus that of B (beta squared) plus an additional amount - an “interference term” which might be positive or negative.
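
In symbols (a standard quantum-probability identity rather than a formula quoted in the article), with complex amplitudes α and β:

$$ P(A \text{ or } B) = |\alpha + \beta|^{2} = |\alpha|^{2} + |\beta|^{2} + 2\,\operatorname{Re}(\alpha^{*}\beta). $$

The last term is the interference term: classical probability simply adds $P(A)+P(B)$, whereas here the extra piece can be positive or negative, which is the flexibility the disjunction-effect models exploit.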

This interference term makes quantum logic more flexible. In fact, Aerts has shown that many results demonstrating the disjunction effect fit naturally within a model in which quantum interference can play a role. The way we violate the sure thing principle can be similarly explained with quantum interference, according to economist Jerome Busemeyer of Indiana University in Bloomington and psychologist Emmanuel Pothos of the University of Wales in Swansea. "Quantum probabilities have the potential to provide a better framework for modelling human decision making," says Busemeyer.

The strange links go beyond probability, Aerts argues, to the realm of quantum uncertainty. One aspect of this is that the properties of particles such as electrons do not exist until they are measured. The experiment doing the measuring determines what properties an electron might have.

The mathematics of quantum theory includes this effect by representing the quantum state of an electron by a so-called state vector - a kind of arrow existing in an abstract, high-dimensional space known as Hilbert space. An experiment can change the state vector arrow, projecting it in just one direction in the space. This is known as contextuality, and it represents how the context of a specific experiment changes the possible properties of the electron being measured.

The meaning of words, too, changes according to their context, giving language a “quantum” feel. For instance, you would think that if a thing, X, is also a Y, then a “tall X” would also be a “tall Y” - a tall oak is a tall tree, for example. But that’s not always the case. A chihuahua is a dog, but a tall chihuahua is not a tall dog; “tall” changes meaning by virtue of the word next to it. Likewise, the way “red” is defined depends on whether you are talking about “red wine”, “red hair”, “red eyes” or “red soil”. “The structure of human conceptual knowledge is quantum-like because context plays a fundamental role,” says Aerts.

These peculiar similarities also apply to how search engines retrieve information. Around a decade ago, computer scientists Dominic Widdows, now at Google Research in Pittsburgh, Pennsylvania, and Keith van Rijsbergen of the University of Glasgow, UK, realised that the mathematics they had been building into search engines was essentially the same as that of quantum theory. (…)

Quantum leaps

An urgent challenge is to get computers to find meaning in data in much the same way people do, says Widdows. If you want to research a topic such as the “story of rock” with geophysics and rock formation in mind, you don’t want a search engine to give you millions of pages on rock music. One approach would be to include “-songs” in your search terms in order to remove any pages that mention “songs”. This is called negation and is based on classical logic. While it would be an improvement, you would still find lots of pages about rock music that just don’t happen to mention the word songs.

Widdows has found that a negation based on quantum logic works much better. Interpreting “not” in the quantum sense means taking “songs” as an arrow in a multidimensional Hilbert space called semantic space, where words with the same meaning are grouped together. Negation means removing from the search any pages that share a component with this vector, which would include pages with words like music, guitar, Hendrix and so on. As a result, the search becomes much more specific to what the user wants.
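
A minimal sketch of that kind of vector negation (my own toy illustration, assuming made-up 3-dimensional vectors rather than Widdows’ actual semantic-space implementation):

```python
import numpy as np

# Toy "semantic space": each concept is a direction; these vectors are
# invented for illustration (real semantic spaces have hundreds of
# dimensions learned from text).
rock_geology = np.array([0.9, 0.1, 0.2])   # the sense of "rock" we want
songs        = np.array([0.1, 0.95, 0.3])  # the sense we want to negate

def quantum_not(query, unwanted):
    """Project `query` onto the subspace orthogonal to `unwanted`,
    removing every component the two vectors share."""
    u = unwanted / np.linalg.norm(unwanted)
    return query - np.dot(query, u) * u

refined = quantum_not(rock_geology, songs)
print(np.dot(refined, songs))  # ~0: the refined query no longer overlaps "songs"
```

Pages close to the “songs” direction then score near zero against the refined query, which is the behaviour described above.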

"It seems to work because it corresponds more closely to the vague reasoning people often use when searching for information," says Widdows. "We often rely on hunches, and traditionally, computers are very bad at hunches. This is just where the quantum-inspired models give fresh insights."

That work is now being used to create entirely new ways of retrieving information. Widdows, working with Trevor Cohen at the University of Texas in Houston, and others, has shown that quantum operations in semantic Hilbert spaces are a powerful means of finding previously unrecognised associations between concepts. This may even offer a route towards computers being truly able to discover things for themselves. (…)

Why should quantum logic fit human behaviour? Peter Bruza at Queensland University of Technology in Brisbane, Australia, suggests the reason is to do with our finite brain being overwhelmed by the complexity of the environment yet having to take action long before it can calculate its way to the certainty demanded by classical logic. Quantum logic may be more suitable to making decisions that work well enough, even if they’re not logically faultless. “The constraints we face are often the natural enemy of getting completely accurate and justified answers,” says Bruza.

This idea fits with the views of some psychologists, who argue that strict classical logic only plays a small part in the human mind. Cognitive psychologist Peter Gardenfors of Lund University in Sweden, for example, argues that much of our thinking operates on a largely unconscious level, where thought follows a less restrictive logic and forms loose associations between concepts.

Aerts agrees. “It seems that we’re really on to something deep we don’t yet fully understand.” This is not to say that the human brain or consciousness have anything to do with quantum physics, only that the mathematical language of quantum theory happens to match the description of human decision-making.

Perhaps only humans, with our seemingly illogical minds, are uniquely capable of discovering and understanding quantum theory. To be human is to be quantum.”

Mark Buchanan, American physicist and author, Quantum minds: Why we think like quarks, New Scientist, 05 September 2011 (Illustration source)

See also:

Vlatko Vedral: Decoding Reality: the universe as quantum information
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
The Concept of Laws. The special status of the laws of mathematics and physics, Lapidarium

Aug
3rd
Wed
permalink

What Is Reality?

"What we call reality,” [physicist] John Archibald Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer—a cosmic information-processing machine.”James Gleick, The Information: A History, a Theory, a Flood, Pantheon, 2011

"Reality is an intelligent conversation with the universe." - cited in What Is Reality?, BBC Horizon documentary, 2011

The illusion of reality | BBC

"Professor Jim Al-Khalili explores how studying the atom forced us to rethink the nature of reality itself. He discovers that there might be parallel universes in which different versions of us exist, finds out that empty space isn’t empty at all, and investigates the differences in our perception of the world in the universe and the reality.” — BBC Four, 2010

What Is Reality? | BBC Horizon

"There is a strange and mysterious world that surrounds us, a world largely hidden from our senses. The quest to explain the true nature of reality is one of the great scientific detective stories.

Clues have been pieced together from deep within the atom, from the event horizon of black holes, and from the far reaches of the cosmos. It may be that we are part of a cosmic hologram, projected from the edge of the universe. Or that we exist in an infinity of parallel worlds. Your reality may never look quite the same again.” — BBC Horizon, 2011 (Full playlist)

[This note will be gradually expanded…]

See also:

CERN, The Large Hadron Collider
Fermilab - Scientists at Fermilab carry out research in high-energy physics to answer the questions: What is the universe made of? How does it work? Where did it come from?
Quantum Optics, Quantum Nanophysics, Quantum Information, Universitat Wien
Realism, Stanford Encyclopedia of Philosophy
Objectivity, IEP
Phenomenology Online - Materials discussing and exemplifying phenomenological research
Facts, Stanford Encyclopedia of Philosophy
Consensus reality, Wiki
The Relativity of Truth - a brief résumé, Lapidarium

Jul
28th
Thu
permalink

Where Is Now? The Paradox Of The Present


The night sky is a time machine. Look out and you look back in time. But this “time travel by eyesight” is not just the province of astronomy. It’s as close as the machine on which you are reading these words. Your present exists at the mercy of many overlapping pasts. So where, then, is “now”?

As almost everyone knows, when you stare into the depths of space you are also looking back in time. Catch a glimpse of a relatively nearby star and you see it as it existed when, perhaps, Lincoln was president (if it’s 150 light-years away). Stars near the edge of our own galaxy are only seen as they appeared when the last ice age was in full bloom (30,000 light-years away). And those giant pinwheel assemblies of stars called galaxies are glimpsed as they existed millions, hundreds of millions or even billions of years in the past. (…)

Stranger still, the sky we see at any moment defines not a single past but multiple overlapping pasts of different depths. The star’s image from 100 years ago and the galaxy image from 100 million years ago reach us at the same time. All of those “thens” define the same “now” for us.

The multiple, foliated pasts comprising our present would be weird enough if it was just a matter of astronomy. But the simple truth is that every aspect of our personal “now” is a layered impression of a world already lost to the past.

To understand how this works, consider the simple fact (…) all we know about the world comes to us via signals: light waves, sound waves and electrical impulses running along our nerves. These signals move at a finite speed. It always takes some finite amount of time for the signal to travel from the world to your body’s sensors (and on to your brain).

A distant galaxy, a distant mountain peak, the not very distant light fixture on the ceiling and even the intimacy of a loved one’s face all live in the past. Those overlapping pasts are times that you — in your “now” — are no longer a part of.

Signal travel time constitutes a delay and all those overlapping delays constitute an essential separation. The inner world of your experience is, in a temporal sense, cut off from the outer world you inhabit.

Let’s take a few examples. Light travels faster than any other entity in the physical universe, propagating with the tremendous velocity of c = 300,000,000 m/s. From high school physics you know that the time it takes a light signal moving at c to cross some distance D is simply t = D/c.

When you look at the mountain peak 30 kilometers away you see it not as it exists now but as it existed 1/10,000th of a second ago. The light fixture three meters above your head is seen not as it exists now but as it was a hundred-millionth of a second ago. Gazing into your partner’s eyes, you see her (or him) not for who they are but for who they were 10⁻¹⁰ of a second in the past. Yes, these numbers are small. Their implication, however, is vast.
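
To make those figures concrete, here is a tiny sketch that just redoes the t = D/c arithmetic from the passage (the ~3 cm distance for a partner’s face is my back-calculation from the quoted 10⁻¹⁰ s, not a number given in the article):

```python
# Light-travel delays t = D/c for the article's examples.
C = 3.0e8  # speed of light in m/s (the rounded value used in the text)

examples = {
    "mountain peak, 30 km": 30_000.0,
    "ceiling light, 3 m": 3.0,
    "partner's face, ~3 cm (assumed)": 0.03,
}

for name, distance_m in examples.items():
    delay_s = distance_m / C  # seconds of "look-back time"
    print(f"{name}: seen as it was {delay_s:.0e} s ago")
```

Running it reproduces the 10⁻⁴, 10⁻⁸ and 10⁻¹⁰ second delays quoted above.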

We live, each of us, trapped in our own now.

The simple conclusions described above derive, in their way, from relativity theory and they seem to spell the death knell for a philosophical stance called Presentism. According to Presentism only the present moment has ontological validity. In other words: only the present truly exists; only the present is real.

Presentism holds an intuitive sway for many people. It just feels right. For myself, when I try to explore the texture of my own experience, I can’t help but feel a sense of the present’s dominance. Buddhism, with its emphasis on contemplative introspection, has developed a sophisticated presentist stance concerning the nature of reality. “Anyone who has ever meditated for any time,” the abbot of a Zen monastery once told me, “finds that the past and future are illusions.”

Yes, but …

The reality that even light travels at a finite speed forces us to confront the strange fact that, at best, the present exists at the fractured center of many overlapping pasts.

So where, then, are we in time? Where is our “now” and how does it live in the midst of a universe comprised of so many “thens”?”

Adam Frank, US physicist, astronomer and writer, Department of Physics and Astronomy, University of Rochester, Where Is Now? The Paradox Of The Present, npr, July 26, 2011 (Picture source)

See also:

Adam Frank, Hidden In Plain View: The Physics Of Cloaking Time, Space And Experience, npr, July 19, 2011 (Image: Jonathan Nackstrand/AFP)

The path light travels determines the image you see.

You never experience the world as it is. You only experience it in the way light brings it to you.

And light can be taught to lie.

Last week researchers at Cornell University announced they had created a time cloaking device. Using their machine they could hide an event from detection, even if it occurred in plain view of very capable detectors. (…)

Both experiments rely on the complex realization of a simple truth about our experience of the world. We have no “direct” knowledge of the world-in-itself but, instead, are forced to rely on signals carried to us from external objects. If the properties of the signals are somehow changed while they are traveling to us then our experience of the world is changed as well. (…)

Nature and light can, however, be manipulated in ways that can make illusions impossible to detect. This is the new physics of cloaking. (…)”

Thermodynamic Asymmetry in Time, Stanford Encyclopedia of Philosophy
The Experience and Perception of Time, Stanford Encyclopedia of Philosophy
Time tag on Lapidarium
Time tag on Lapidarium notes

Jul
18th
Mon
permalink

Making sense of a visible quantum object. How can an object that is visible to the naked eye be in two places at the same time?

"Can an object that is visible to the naked eye be in two places at the same time? Common sense and experience told us that the answer is "no" — until recently. In this presentation, physicist Aaron O’Connell tells us a little about the bizarre rules of quantum mechanics, which were thought to be completely different for human-scale objects — but are they really? In a breakthrough experiment, Dr O’Connell blurs that distinction by creating an object that is visible to the unaided eye, but provably in two places at the same time. In this talk, he suggests an intriguing way of thinking about the result:

(…) While there, in an experiment remarkable both for its conceptual simplicity and technical difficulty, Dr O’Connell was the first person to measure quantum effects in an object large enough to see with the naked eye [dissertation: PDF]. Named “Breakthrough of the year" by Science Magazine, the experiment shattered the previous record for the largest quantum object, showing decisively that there is no hard line between the quantum and everyday worlds.”

"Until now, all machines have moved according to the not-surprising laws of classical mechanics, which govern the motion of everyday objects. In contrast, a tiny machine unveiled this year jiggles in ways explicable only by the weird rules of quantum mechanics, which ordinarily govern molecules, atoms, and subatomic particles. The proto-quantum machine opens the way to myriad experimental devices and perhaps tests of our sense of reality.” — Science Magazine

Aaron O’Connell, American experimental quantum physicist. While working under Andrew N. Cleland and John M. Martinis at the University of California, Santa Barbara, he created the world’s first quantum machine, Making sense of a visible quantum object, Guardian, 20 June 2011

See also: ☞ Vlatko Vedral: Decoding Reality: the universe as quantum information

Jun
17th
Fri
permalink

Jameson Dungan on information and synthetic biology


"What is now proved was once only imagin’d." - William Blake, The Marriage of Heaven and Hell

"How profoundly and fundamentally important information and information theory are in the underlying operations of the universe. We no longer refer to “the speed of light” but rather to “the speed of information.” Quantum Information Theory [3] states information must be physical. Information has deep and profound connections to entropy, thermodynamics, and is an underlying property of reality. If entropy is responsible for the arrow of time, then the speed of information is responsible for causality. Ultimately, we discovered that information and information processing are the core components of what we call life.

In the year 2010, humanity understood that life is code, code is information, and information could be hacked when the first synthetic life form was created inside a digital computer as code, printed out and booted up, becoming a self replicating entity. This process might sound crude and archaic today in our Post-Singularity Age, but this was the time that set the stage that gave rise to the world as we know it. History does have its visionaries, even before the discovery of DNA the great Erwin Schrodinger [4] stated “Life is information,” and that was in 1944, well over 100 years ago.

As biological technologies proliferated at many times the rate of Moore’s Law, [5] it became apparent that synthetic biology was going to be the Second Industrial Revolution. Before this time, molecular manufacturing used violent techniques such as harsh acids, extreme temperatures, intense pressures, and toxic chemicals, all by brute force. (…) The synergy of information technology and biology was amazing. Not only were they compatible, but complementary as well. Genes are linear information, digital code that can be executed through transcription factors. Biology is an information process we call metabolism. Today, we don’t even think of it as synthetic biology, but this technology is responsible for the cleaning of our air and producing pure water, as well as fabricating a wide range of materials, pharmaceuticals, and growing energy from artificial and modified versions of photosynthesis. It was long argued that pixel density would remain superior. However, who can argue against the size of cells operating as pixels, only many orders of magnitude smaller, especially when burnt-out pixels can regenerate themselves.

Today, whole buildings and cities emerge from genetic code. The self-assembling cities and maintenance repair systems for the buildings within them are all based in code. Even at the beginning of the 21st century, microbes were used to mine a quarter of the world’s copper supply through bioleaching. They discovered extremophiles [6] capable of metabolizing rocks and excreting pure copper ore. This technique has been refined and genetically upgraded to mine huge asteroids, giving us the stockpiles of raw materials we maintain in orbit today. These resources are used for the space stations, dry docks, space based solar fields, and the artificial ring under construction around Earth.

After some time, this mysterious field of biology became standardized, and tools were abstracted for ease of engineering. It is almost amusing to think back to a time before there was engineering of any field of science. The first field developed was electrical engineering. Slowly, science yielded practicable formulas, such as Maxwell Equations, [7] which describe electrical and magnetic phenomena. After collective experiments and equations gave rise to enough data and accurate theory, people started to understand that these rules could be designed, standardized, made modular, and ultimately be engineered. There were whole Computer Assisted Drafting (CAD) systems that made constructing electrical systems so easy that children with little technical understanding could engineer circuits and computers. The same thing happened in the field of Synthetic Organic Chemistry.

First, there was alchemy and out of its ashes rose chemistry and eventually the same thing happened. Enough data, formulas, and experimentation gave rise to the notion that not only could chemistry be understood and predicted, but purposely and specifically designed and engineered. Engineering Biology was no different. It was the garage hackers and wetware tinkers who started the huge industries today, such as Microbesoft. [8] As we now see, there are even home compilers and incubator kits for children to Do-It-Yourself (DIY) their pets. They design them from lists of parts, or they make their own. The computer compiles the virtual organism and compresses the information into DNA. Once the biological program begins to run, it takes only six to eight weeks in the incubator and a totally unique species of a pet is yours.

To simplify things, the English alphabet contains twenty-six letters. With these twenty-six symbols, one can connect them together to make all the known words in the entire language, and coin new words as well. Those words can join to make statements and sentences with meanings, which can go on to make whole paragraphs, pages, books, essentially every type or body of information since the dawn of time; likewise in biology. Everything physical—all the molecular machinery of all life on Earth—is made up of twenty amino acids joined together in linear sequences. These twenty amino acids are the alphabet of life; only life speaks in three dimensions.
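
As a toy illustration of that alphabet (my own sketch, not anything from the essay; the table below is only a small excerpt of the standard genetic code):

```python
# DNA is read three letters (one codon) at a time; each codon names one of
# the twenty amino acids, the "letters" of the protein alphabet.
GENETIC_CODE = {  # partial standard code, one-letter amino acid symbols
    "ATG": "M",  # methionine (start)
    "TGG": "W",  # tryptophan
    "GGC": "G",  # glycine
    "AAA": "K",  # lysine
    "TAA": "*",  # stop
}

def translate(dna: str) -> str:
    """Read a DNA string codon by codon and spell out the protein."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = GENETIC_CODE.get(dna[i:i + 3], "?")
        if aa == "*":  # stop codon ends translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTGGGGCAAATAA"))  # -> "MWGK"
```

The linear string of codons becomes a linear string of amino acids, which then folds into the three-dimensional machinery the essay goes on to describe.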

The difficulty of engineering biology was not in understanding the compiler language of the ribosome, or how to standardize and modularize biological parts. It was the non-linear complex adaptive systems that were difficult to predict and engineer. This was because of the recursive feedback loops and self-assembling characteristics that would yield emergent properties and systems many times more complex than the levels below them. Proteins and enzymes spontaneously synchronize and self-assemble into higher structures, orchestrating a symphony of gene expression and metabolic ballet. Today, the information of many individual genomes can exist redundantly in gene swarms. The idea of swarming was inspired by bees because in each hive, bees contain fifty percent identical DNA, having all inherited half of their genes from the same mother, the queen. This is analogous to early hard drive RAID 2 configurations; only today each organism in the swarm contains fifty percent of the collective genes in the swarm. Many organisms of the collective group can fail or be destroyed while preserving the complete integrity of all individual genomes. These systems could have only been understood through developments in the mathematical fields of non-linear dynamics, chaos theory, [9] and complexity science. (…)

Now we are experimenting with a wide new range of molecules and elements, broadening the definition of life and bio-molecules. We have artificial base-pairs other than G, A, C, and T which code for more than the original twenty amino acids by using a four or five codon [10] system instead of the traditional three. (…)

There is almost no end to how flexible biology could be and what we can create from recursive information. In the Post-Singularity world we are limited only by our imagination.”

Jameson Dungan, a student of nature, life-hacker, and singularity enthusiast pursuing a path in Synthetic Biology, In Our Post-Singularity Age, The Journal of Geoethical Nanotechnology, June 2011 (Illustration source: Life 2.0, The Economist)

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Jun
16th
Thu
permalink

The Future of Science…Is Art?

Leonardo Da Vinci, Study for an angel’s face from The Virgin of the Rocks, ca. 1483

"This pencil study stunningly illustrates for me a key parallel between science and the arts: They strive for representation and expression, to capture some essential truth about a chosen subject with simplicity and economy. My equations and diagrams are no more the world I’m trying to describe than the artist’s pencil strokes are the woman he drew. However, it shows what’s possible, despite that limitation. The woman that emerges from the simple pencil strokes is so alive that she stares into your soul. In attempting to capture the universe, I mustn’t confuse my equations with the real thing, but from them some essential truths about nature will spring forth, transcending the mathematics and coming to life."

— Clifford Johnson, Physicist, University of Southern California © Alinari Archives/Corbis

"Physics is a form of insight, and as such, it’s a form of art."

David Bohm, American-born British quantum physicist who made contributions in the fields of theoretical physics, philosophy and neuropsychology, and to the Manhattan Project (1917-1992)

Science needs the arts. We need to find a place for the artist within the experimental process, to rediscover what Bohr observed when he looked at those cubist paintings. The current constraints of science make it clear that the breach between our two cultures is not merely an academic problem that stifles conversation at cocktail parties. Rather, it is a practical problem, and it holds back science’s theories. If we want answers to our most essential questions, then we will need to bridge our cultural divide. By heeding the wisdom of the arts, science can gain the kinds of new insights and perspectives that are the seeds of scientific progress. (…)

Since its inception in the early 20th century, neuroscience has succeeded in becoming intimate with the brain. Scientists have reduced our sensations to a set of discrete circuits. They have imaged our cortex as it thinks about itself, and calculated the shape of ion channels, which are machined to subatomic specifications.

And yet, despite this vast material knowledge, we remain strangely ignorant of what our matter creates. We know the synapse, but don’t know ourselves. In fact, the logic of reductionism implies that our self-consciousness is really an elaborate illusion, an epiphenomenon generated by some electrical shudder in the frontal cortex. There is no ghost in the machine; there is only the vibration of the machinery. Your head contains 100 billion electrical cells, but not one of them is you, or knows or cares about you. In fact, you don’t even exist. The brain is nothing but an infinite regress of matter, reducible to the callous laws of physics. (…)

Neuroscience excels at unraveling the mind from the bottom up. But our self-consciousness seems to require a top-down approach. As the novelist Richard Powers wrote, “If we knew the world only through synapses, how could we know the synapse?” The paradox of neuroscience is that its astonishing progress has exposed the limitations of its paradigm, as reductionism has failed to solve our emergent mind. Much of our experiences remain outside its range.

This world of human experience is the world of the arts. The novelist and the painter and the poet embrace those ephemeral aspects of the mind that cannot be reduced, or dissected, or translated into the activity of an acronym. They strive to capture life as it’s lived. As Virginia Woolf put it, the task of the novelist is to “examine for a moment an ordinary mind on an ordinary day…[tracing] the pattern, however disconnected and incoherent in appearance, which each sight or incident scores upon the consciousness.” She tried to describe the mind from the inside.

Neuroscience has yet to capture this first-person perspective. Its reductionist approach has no place for the “I” at the center of everything. It struggles with the question of qualia. Artists like Woolf, however, have been studying such emergent phenomena for centuries, and have amassed a large body of knowledge about such mysterious aspects of the mind. They have constructed elegant models of human consciousness that manage to express the texture of our experience, distilling the details of real life into prose and plot. That’s why their novels have endured: because they feel true. And they feel true because they capture a layer of reality that reductionism cannot.”

Jonah Lehrer, American science journalist, The Future of Science…Is Art?, SEED, Jan 16, 2008

See also:

Werner Herzog and Lawrence Krauss on Connecting Science and Art
Richard Feynman and Jirayr Zorthian on science, art and beauty
Piet Hein on Art and Science
Art and Science tag on Lapidarium