Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Mar 27th, Wed

Hilary Putnam - ‘A philosopher in the age of science’


"Imagine two scientists are proposing competing theories about the motion of the moon. One scientist argues that the moon orbits the earth at such and such a speed due to the effects of gravity and other Newtonian forces. The other, agreeing to the exact same observations, argues that behind Newtonian forces there are actually undetectable space-aliens who are using sophisticated tractor beams to move every object in the universe. No amount of observation will resolve this conflict. They agree on every observation and measurement. One just has a more baroque theory than the other. Reasonably, most of us think the simpler theory is better.

But when we ask why this theory is better, we find ourselves resorting to things that are patently non-factual. We may argue that theories which postulate useless entities are worse than simpler ones—citing the value of simplicity. We may argue that the space-alien theory contradicts too many other judgements—citing the value of coherence. We can give a whole slew of reasons why one theory is better than another, but there is no rulebook out there for scientists to point to which resolves the matter objectively. Even appeals to the great pragmatic value of the first theory or arguments that point out the lack of explanatory and predictive power of the space-alien theory, are still appeals to a value. No amount of observation will tell you why being pragmatic makes one theory better—it is something for which you have to argue. No matter what kind of fact we are trying to establish, it is going to be inextricably tied to the values we hold. (…)

In [Hilary Putnam’s] view, there is no reason to suppose that a complete account of reality can be given using a single set of concepts. That is, it is not possible to reduce all types of explanation to one set of objective concepts. Suppose I say, “Keith drove like a maniac” and you ask me why. We would usually explain the event in terms of value-laden concepts like intention, emotion, and so on—“Keith was really stressed out”—and this seems to work perfectly fine. Now we can also take the exact same event and describe it using an entirely different set of scientific concepts— say “there was a chain of electrochemical reactions from this brain to this foot” or “there was x pressure on the accelerator which caused y torque on the wheels.” These might be true descriptions, but they simply don’t give us the whole or even a marginally complete picture of Keith driving like a maniac. We could describe every single relevant physical detail of that event and still have no explanation. Nor, according to Putnam, should we expect there to be. The full scope of reality is simply too complex to be fully described by one method of explanation.

The problem with all of this, and one that Putnam has struggled with, is what sort of picture of reality we are left with once we accept these three central arguments: the collapse of the fact-value dichotomy, the truth of semantic externalism and conceptual relativity. (…)

We could—like Putnam before the 1970s—become robust realists and simply accept that values and norms are no less a part of the world than elementary particles and mathematical objects. We could—like Putnam until the 1990s—become “internal realists” and, in a vaguely Kantian move, define reality in terms of mind-dependent concepts and idealised rational categories. Or we could adopt Putnam’s current position—a more modest realism which argues that there is a mind-independent world out there and that it is compatible with our ordinary human values. Of course Putnam has his reasons for believing what he does now, and they largely derive from his faith in our ability to represent reality correctly. But the strength of his arguments convincing us to be wary of the scientific stance leaves us with little left of trust in it.”

A philosopher in the age of science, Prospect, March 14, 2013. [Hilary Putnam — American philosopher, mathematician and computer scientist who has been a central figure in analytic philosophy since the 1960s, currently Cogan University Professor Emeritus at Harvard University.]

Feb 3rd, Sun

'Elegance,' 'Symmetry,' and 'Unity': Is Scientific Truth Always Beautiful?


"Today the grandest quest of physics is to render compatible the laws of quantum physics—how particles in the subatomic world behave—with the rules that govern stars and planets. That’s because, at present, the formulas that work on one level implode into meaninglessness at the other level. This is deeply ungainly, and significant when the two worlds collide, as occurs in black holes. The quest to unify quantum physics (micro) and general relativity (macro) has spawned heroic efforts, the best-known candidate for a grand unifying concept presently being string theory. String theory proposes that subatomic particles are not particles at all but closed or open vibrating strings, so tiny, a hundred billion billion times shorter than an atomic nucleus’s diameter, that no human instrument can detect them. It’s the “music of the spheres”—think vibrating harp strings—made literal.

A concept related to string theory is “supersymmetry.” Physicists have shown that at extremely high energy levels, similar to those that existed a micro-blink after the big bang, the strength of the electromagnetic force, and strong and weak nuclear forces (which work only on subatomic levels), come tantalizingly close to converging. Physicists have conceived of scenarios in which the three come together precisely, an immensely intellectually and aesthetically pleasing accomplishment. But those scenarios imply the existence of as-yet-undiscovered “partners” for existing particles: The electron would be joined by a “selectron,” quarks by “squarks,” and so on. There was great hope that the $8-billion Large Hadron Collider would provide indirect evidence for these theories, but so far it hasn’t. (…)

[Marcelo Gleiser]: “We look out in the world and we see a very complicated pattern of stuff, and the notion of symmetry is an important way to make sense of the mess. The sun and moon are not perfect spheres, but that kind of approximation works incredibly well to simulate the behavior of these bodies.”

But the idea that what’s beautiful is true and that “symmetry rules,” as Gleiser puts it, “has been catapulted to an almost religious notion in the sciences,” he says. In his own book A Tear at the Edge of Creation (Free Press), Gleiser made a case for the beauty inherent in asymmetry—in the fact that neutrinos, the most common particles in the universe, spin only in one direction, for example, or that amino acids can be produced in laboratories in “left-handed” or “right-handed” forms, but only the “left-handed” form appears in nature. These are nature’s equivalent of Marilyn Monroe’s mole, attractive because of their lopsidedness, and Orrell also makes use of those examples.

But Weinberg, the Nobel-winning physicist at the University of Texas at Austin, counters: “Betting on beauty works remarkably well.” The Large Hadron Collider’s failure to produce evidence of supersymmetry is “disappointing,” he concedes, but he notes that plenty of elegant theories have waited years, even decades, for confirmation. Copernicus’s theory of a Sun-centered universe was developed entirely without experiment—he relied on Ptolemy’s data—and it was eventually embraced precisely because his description of planetary motion was simply more economical and elegant than those of his predecessors; it turned out to be true.

Closer to home, Weinberg says his own work on the weak nuclear force and electromagnetism had its roots in remarkably elegant, purely abstract theories of researchers who came before him, theories that, at first, seemed to be disproved by evidence but were too elegant to stop thinking about. (…)

To Orrell, it’s not just that many scientists are too enamored of beauty; it’s that their notion of beauty is ossified. It is “kind of clichéd,” Orrell says. “I find things like perfect symmetry uninspiring.” (In fairness, the Harvard theoretical physicist Lisa Randall has used the early unbalanced sculptures of Richard Serra as an example of how the asymmetrical can be as fascinating as the symmetrical, in art as in physics. She finds this yin-yang tension perfectly compatible with modern theorizing.)

Orrell also thinks it is more useful to study the behavior of complex systems rather than their constituent elements. (…)

Outside of physics, Orrell reframes complaints about “perfect-model syndrome” in aesthetic terms. Classical economists, for instance, treat humans as symmetrical in terms of what motivates decision-making. In contrast, behavioral economists are introducing asymmetry into that field by replacing Homo economicus with a quirkier, more idiosyncratic and human figure—an aesthetic revision, if you like. (…)

The broader issue, though, is whether science’s search for beautiful, enlightening patterns has reached a point of diminishing returns. If science hasn’t yet hit that point, might it be approaching it? The search for symmetry in nature has had so many successes, observes Stephon Alexander, a Dartmouth physicist, that “there is a danger of forgetting that nature is the one that decides where that game ends.”

Christopher Shea, American writer and editor, Is Scientific Truth Always Beautiful?, The Chronicle of Higher Education, Jan 28, 2013.

The Asymmetry of Life

[Image courtesy of Ben Lansky]

"Look into a mirror and you’ll simultaneously see the familiar and the alien: an image of you, but with left and right reversed.

Left-right inequality has significance far beyond that of mirror images, touching on the heart of existence itself. From subatomic physics to life, nature prefers asymmetry to symmetry. There are no equal liberties where neutrinos and proteins are concerned. In the case of neutrinos, particles that spill out of the sun’s nuclear furnace and pass through you by the trillions every second, only leftward-spinning ones exist. Why? No one really knows.

Proteins are long chains of amino acids that can be either left- or right-handed. Here, handedness has to do with how these molecules interact with polarized light, rotating it either to the left or to the right. When synthesized in the lab, amino acids come out fifty-fifty. In living beings, however, all proteins are made of left-handed amino acids. And all sugars in RNA and DNA are right-handed. Life is fundamentally asymmetric.

Is the handedness of life, its chirality (think chiromancer, which means “palm reader”), linked to its origins some 3.5 billion years ago, or did it develop after life was well on its way? If one traces life’s origins from its earliest stages, it’s hard to see how life began without molecular building blocks that were “chirally pure,” consisting solely of left- or right-handed molecules. Indeed, many models show how chirally pure amino acids may link to form precursors of the first protein-like chains. But what could have selected left-handed over right-handed amino acids?

My group’s research suggests that early Earth’s violent environmental upheavals caused many episodes of chiral flip-flopping. The observed left-handedness of terrestrial amino acids is probably a local fluke. Elsewhere in the universe, perhaps even on other planets and moons of our solar system, amino acids may be right-handed. But only sampling such material from many different planetary platforms will determine whether, on balance, biology is left-handed, right-handed, or ambidextrous.”

Marcelo Gleiser, The Asymmetry of Life, Seed Magazine, Sep 7, 2010.

"One of the deepest consequences of symmetries of any kind is their relationship with conservation laws. Every symmetry in a physical system, be it balls rolling down planes, cars moving on roads, planets orbiting the Sun, a photon hitting an electron, or the expanding Universe, is related to a conserved quantity, a quantity that remains unchanged in the course of time. In particular, external (spatial and temporal) symmetries are related to the conservation of momentum and energy, respectively: the total energy and momentum of a system that is temporally and spatially symmetric remains unchanged.

The elementary particles of matter live in a reality very different from ours. The signature property of their world is change: particles can morph into one another, changing their identities. […] One of the greatest triumphs of twentieth-century particle physics was the discovery of the rules dictating the many metamorphoses of matter particles and the symmetry principles behind them. One of its greatest surprises was the realization that some of the symmetries are violated and that these violations have very deep consequences. (…) p.27

Even though matter and antimatter appear on an equal footing in the equations describing relativistic particles, antimatter occurs only rarely. […] Somehow, during its infancy, the cosmos selected matter over antimatter. This imperfection is the single most important factor dictating our existence. (…)

Back to the early cosmos: had there been an equal quantity of antimatter particles around, they would have annihilated the corresponding particles of matter and all that would be left would be lots of gamma-ray radiation and some leftover protons and antiprotons in equal amounts. Definitely not our Universe. The tiny initial excess of matter particles is enough to explain the overwhelming excess of matter over antimatter in today’s Universe. The existence of matter, the stuff we and everything else are made of, depends on a primordial imperfection, the matter-antimatter asymmetry. (…) p.29.

We have seen how the weak interactions violate a series of internal symmetries: charge conjugation, parity, and even the combination of the two. The consequences of these violations are deeply related to our existence: they set the arrow of time at the microscopic level, providing a viable mechanism to generate the excess of matter over antimatter. […] The message from modern particle physics and cosmology is clear: we are the products of imperfections in Nature. (…)

It is not symmetry and perfection that should be our guiding principle, as it has been for millennia. We don’t have to look for the mind of God in Nature and try to express it through our equations. The science we create is just that, our creation. Wonderful as it is, it is always limited, it is always constrained by what we know of the world. […] The notion that there is a well-defined hypermathematical structure that determines all there is in the cosmos is a Platonic delusion with no relationship to physical reality. (…) p. 35.

The critics of this idea miss the fact that a meaningless cosmos that produced humans (and possibly other intelligences) will never be meaningless to them (or to the other intelligences). To exist in a purposeless Universe is even more meaningful than to exist as the result of some kind of mysterious cosmic plan. Why? Because it elevates the emergence of life and mind to a rare event, as opposed to a ubiquitous and premeditated one. For millennia, we believed that God (or gods) protected us from extinction, that we were chosen to be here and thus safe from ultimate destruction. […]

When science proposes that the cosmos has a sense of purpose wherein life is a premeditated outcome of natural events, a similar safety blanket mechanism is at play: if life fails here, it will succeed elsewhere. We don’t really need to preserve it. To the contrary, I will argue that unless we accept our fragility and cosmic loneliness, we will never act to protect what we have. (…)

The laws of physics and the laws of chemistry as presently understood have nothing to say about the emergence of life. As Paul Davies remarked in Cosmic Jackpot, notions of a life principle suffer from being teleologic, explaining life as the end goal, a purposeful cosmic strategy. The human mind, of course, would be the crown jewel of such creative drive. Once again we are “chosen” ones, a dangerous proposal. […] Arguments shifting the “mind of God” to the “mind of the cosmos” perpetuate our obsession with the notion of Oneness. Our existence need not be planned to be meaningful.” (…) p.49.

Unified theories, life principles, and self-aware universes are all expressions of our need to find a connection between who we are and the world we live in. I do not question the extreme importance of understanding the connection between man and the cosmos. But I do question that it has to derive from unifying principles. (…) p.50.

My point is that there is no Final Truth to be discovered, no grand plan behind creation. Science advances as new theories engulf or displace old ones. The growth is largely incremental, punctuated by unexpected, worldview-shattering discoveries about the workings of Nature. […]

Once we understand that science is the creation of human minds and not the pursuit of some divine plan (even if metaphorically) we shift the focus of our search for knowledge from the metaphysical to the concrete. (…) p.51.

For a clever fish, water is “just right” for it to swim in. Had it been too cold, it would freeze; too hot, it would boil. Surely the water temperature had to be just right for the fish to exist. “I’m very important. My existence cannot be an accident,” the proud fish would conclude. Well, he is not very important. He is just a clever fish. The ocean temperature is not being controlled with the purpose of making it possible for it to exist. Quite the opposite: the fish is fragile. A sudden or gradual temperature swing would kill it, as any trout fisherman knows. We so crave meaningful connections that we see them even when they are not there.

We are soulful creatures in a harsh cosmos. This, to me, is the essence of the human predicament. The gravest mistake we can make is to think that the cosmos has plans for us, that we are somehow special from a cosmic perspective. (…) p.52

We are witnessing the greatest mass extinction since the demise of the dinosaurs 65 million years ago. The difference is that for the first time in history, humans, and not physical causes, are the perpetrators. […] Life recovered from the previous five mass extinctions because the physical causes eventually ceased to act. Unless we understand what is happening and start acting together as a species, we may end up carving the path toward our own destruction. (…)” p.56

Marcelo Gleiser is the Appleton Professor of Natural Philosophy at Dartmouth College, A Tear at the Edge of Creation, Free Press, 2010.

See also:

Symmetry in Physics - Bibliography - PhilPapers
The Concept of Laws. The special status of the laws of mathematics and physics, Lapidarium notes
Universe tag on Lapidarium notes

Dec 10th, Mon

Cargo cult science by Richard Feynman

Adapted from the Caltech commencement address given in 1974.

"During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas—which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked—or very little of it did.
 
But even today I meet lots of people who sooner or later get me into a conversation about UFOs, or astrology, or some form of mysticism, expanded consciousness, new types of awareness, ESP, and so forth. And I’ve concluded that it’s not a scientific world.
 
Most people believe so many wonderful things that I decided to investigate why they did. And what has been referred to as my curiosity for investigation has landed me in a difficulty where I found so much junk that I’m overwhelmed. First I started out by investigating various ideas of mysticism, and mystic experiences. I went into isolation tanks and got many hours of hallucinations, so I know something about that. Then I went to Esalen, which is a hotbed of this kind of thought (it’s a wonderful place; you should go visit there). Then I became overwhelmed. I didn’t realize how much there was.
 
At Esalen there are some large baths fed by hot springs situated on a ledge about thirty feet above the ocean. One of my most pleasurable experiences has been to sit in one of those baths and watch the waves crashing onto the rocky shore below, to gaze into the clear blue sky above, and to study a beautiful nude as she quietly appears and settles into the bath with me.
 
One time I sat down in a bath where there was a beautiful girl sitting with a guy who didn’t seem to know her. Right away I began thinking, “Gee! How am I gonna get started talking to this beautiful nude babe?”
 
I’m trying to figure out what to say, when the guy says to her, “I’m, uh, studying massage. Could I practice on you?”
 
"Sure," she says. They get out of the bath and she lies down on a massage table nearby.
 
I think to myself, “What a nifty line! I can never think of anything like that!” He starts to rub her big toe. “I think I feel it,” he says. “I feel a kind of dent—is that the pituitary?”
 
I blurt out, “You’re a helluva long way from the pituitary, man!”
 
They looked at me, horrified—I had blown my cover—and said, “It’s reflexology!”
 
I quickly closed my eyes and appeared to be meditating.
 
That’s just an example of the kind of things that overwhelm me. I also looked into extrasensory perception and PSI phenomena, and the latest craze there was Uri Geller, a man who is supposed to be able to bend keys by rubbing them with his finger. So I went to his hotel room, on his invitation, to see a demonstration of both mindreading and bending keys. He didn’t do any mindreading that succeeded; nobody can read my mind, I guess. And my boy held a key and Geller rubbed it, and nothing happened. Then he told us it works better under water, and so you can picture all of us standing in the bathroom with the water turned on and the key under it, and him rubbing the key with his finger. Nothing happened. So I was unable to investigate that phenomenon.
 
But then I began to think, what else is there that we believe? (And I thought then about the witch doctors, and how easy it would have been to check on them by noticing that nothing really worked.) So I found things that even more people believe, such as that we have some knowledge of how to educate. There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you’ll see the reading scores keep going down—or hardly going up in spite of the fact that we continually use these same people to improve the methods. There’s a witch doctor remedy that doesn’t work. It ought to be looked into; how do they know that their method should work? Another example is how to treat criminals. We obviously have made no progress—lots of theory, but no progress—in decreasing the amount of crime by the method that we use to handle criminals.
 
Yet these things are said to be scientific. We study them. And I think ordinary people with commonsense ideas are intimidated by this pseudoscience. A teacher who has some good idea of how to teach her children to read is forced by the school system to do it some other way—or is even fooled by the school system into thinking that her method is not necessarily a good one. Or a parent of bad boys, after disciplining them in one way or another, feels guilty for the rest of her life because she didn’t do “the right thing,” according to the experts.
 
So we really ought to look into theories that don’t work, and science that isn’t science.
 
I think the educational and psychological studies I mentioned are examples of what I would like to call cargo cult science. In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.
 
Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
 
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
 
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
 
The easiest way to explain this idea is to contrast it, for example, with advertising. Last night I heard that Wesson oil doesn’t soak through food. Well, that’s true. It’s not dishonest; but the thing I’m talking about is not just a matter of not being dishonest, it’s a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated at another temperature, they all will— including Wesson oil. So it’s the implication which has been conveyed, not the fact, which is true, and the difference is what we have to deal with.
 
We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.
 
A great deal of their difficulty is, of course, the difficulty of the subject and the inapplicability of the scientific method to the subject. Nevertheless it should be remarked that this is not the only difficulty. That’s why the planes didn’t land—but they don’t land.
 
We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
 
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
 
But this long history of learning how not to fool ourselves—of having utter scientific integrity—is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.
 
The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.
 
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
 
For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.
 
One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish both kinds of results.
 
I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish it at all. That’s not giving scientific advice.
 
Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this—it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.
 
I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person—to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.
 
She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happens.
 
Nowadays there’s a certain danger of the same thing happening, even in the famous (?) field of physics. I was shocked to hear of an experiment done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen with light hydrogen, he had to use data from someone else’s experiment on light hydrogen, which was done on different apparatus. When asked why, he said it was because he couldn’t get time on the program (because there’s so little time and it’s such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn’t be any new result. And so the men in charge of programs at NAL are so anxious for new results, in order to get more money to keep the thing going for public relations purposes, they are destroying—possibly—the value of the experiments themselves, which is the whole purpose of the thing. It is often hard for the experimenters there to complete their work as their scientific integrity demands.
 
All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.
 
The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.
 
He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.
 
Now, from a scientific standpoint, that is an A-number-one experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running.
 
I looked into the subsequent history of this research. The next experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of cargo cult science.
 
Another example is the ESP experiments of Mr. Rhine, and other people. As various people have made criticisms—and they themselves have made criticisms of their own experiments—they improve the techniques so that the effects are smaller, and smaller, and smaller until they gradually disappear. All the parapsychologists are looking for some experiment that can be repeated—that you can do again and get the same effect—statistically, even. They run a million rats (no, it’s people this time); they do a lot of things and get a certain statistical effect. Next time they try it they don’t get it any more. And now you find a man saying that it is an irrelevant demand to expect a repeatable experiment. This is science?
 
This man also speaks about a new institution, in a talk in which he was resigning as Director of the Institute of Parapsychology. And, in telling people what to do next, he says that one of the things they have to do is be sure they only train students who have shown their ability to get PSI results to an acceptable extent—not to waste their time on those ambitious and interested students who get only chance results. It is very dangerous to have such a policy in teaching—to teach students only how to get certain results, rather than how to do an experiment with scientific integrity.
 
So I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.”     


Richard Feynman, American theoretical physicist known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics (he proposed the parton model), Laureate of the Nobel Prize in Physics (1918-1988), Cargo cult science, Caltech commencement address given in 1974. (Picture sources: 1) Scientific American, 2) Richard Feynman at Caltech giving his famous lecture he entitled "There’s Plenty of Room at the Bottom." (credit: California Institute of Technology))

See also:

Richard Feynman on how we would look for a new law (the key to science)
Richard Feynman on the way nature work: “You don’t like it? Go somewhere else!”
Richard Feynman on the likelihood of Flying Saucers
Richard Feynman tag on Lapidarium

Sep 9th, Sun

Philosophy vs science: which can answer the big questions of life?


"In the eighteenth century, philosophers considered the whole of human knowledge, including science, to be their field and discussed questions such as: did the universe have a beginning? However, in the nineteenth and twentieth centuries, science became too technical and mathematical for the philosophers, or anyone else except a few specialists. Philosophers reduced the scope of their inquiries so much that Wittgenstein, the most famous philosopher of this century, said, “The sole remaining task for philosophy is the analysis of language.” (…)

However, if we do discover a complete theory, it should in time be understandable in broad principle by everyone, not just a few scientists. Then we shall all, philosophers, scientists, and just ordinary people, be able to take part in the discussion of the question of why it is that we and the universe exist.”

Stephen Hawking, A Brief History of Time, Bantam Dell Publishing Group, 1988.

"Science is what you know, philosophy is what you don’t know"
Bertrand Russell

"Every science begins as philosophy and ends as art; it arises in hypothesis and flows into achievement."

Will Durant, American writer, historian, and philosopher (1885-1981), The Pleasures of Philosophy, 1929.

"Getting to your question of morality, for example, science provides the basis for moral decisions, which are sensible only if they are based on reason, which is itself based on empirical evidence. Without some knowledge of the consequences of actions, which must be based on empirical evidence, then I think “reason” alone is impotent. If I don’t know what my actions will produce, then I cannot make a sensible decision about whether they are moral or not. Ultimately, I think our understanding of neurobiology and evolutionary biology and psychology will reduce our understanding of morality to some well-defined biological constructs. (…)

Take homosexuality, for example. Iron Age scriptures might argue that homosexuality is “wrong”, but scientific discoveries about the frequency of homosexual behaviour in a variety of species tell us that it is completely natural in a rather fixed fraction of populations and that it has no apparent negative evolutionary impacts. This surely tells us that it is biologically based, not harmful and not innately “wrong”. In fact, I think you actually accede to this point about the impact of science when you argue that our research into non-human cognition has altered our view of ethics. (…)

"Why" questions

The [“why”] question is meaningless. (…) Not only has “why” become “how” but “why” no longer has any useful meaning, given that it presumes purpose for which there is no evidence. (…)

It is not a large leap of the imagination to expect that we will one day be able to break down those social actions, studied on a macro scale, to biological reactions at a micro scale.

In a purely practical sense, this may be computationally too difficult to do in the near future, and maybe it will always be so, but everything I know about the universe makes me timid to use the word always. What isn’t ruled out by the laws of physics is, in some sense, inevitable. So, right now, I cannot imagine that I could computationally determine the motion of all the particles in the room in which I am breathing air, so that I have to take average quantities and do statistics in order to compute physical behaviour. But, one day, who knows? (…)

We won’t really know the answer to whether science can yield a complete picture of reality, good at all levels, unless we try. (…) I continue to be surprised by the progress that is possible by continuing to ask questions of nature and let her answer through experiment. Stars are easier to understand than people, I expect, but that is what makes the enterprise so exciting.

The mysteries are what make life worth living and I would be sad if the day comes when we can no longer find answerable questions that have yet to be answered, and puzzles that can be solved. What surprises me is how we have become victims of our own success, at least in certain areas. When it comes to the universe as a whole, we may be frighteningly close to the limits of empirical inquiry as a guide to understanding. After that, we will have to rely on good ideas alone, and that is always much harder and less reliable.”

Lawrence Krauss, Canadian-American theoretical physicist who is a professor of physics, Foundation Professor of the School of Earth and Space Exploration, and director of the Origins Project at Arizona State University, Philosophy v science: which can answer the big questions of life?, The Observer, 9 Sept 2012

[This post will be gradually expanded…]

See also:

Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense
David Deutsch: A new way to explain explanation
Galileo and the relationship between the humanities and the sciences
Will Durant, The Pleasures of Philosophy

Jul 24th, Tue

Dirk Helbing on A New Kind Of Socio-inspired Technology

The big unexplored continent in science is actually social science, so we really need to understand much better the principles that make our society work well, and socially interactive systems. Our future information society will be characterized by computers that behave like humans in many respects. In ten years from now, we will have computers as powerful as our brain, and that will really fundamentally change society. Many professional jobs will be done much better by computers. How will that change society? How will that change business? What impacts does that have for science, actually?

There are two big global trends. One is big data. That means in the next ten years we’ll produce as many data, or even more data than in the past 1,000 years. The other trend is hyperconnectivity. That means we have networking our world going on at a rapid pace; we’re creating an Internet of things. So everyone is talking to everyone else, and everything becomes interdependent. What are the implications of that? (…)

But on the other hand, it turns out that we are, at the same time, creating highways for disaster spreading. We see many extreme events, we see problems such as the flash crash, or also the financial crisis. That is related to the fact that we have interconnected everything. In some sense, we have created unstable systems. We can show that many of the global trends that we are seeing at the moment, like increasing connectivity, increase in the speed, increase in complexity, are very good in the beginning, but (and this is kind of surprising) there is a turning point and that turning point can turn into a tipping point that makes the systems shift in an unknown way.

It requires two things to understand our systems, which is social science and complexity science; social science because computers of tomorrow are basically creating artificial social systems. Just take financial trading today, it’s done by the most powerful computers. These computers are creating a view of the environment; in this case the financial world. They’re making projections into the future. They’re communicating with each other. They have really many features of humans. And that basically establishes an artificial society, which means also we may have all the problems that we are facing in society if we don’t design these systems well. The flash crash is just one of those examples that shows that, if many of those components — the computers in this case — interact with each other, then some surprising effects can happen. And in that case, $600 billion were actually evaporating within 20 minutes.

Of course, the markets recovered, but in some sense, as many solid stocks turned into penny stocks within minutes, it also changed the ownership structure of companies within just a few minutes. That is really a completely new dimension happening when we are building on these fully automated systems, and those social systems can show a breakdown of coordination, tragedies of the commons, crime or cyber war, all these kinds of things will happen if we don’t design them right.

We really need to understand those systems, not just their components. It’s not good enough to have wonderful gadgets like smartphones and computers; each of them working fine in separation. Their interaction is creating a completely new world, and it is very important to recognize that it’s not just a gradual change of our world; there is a sudden transition in the behavior of those systems, as the coupling strength exceeds a certain threshold.

Traffic flow in a circle

I’d like to demonstrate that for a system that you can easily imagine: traffic flow in a circle. Now, if the density is high enough, then the following will happen: after some time, although every driver is trying hard to go at a reasonable speed, cars will be stopped by a so-called ‘phantom traffic jam.’ That means smooth traffic flow will break down, no matter how hard the drivers will try to maintain speed. The question is, why is this happening? If you would ask drivers, they would say, “hey, there was a stupid driver in front of me who didn’t know how to drive!” Everybody would say that. But it turns out it’s a systemic instability that is creating this problem.

That means a small variation in the speed is amplified over time, and the next driver has to brake a little bit harder in order to compensate for a delayed reaction. That creates a chain reaction among drivers, which finally stops traffic flow. These kinds of cascading effects are all over the place in the network systems that we have created, like power grids, for example, or our financial markets. It’s not always as harmless as in traffic jams. We’re just losing time in traffic jams, so people could say, okay, it’s not a very serious problem. But if you think about crowds, for example, we have this transition towards a large density of the crowd, then what will happen is a crowd disaster. That means people will die, although nobody wants to harm anybody else. Things will just go out of control. Even though there might be hundreds or thousands of policemen or security forces trying to prevent these things from happening.
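
The systemic instability Helbing describes is easy to reproduce in simulation. The sketch below is a minimal toy, not Helbing's own model or code: it integrates the Bando optimal-velocity car-following model on a ring road, with all parameter values chosen as illustrative assumptions. Every driver obeys the identical rule, and the starting state is perfectly uniform flow except for one car displaced by half a metre; a stop-and-go wave emerges anyway.

    import numpy as np

    # Toy "phantom traffic jam" on a circular road (Bando optimal-velocity
    # model). Illustrative assumptions throughout; not Helbing's code.
    n_cars = 40
    road_length = 1000.0   # metres, so the uniform headway is 25 m
    sensitivity = 1.0      # how quickly drivers adapt their speed (1/s)
    dt = 0.1               # Euler time step (s)

    def optimal_velocity(gap):
        # Desired speed (m/s) given the gap to the car ahead,
        # a commonly used calibration of the Bando model.
        return 16.8 * (np.tanh(0.086 * (gap - 25.0)) + 0.913)

    x = np.linspace(0.0, road_length, n_cars, endpoint=False)    # positions
    v = np.full(n_cars, optimal_velocity(road_length / n_cars))  # uniform flow
    x[0] += 0.5  # a single half-metre perturbation

    for _ in range(20000):  # simulate 2000 seconds
        gap = (np.roll(x, -1) - x) % road_length  # distance to the car ahead
        v = np.maximum(v + sensitivity * (optimal_velocity(gap) - v) * dt, 0.0)
        x = (x + v * dt) % road_length

    print(f"speeds now range from {v.min():.1f} to {v.max():.1f} m/s")

At this density the uniform solution is linearly unstable, so the printed speed range ends up wide (some cars near standstill, others well above the equilibrium speed of about 15 m/s) even though no driver ever misbehaved, which is exactly the point of the passage.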

This is really a surprising behavior of these kinds of strongly-networked systems. The question is, what implication does that have for other network systems that we have created, such as the financial system? There is evidence that the fact that now every bank is interconnected with every other bank has destabilized the system. That means that there is a systemic instability in place that makes it so hard to control, or even impossible to control. We see that the big players, and also regulators, have large difficulties to get control of these systems.  

That tells us something that we need to change our perspective regarding these systems. Those complex systems are not characterized anymore by the properties of their components. But they’re characterized by what is the outcome of the interactions between those components. As a result of those interactions, self-organization is going on in these systems. New emergent properties come up. They can be very surprising, actually, and that means we cannot understand those systems anymore, based on what we see, which is the components.

We need to have new instruments and tools to understand these kinds of systems. Our intuition will not work here. And that is what we want to create: we want to come up with a new information platform for everybody that is bringing together big data with exa-scale computing, with people, and with crowd sourcing, basically connecting the intelligence of the brains of the world.

One component that is going to measure the state of the world is called the Planetary Nervous System. That will measure not just the physical state of the world and the environmental situation, but it is also very important actually that we learn how to measure social capital, such as trust and solidarity and punctuality and these kinds of things, because this is actually very important for economic value generation, but also for social well-being.

Those properties as social capital, like trust, they result from social network interactions. We’ve seen that one of the biggest problems of the financial crisis was this evaporation of trust. It has burned tens of thousands of billion dollars. If we would learn how to stabilize trust, or build trust, that would be worth a lot of money, really. Today, however, we’re not considering the value of social capital. It can happen that we destroyed it or that we exploit it, such as we’ve exploited and destroyed our environment. If we learn how much is the value of social capital, we will start to protect it. Also we’ll take it into account in our insurance policies. Because today, no insurance is taking into account the value of social capital. It’s the material damage that we take into account, but not the social capital. That means, in some sense, we’re underinsured. We’re taking bigger risks than we should.

This is something that we want to learn, how to quantify the fundaments of society, to quantify the social footprint. It means to quantify the implications of our decisions and actions.

The second component, the Living Earth Simulator will be very important here, because that will look at what-if scenarios. It will take those big data generated by the Planetary Nervous System and allow us to look at different scenarios, to explore the various options that we have, and the potential side effects or cascading effects, and unexpected behaviors, because those interdependencies make our global systems really hard to understand. In many cases, we just overlook what would happen if we fix a problem over here: It might have unwanted side effects; in many cases, that is happening in other parts of our world.

We are using supercomputers today in all areas of our development. Like if we are developing a car, a plane or medical tracks or so, supercomputers are being used, also in the financial world. But we don’t have a kind of political or a business flight simulator that helps us to explore different opportunities. I think this is what we can create as our understanding of society progresses. We now have much better ideas of how social coordination comes about, what are the preconditions for cooperation. What are conditions that create conflict, or crime, or war, or epidemic spreading, in the good and the bad sense?

We’re using, of course, viral marketing today in order to increase the success of our products. But at the same time, also we are suffering from a quick spreading of emerging diseases, or of computer viruses, and Trojan horses, and so on. We need to understand these kinds of phenomena, and with the data and the computer power that is coming up, it becomes within reach to actually get a much better picture of these things.

The third component will be the Global Participatory Platform [pdf]. That basically makes those other tools available for everybody: for business leaders, for political decision-makers, and for citizens. We want to create an open data and modeling platform that creates a new information ecosystem that allows you to create new businesses, to come up with large-scale cooperation much more easily, and to lower the barriers for social, political and economic participation.

So these are the three big elements. We’ll furthermore build exploratories of society, of the economy and environment and technology, in order to be able to anticipate possible crises, but also to see opportunities that are coming up. Those exploratories will bring these three elements together. That means the measurement component, the computer simulation component, and the participation, the interactiveness.

In some sense, we're going to create virtual worlds that may look like our real world, copies of our world that allow us to explore policies in advance, or certain kinds of planning in advance. Just to make it a little bit more concrete: we could, for example, check out a new airport or a new city quarter before it's built. Today we have these architectural plans and competitions, and then the most beautiful design wins. But then, in practice, it can happen that it doesn't work so well. People have to stand in queues, or obstruct each other. Many things may not work out as the architect imagined.

What if we basically populated these architectural plans with real people? They could check it out, live there for some months, and see how much they like it; maybe even change the design. That means the people who would use these facilities and live in these new quarters of the city could actually participate in the design of the city. In the same sense, you can scale that up. Just imagine Google Earth or Google Street View filled with people, something like a serious kind of Second Life. Then we would have not just one history; we could check out many possible futures by actually trying out different financial architectures, or different decision rules, or different intellectual property rights, and seeing what happens.

We could even have different virtual planets, with different laws and different cultures and different kinds of societies, and you could choose the planet that you like most. So in some sense, a new age is opening up with almost unlimited resources. We're, of course, still living in a material world in which we have a lot of restrictions, because resources are limited; they're scarce, and there's a lot of competition for them. But information can be multiplied as much as you like. Of course, there is some cost, and also some energy, needed for that, but it's relatively low cost, actually. So we can create almost infinite new possibilities for creativity, for productivity, for interaction. It is extremely interesting that we have a completely new world coming up here, with absolutely new opportunities that need to be checked out.

But now the question is: how will it all work? Or how would you make it work? Because the information systems that we have created are even more complex than our financial system, and we know the financial system is extremely difficult to regulate and to control. How would you want to control an information system of this complexity? I think that cannot be done top-down. We are seeing a trend that complex systems are run in a more and more decentralized way. We're learning, somehow, to use self-organization principles in order to run these kinds of systems. We have seen that in the Internet, and we are seeing it for smart grids, but also for traffic control.

I have been working myself on these new ways of self-control. It's very interesting. Classically, one has tried to optimize traffic flow. That is so demanding that even our fastest supercomputers can't do it in a strict sense in real time, which means one needs to make simplifications. But in principle, what one is trying to do is to impose an optimal traffic light control top-down on the city. The supercomputer is supposed to know what is best for all the cars, and that is imposed on the system.

We have developed a different approach, where we said: given that there is a large degree of variability in the system, the most important aspect is a flexible adaptation to the actual traffic conditions. We came up with a system where the traffic flows control the traffic lights. It turns out this makes much better use of scarce resources, such as space and time. It works better for cars, it works better for public transport, for pedestrians and for bikers, and it's good for the environment as well.
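
A minimal sketch of that general idea, with queues standing in for measured traffic flows (an illustration added here; Helbing and Lämmer's actual controller is more sophisticated, and all names and parameters below are made-up assumptions): each intersection locally gives green to the approach that has built up the larger queue, subject to a minimum green time so the light does not thrash.

import random

MIN_GREEN = 5                                      # steps a phase must be held
ARRIVALS = {"north_south": 0.4, "east_west": 0.3}  # arrival probability per step

def simulate(steps=200, seed=1):
    random.seed(seed)
    queues = {approach: 0 for approach in ARRIVALS}
    green, held = "north_south", 0
    for _ in range(steps):
        # Vehicles arrive at random on each approach.
        for approach, p in ARRIVALS.items():
            if random.random() < p:
                queues[approach] += 1
        # Local rule: the flows control the light. Switch if the other
        # approach has more waiting vehicles and the minimum green time
        # has elapsed.
        other = "east_west" if green == "north_south" else "north_south"
        if held >= MIN_GREEN and queues[other] > queues[green]:
            green, held = other, 0
        # The green approach discharges one vehicle per step.
        if queues[green] > 0:
            queues[green] -= 1
        held += 1
    return queues

print(simulate())   # remaining queue lengths after the run

No central optimizer appears anywhere: each intersection reacts only to what it can measure locally, which is the sense in which the control is self-organized.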

The age of social innovation

There's a new kind of socio-inspired technology coming up now. Society has many wonderful self-organization mechanisms that we can learn from, such as trust, reputation, culture. If we can learn how to implement those in our technological systems, that is worth a lot of money; billions of dollars, actually. We think this is the next step after bio-inspired technology.

The next big step is to focus on society. We’ve had an age of physics; we’re now in an age of biology. I think we are entering the age of social innovation as we learn to make sense of this even bigger complexity of society. It’s like a new continent to discover. It’s really fascinating what now becomes understandable with the availability of Big Data about human activity patterns, and it will open a door to a new future.

What will be very important in order to make sense of the complexity of our information society is to overcome the disciplinary silos of science; to think out of the box. Classically we had social sciences, we had economics, we had physics and biology and ecology, and computer science and so on. Now our project is trying to bring those different fields together, because we're deeply convinced that without this integration of different scientific perspectives, we can no longer make sense of the hyper-connected systems that we have created.

For example, computer science requires complexity science and social science to understand the systems that have been created and will be created. Why is this? Because the dense networking and the complex interaction between the components create self-organization and emergent phenomena in those systems. The flash crash is just one example showing that unexpected things can happen. We know that from many systems.

Complexity theory is very important here, but also social science. And why is that? Because the components of these information and communication systems are becoming more and more human-like. They communicate with each other. They form a picture of the outside world. They project expectations into the future, and they take autonomous decisions. That means that when those computers interact with each other, they create an artificial social system, in some sense.

In the same way, social science will need complexity science and computer science. Social science needs the data that computer science and information communication technology can provide. Now, and even more in the future, those data traces of human activities will eventually allow us to detect patterns, and a kind of laws of human behavior. Only through collaboration with computer science will it be possible to get those data and to make sense of what is actually happening in society. I don't need to mention that there are obviously complex dynamics going on in society; that means complexity science is needed for social science as well.

In the same sense, we could say complexity science needs social science and computer science to become practical; to go a step beyond talking about butterfly effects and chaos and turbulence, and to make sure that the thinking of complexity science will pervade our thinking in the natural, engineering and social sciences and allow us to understand the real problems of our world. That is the essence: we need to bring these different scientific fields together. We have actually succeeded in building up such integrated communities in many countries all over the world, ready to go as soon as money becomes available for that.

Big Data is not a solution per se. Even the most powerful machine-learning algorithm will not be sufficient to make sense of our world, to understand the principles according to which our world is working. This is important to recognize. The great challenge is to marry data with theories, with models. Only then will we be able to make sense of the useful bits of data. It's like finding a needle in a haystack: the more data you have, the more difficult it may be, to a certain extent, to find this needle. And there is the danger of over-fitting, of being distracted by unimportant details. We are certainly already in an age where we're flooded with information, and our attention cannot process all of it. That means there is a danger that this undermines our wisdom, if our attention is attracted by the wrong details of information. So we are confronted with the problem of finding the right institutions, tools and instruments for decision-making.
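
A tiny illustration of the over-fitting danger (an example added here, not Helbing's): fit noisy data drawn from a simple linear law with a low-degree and a high-degree polynomial. The flexible model chases the noise, the "haystack", and typically generalizes worse against the underlying law, the "needle":

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a simple linear law plus noise.
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
y_true = 2.0 * x_test                  # the noise-free law

for degree in (1, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

The degree-12 fit scores better on the training points and typically much worse against the underlying law: more flexibility, applied to more data, does not by itself find the needle.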

The Living Earth Simulator will basically take the data gathered from the Internet, from search requests, and created by sensor networks, and feed it into big computer simulations based on models of social, economic and technological behavior. In this way, we'll be able to look at what-if scenarios. We hope to get a better understanding, for example, of financial systems, and some answers to controversial questions such as: how much leverage is good? Under what conditions is 'naked short-selling' beneficial, and when does it destabilize markets? To what extent is high-frequency trading good, or can it also have side effects? All these kinds of questions are difficult to answer. Or how to deal best with the situation in Europe, where we obviously have trouble in Greece, but also contagion effects on other countries and on the rest of the financial system. It would be very good to have the models and the data that allow us to actually simulate these kinds of scenarios and to take better-informed decisions. (…)

The idea is to have an open platform to create a data and model commons that everybody can contribute to, so people could upload data and models, and others could use them. People would also judge the quality of the data and models, rate them according to their criteria, and point out the criteria according to which they're doing the rating. But in principle, everybody can contribute and everybody can use it. (…)

We also have much better theories that allow us to make sense of those data. We're entering an age in which we can understand society and the economy much better: namely, as complex self-organizing systems.

It will be important in guiding us into the future, because we are creating very powerful systems. The information society will transform our society fundamentally, and we shouldn't just let that happen. We want to understand how it will change our society, what the different paths are that our society may take, and to decide on the one that we want it to take. For that, we need a much better understanding.

Now a lot of social activity data are becoming available through Facebook and Twitter and Google search requests and so on. This is, of course, a huge opportunity for business. Businesses are talking about the new oil, about personal data as a new asset class. There's something like a gold rush going on. That, of course, also holds huge opportunities for science: eventually we can make sense of complex systems such as our society. There are different perspectives on this. They range from people who think that information and communication technologies will eventually create a God's-eye view, systems that make sense of all human activities and the interactions of people, to others who are afraid of a Big Brother emerging.

The question is how to handle that situation. Some people say we don't need privacy in society; society is undergoing a transformation, and privacy is no longer needed. As a social scientist, I don't share this point of view, because public and private are two sides of the same coin: neither can exist without the other. It is very important, for a society to work, to have social diversity. Today we have learned to appreciate biodiversity, and in the same way we need to think about social diversity, because it's a motor of innovation. It's also an important factor in societal resilience. The question now is how all those data that we are creating, and also recommender systems and personalized services, are going to impact people's decision-making behavior, and society overall.

This is what we need to look at now. How is people's behavior changing through these kinds of data? How do people change their behavior when they feel they're being observed? Europe is quite sensitive about privacy. The project we are working on is actually trying to find a balance between the interests in Big Data of companies, of governments, and of individuals. Basically, we want to develop technologies that allow us to find this balance, to make sure that all three perspectives are actually taken into account: that you can do big business, but that at the same time the individual's privacy is respected, and that individuals have more control over their own data, know what is happening with those data, and have influence on what is happening with them. (…)

In some sense, we want to create a new data and model commons, a new kind of language, a new public good that allows people to do new things. (…)

My feeling is that business will actually be made on top of this sea of data that's being created. At the moment, data itself is the valuable resource, right? But in the future it will probably be a cheap resource, or even, to a certain extent, a free resource, if we learn how to deal with open data. The expensive thing will be what we do with the data: that means the algorithms, the models, and the theories that allow us to make sense of the data.”

Dirk Helbing, physicist, and professor of sociology, in particular of modelling and simulation, at ETH Zurich – Swiss Federal Institute of Technology, A New Kind Of Socio-inspired Technology, Edge Conversation, June 19, 2012. (Illustration: WSF)

See also:

☞ Dirk Helbing, New science and technology to understand and manage our complex world in a more sustainable and resilient way (pdf) (presentation), ETH Zurich
Why does nature so consistently organize itself into hierarchies? Living Cells Show How to Fix the Financial System
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Networks tag on Lapidarium notes

Jul
23rd
Mon
permalink

S. Hawking, L. Mlodinow on why there is something rather than nothing, and why the fundamental laws are as we have described them

                         

According to the idea of model-dependent realism, our brains interpret the input from our sensory organs by making a model of the outside world. We form mental concepts of our home, trees, other people, the electricity that flows from wall sockets, atoms, molecules, and other universes. These mental concepts are the only reality we can know. There is no model-independent test of reality. It follows that a well-constructed model creates a reality of its own. An example that can help us think about issues of reality and creation is the Game of Life, invented in 1970 by a young mathematician at Cambridge named John Conway.

The word “game” in the Game of Life is a misleading term. There are no winners and losers; in fact, there are no players. The Game of Life is not really a game but a set of laws that govern a two-dimensional universe. It is a deterministic universe: once you set up a starting configuration, or initial condition, the laws determine what happens in the future.

The world Conway envisioned is a square array, like a chessboard, but extending infinitely in all directions. Each square can be in one of two states: alive (shown in green) or dead (shown in black). Each square has eight neighbors: the up, down, left, and right neighbors and four diagonal neighbors. Time in this world is not continuous but moves forward in discrete steps. Given any arrangement of dead and live squares, the number of live neighbors determines what happens next according to the following laws:

1. A live square with two or three live neighbors survives (survival).
2. A dead square with exactly three live neighbors becomes a live cell (birth).
3. In all other cases a cell dies or remains dead. In the case that a live square has zero or one neighbor, it is said to die of loneliness; if it has more than three neighbors, it is said to die of overcrowding.

That’s all there is to it: given any initial condition, these laws generate generation after generation. An isolated living square, or two adjacent live squares, die in the next generation because they don’t have enough neighbors. Three live squares along a diagonal live a bit longer: after the first time step the end squares die, leaving just the middle square, which dies in the following generation. Any diagonal line of squares “evaporates” in just this manner. But if three live squares are placed horizontally in a row, again the center has two neighbors and survives while the two end squares die, but in this case the cells just above and below the center cell experience a birth. The row therefore turns into a column. Similarly, in the next generation the column turns back into a row, and so forth. Such oscillating configurations are called blinkers.
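
To make the rules concrete, here is a minimal sketch in Python (an illustration added here, not from the book). It represents Conway's unbounded array as a set of live-square coordinates and reproduces the blinker just described:

from collections import Counter

def step(live):
    # One generation of Conway's Game of Life. `live` is the set of
    # (x, y) coordinates of live squares, so the board is unbounded,
    # like Conway's infinite array.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth: a dead square with exactly three live neighbors.
    # Survival: a live square with two or three live neighbors.
    # Everything else dies or stays dead.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # three live squares in a row
print(step(blinker))                   # the column {(1, -1), (1, 0), (1, 1)}
print(step(step(blinker)) == blinker)  # True: back to the original row

The same step function evolves any pattern; only the initial set of coordinates changes.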

If three live squares are placed in the shape of an L, a new behavior occurs. In the next generation the square cradled by the L will give birth, leading to a 2 × 2 block. The block belongs to a pattern type called the still life because it will pass from generation to generation unaltered. Many types of patterns exist that morph in the early generations but soon turn into a still life, or die, or return to their original form and then repeat the process. There are also patterns called gliders, which morph into other shapes and, after a few generations, return to their original form, but in a position one square down along the diagonal. If you watch these develop over time, they appear to crawl along the array. When these gliders collide, curious behaviors can occur, depending on each glider’s shape at the moment of collision.

What makes this universe interesting is that although the fundamental “physics” of this universe is simple, the “chemistry” can be complicated. That is, composite objects exist on different scales. At the smallest scale, the fundamental physics tells us that there are just live and dead squares. On a larger scale, there are gliders, blinkers, and still-life blocks. At a still larger scale there are even more complex objects, such as glider guns: stationary patterns that periodically give birth to new gliders that leave the nest and stream down the diagonal. (…)

If you observed the Game of Life universe for a while on any particular scale, you could deduce laws governing the objects on that scale. For example, on the scale of objects just a few squares across you might have laws such as “Blocks never move,” “Gliders move diagonally,” and various laws for what happens when objects collide. You could create an entire physics on any level of composite objects. The laws would entail entities and concepts that have no place among the original laws. For example, there are no concepts such as “collide” or “move” in the original laws. Those describe merely the life and death of individual stationary squares. As in our universe, in the Game of Life your reality depends on the model you employ.

Conway and his students created this world because they wanted to know if a universe with fundamental rules as simple as the ones they defined could contain objects complex enough to replicate. In the Game of Life world, do composite objects exist that, after merely following the laws of that world for some generations, will spawn others of their kind? Not only were Conway and his students able to demonstrate that this is possible, but they even showed that such an object would be, in a sense, intelligent! What do we mean by that? To be precise, they showed that the huge conglomerations of squares that self-replicate are “universal Turing machines.” For our purposes that means that for any calculation a computer in our physical world can in principle carry out, if the machine were fed the appropriate input—that is, supplied the appropriate Game of Life world environment—then some generations later the machine would be in a state from which an output could be read that would correspond to the result of that computer calculation. (…)

In the Game of Life, as in our world, self-reproducing patterns are complex objects. One estimate, based on the earlier work of mathematician John von Neumann, places the minimum size of a self-replicating pattern in the Game of Life at ten trillion squares—roughly the number of molecules in a single human cell. One can define living beings as complex systems of limited size that are stable and that reproduce themselves.

The objects described above satisfy the reproduction condition but are probably not stable: a small disturbance from outside would probably wreck the delicate mechanism. However, it is easy to imagine that slightly more complicated laws would allow complex systems with all the attributes of life. Imagine an entity of that type, an object in a Conway-type world. Such an object would respond to environmental stimuli, and hence appear to make decisions. Would such life be aware of itself? Would it be self-conscious? This is a question on which opinion is sharply divided. Some people claim that self-awareness is something unique to humans. It gives them free will, the ability to choose between different courses of action.

How can one tell if a being has free will?

If one encounters an alien, how can one tell whether it is just a robot or has a mind of its own? The behavior of a robot would be completely determined, unlike that of a being with free will, so one could in principle detect a robot as a being whose actions can be predicted. (…) This may be impossibly difficult if the being is large and complex. We cannot even solve exactly the equations for three or more particles interacting with each other. Since an alien the size of a human would contain about a thousand trillion trillion particles, even if the alien were a robot it would be impossible to solve the equations and predict what it would do. We would therefore have to say that any complex being has free will—not as a fundamental feature, but as an effective theory, an admission of our inability to do the calculations that would enable us to predict its actions.

The example of Conway’s Game of Life shows that even a very simple set of laws can produce complex features similar to those of intelligent life. There must be many sets of laws with this property. What picks out the fundamental laws (as opposed to the apparent laws) that govern our universe? As in Conway’s universe, the laws of our universe determine the evolution of the system, given the state at any one time. In Conway’s world we are the creators—we choose the initial state of the universe by specifying objects and their positions at the start of the game. (…)

If the total energy of the universe must always remain zero, and it costs energy to create a body, how can a whole universe be created from nothing? That is why there must be a law like gravity. Because gravity is attractive, gravitational energy is negative: One has to do work to separate a gravitationally bound system, such as the earth and moon. This negative energy can balance the positive energy needed to create matter, but it’s not quite that simple. The negative gravitational energy of the earth, for example, is less than a billionth of the positive energy of the matter particles the earth is made of. A body such as a star will have more negative gravitational energy, and the smaller it is (the closer the different parts of it are to each other), the greater this negative gravitational energy will be. But before it can become greater than the positive energy of the matter, the star will collapse to a black hole, and black holes have positive energy. That’s why empty space is stable. Bodies such as stars or black holes cannot just appear out of nothing. But a whole universe can.
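
As a rough check on the “less than a billionth” figure (a back-of-the-envelope estimate added here, not from the book): the earth's gravitational binding energy is of order GM^2/R, while the rest-mass energy of its matter is Mc^2, so their ratio is about

|E_grav| / (M c^2) ~ G M / (R c^2) ≈ (6.7 × 10^-11 × 6.0 × 10^24) / (6.4 × 10^6 × 9.0 × 10^16) ≈ 7 × 10^-10

in SI units, with M and R the earth's mass and radius, which is indeed just under a billionth.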

Because gravity shapes space and time, it allows space-time to be locally stable but globally unstable. On the scale of the entire universe, the positive energy of the matter can be balanced by the negative gravitational energy, and so there is no restriction on the creation of whole universes. Because there is a law like gravity, the universe can and will create itself from nothing. (…) Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.

Why are the fundamental laws as we have described them?

The ultimate theory must be consistent and must predict finite results for quantities that we can measure. We’ve seen that there must be a law like gravity, and we saw in Chapter 5 that for a theory of gravity to predict finite quantities, the theory must have what is called supersymmetry between the forces of nature and the matter on which they act. M-theory is the most general supersymmetric theory of gravity. For these reasons M-theory is the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself. We must be part of this universe, because there is no other consistent model.

M-theory is the unified theory Einstein was hoping to find. The fact that we human beings—who are ourselves mere collections of fundamental particles of nature—have been able to come this close to an understanding of the laws governing us and our universe is a great triumph. But perhaps the true miracle is that abstract considerations of logic lead to a unique theory that predicts and describes a vast universe full of the amazing variety that we see. If the theory is confirmed by observation, it will be the successful conclusion of a search going back more than 3,000 years. We will have found the grand design.”

Stephen Hawking, British theoretical physicist and author, Leonard Mlodinow, The Grand Design, Random House, 2010.

See also:

Stephen Hawking on the universe’s origin
☞ Tim Maudlin, What Happened Before the Big Bang? The New Philosophy of Cosmology
Vlatko Vedral: Decoding Reality: the universe as quantum information
The Concept of Laws. The special status of the laws of mathematics and physics
Raphael Bousso: Thinking About the Universe on the Larger Scales
Lisa Randall on the effective theory

May
27th
Sun
permalink

Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense

       

“At the core of all well-founded belief lies belief that is unfounded.”

Ludwig Wittgenstein, On Certainty, #253, J. & J. Harper Editions, New York, 1969. 

"The value of philosophy is, in fact, to be sought largely in its very uncertainty. The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find that even the most everyday things lead to problems to which only very incomplete answers can be given.

Philosophy, though unable to tell us with certainty what is the true answer to the doubts it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never traveled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.”

Bertrand Russell, The Problems of Philosophy (1912), Cosimo, Inc., 2010, pp. 113-114.

"We say that we have some theories about science. Science is about hypothetico-deductive methods: we have observations, we have data, and the data need to be organized into theories. So then we have theories. These theories are somehow suggested or produced from the data, then checked in terms of the data. Then time passes, we have more data, theories evolve, we throw away a theory, and we find another theory which is better, a better understanding of the data, and so on and so forth. This is the standard idea of how science works, which implies that science is about empirical content; the true, interesting, relevant content of science is its empirical content. Since theories change, the empirical content is the solid part of what science is. Now, there's something disturbing, for me as a theoretical scientist, in all this. I feel that something is missing. Something of the story is missing. I've been asking myself what this missing thing is. (…)

This is particularly relevant today in science, and particularly in physics, because, if I'm allowed to be polemical, in my field, fundamental theoretical physics, we have been failing for 30 years. There hasn't been a major success in theoretical physics in the last few decades, after the standard model. Of course there are ideas. These ideas might turn out to be right. Loop quantum gravity might turn out to be right, or not. String theory might turn out to be right, or not. But we don't know, and for the moment, nature has not said yes in any sense.

I suspect that this might be in part because of the wrong ideas we have about science, and because methodologically we are doing something wrong, at least in theoretical physics, and perhaps also in other sciences.

Anaximander. Changing something in the conceptual structure that we have in grasping reality

Let me tell you a story to explain what I mean. The story is an old one, about my latest, greatest passion outside theoretical physics: an ancient scientist, or so I would say, even if he is often called a philosopher: Anaximander. I am fascinated by this character, Anaximander. I went into understanding what he did, and to me he's a scientist. He did something that is very typical of science, and which shows some aspect of what science is. So what is the story with Anaximander? It's the following, in brief:

Until him, all the civilizations of the planet, everybody around the world, thought that the structure of the world was: the sky over our heads and the earth under our feet. There's an up and a down; heavy things fall from up to down; and that's reality. Reality is oriented up and down: heaven is up and earth is down. Then comes Anaximander and says: no, it is something else. ‘The earth is a finite body that floats in space, without falling, and the sky is not just over our head; it is all around.’

How did he get there? Well, obviously, he looked at the sky: you see things going around, the stars, the heavens, the moon, the planets; everything moves around and keeps turning around us. It's sort of reasonable to think that below us is nothing, so it seems simple to get to this conclusion. Except that nobody else got to this conclusion. In centuries and centuries of ancient civilizations, nobody got there. The Chinese didn't get there until the 17th century, when Matteo Ricci and the Jesuits went to China and told them, in spite of centuries of an Imperial Astronomical Institute studying the sky. The Indians only learned this when the Greeks arrived to tell them. The Africans, the peoples of America and Australia… nobody else got to this simple realization that the sky is not just over our head; it's also under our feet. Why?

Because obviously it's easy to suggest that the earth sort of floats in nothing, but then you have to answer the question: why doesn't it fall? The genius of Anaximander was to answer this question. We know his answer from Aristotle and from other people. In fact, he doesn't answer the question; he questions the question. He says: why should it fall? Things fall toward the earth. Why should the earth itself fall? In other words, he realizes that the obvious generalization, from every small heavy object falling to the earth itself falling, might be wrong. He proposes an alternative, which is that objects fall towards the earth, which means that the direction of falling changes around the earth.

This means that up and down become notions relative to the earth. This is rather simple for us to figure out now, because we've learned the idea. But think of the difficulty, when we were children, of understanding how people in Sydney could live upside-down: it clearly requires changing something structural in the basic language in terms of which we understand the world. In other words, up and down mean something different before and after Anaximander's revolution.

He understands something about reality essentially by changing something in the conceptual structure with which we grasp reality. In doing so, he is not just making a theory; he understands something which, in some precise sense, is forever. It's an uncovered truth, and to a large extent a negative truth: he frees us from a prejudice, a prejudice that was ingrained in the conceptual structure we had for thinking about space.

Why do I think this is interesting? Because I think this is what happens at every major step, at least in physics; in fact, I think it is what happens at every step, even the minor ones. When I give a thesis problem to students, most of the time the problem I give is not solved. It's not solved because, most of the time, the solution doesn't come from answering the question; it comes from questioning the question itself, from realizing that in the way the problem was formulated there was some implicit prejudice or assumption, and that this was what had to be dropped.

If this is so, then the idea that we have data and theories, and a rational agent who constructs theories from the data using his rationality, his mind, his intelligence, his conceptual structure, and who juggles theories and data, doesn't make any sense, because what is being challenged at every step is not the theory; it's the conceptual structure used in constructing the theories and interpreting the data. In other words, it's not by changing theories that we go ahead, but by changing the way we think about the world.

The prototype of this way of thinking, I think the example that makes it more clear, is Einstein's discovery of special relativity. On the one hand there was Newtonian mechanics, which was extremely successful with its empirical content. On the other hand there was Maxwell’s theory, with its empirical content, which was extremely successful, too. But there was a contradiction between the two.

If Einstein had gone to school to learn what science is, if he had read Kuhn and the philosophers explaining what science is, if he had been any one of my colleagues today who are looking for a solution to the big problems of physics, what would he do?

He would say: okay, the empirical content is the strong part of the theory. The idea in classical mechanics that velocity is relative: forget about it. The Maxwell equations: forget about them. Because this is the volatile part of our knowledge. The theories themselves have to be changed, okay? What we keep solid is the data, and we modify the theory so that it makes sense coherently, and coherently with the data.

That's not at all what Einstein does. Einstein does the contrary. He takes the theories very seriously. He believes the theories. He says: look, classical mechanics is so successful that when it says that velocity is relative, we should take that seriously and believe it. And the Maxwell equations are so successful that we should believe the Maxwell equations. He has so much trust in the theories themselves, in their qualitative content (the qualitative content that Kuhn says changes all the time, and that we have learned not to take too seriously), so much faith and confidence in it, that he's ready to do what? To force coherence between the two theories by challenging something completely different, something that is in our head: how we think about time.

He's changing something in common sense, something about the elementary structure in terms of which we think of the world, on the basis of trust in the past results of physics. This is exactly the opposite of what is done today in physics. If you read Physical Review today, it's all about theories that completely and deeply challenge the content of previous theories: theories in which there is no Lorentz invariance, which are not relativistic, which are not generally covariant, in which quantum mechanics might be wrong…

Every physicist today is immediately ready to say: okay, all of our past knowledge about the world is wrong; let's randomly pick some new idea. I suspect that this is not a small component of the long-term lack of success of theoretical physics. You understand something new about the world either from new data that arrive, or from thinking deeply about what we have already learned about the world. But thinking also means accepting what we've learned, challenging what we think, and knowing that in some of the things we think there may be something to modify and to change.

Science is not about the data, but about the tools that we use

What, then, are the aspects of doing science that I think are under-evaluated and should come up front? First, science is about constructing visions of the world, about rearranging our conceptual structure, about creating new concepts which were not there before, and, even more, about changing and challenging the a priori assumptions that we have. So it has nothing to do with the mere assembly of data and ways of organizing that assembly. It has everything to do with the way we think, and with our mental vision of the world. Science is a process in which we keep exploring ways of thinking, and changing our image of the world, our vision of the world, to find new ones that work a little bit better.

In doing that, what we have learned in the past is our main ingredient, especially the negative things we have learned. If we have learned that the earth is not flat, there will be no theory in the future in which the earth is ‘flat.’ If we have learned that the earth is not at the center of the universe, that's forever. We're not going to go back on this. If we have learned, with Einstein, that simultaneity is relative, we're not going back to absolute simultaneity, as many people think. This means that when an experiment measures neutrinos going faster than light, we should be very suspicious, and of course check and see whether there is something very deep happening. But it is absurd that everybody jumps and says okay, Einstein was wrong, just because of a little anomaly that seems to show so. It never works like that in science.

The past knowledge is always with us, and it’s our main ingredient for understanding. The theoretical ideas which are based on ‘let’s imagine that this may happen because why not’ are not taking us anywhere.

I seem to be saying two things that contradict each other: on the one hand, we trust our knowledge; on the other hand, we are always ready to modify, in depth, part of our conceptual structure about the world. There is no contradiction between the two, because the idea of a contradiction comes from what I see as the deepest misunderstanding about science: the idea that science is about certainty.

Science is not about certainty. Science is about finding the most reliable way of thinking at the present level of knowledge. Science is extremely reliable; it's not certain. In fact, not only is it not certain, but it's the lack of certainty that grounds it. Scientific ideas are credible not because they are sure, but because they are the ones that have survived all the possible past critiques, and they are the most credible because they were put on the table for everybody's criticism.

The very expression ‘scientifically proven’ is a contradiction in terms. There is nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, that we have prejudices, ingrained prejudices. In our conceptual structure for grasping reality, there might be something inappropriate, something we may have to revise in order to understand better. So at any moment we have a vision of reality that is effective, that is good, that is the best we have found so far; it is the most credible we have found so far, and it's mostly correct.

But at the same time it's not taken as certain, and any element of it is a priori open for revision. Why do we have this continuous…? On the one hand, we have this brain, which has evolved over millions of years. It has evolved for running in the savannah, for running after deer to eat them, and for trying not to be eaten by lions. We have a brain that is tuned to meters and hours, and not particularly well-tuned to think about atoms and galaxies. So we have to get out of that.

At the same time, I think we have been selected, in going out of the forest, perhaps in going out of Africa, for being as smart as possible, as animals that escape lions. This continuous effort to change our own way of thinking, to readapt, is very much part of our nature. We are not changing our mind away from nature; it is our natural history itself that continues to change it.

If I can make a final comment about this way of thinking about science, or two final comments: one is that science is not about the data. The empirical content of a scientific theory is not what is relevant. The data serve to suggest the theory, to confirm the theory, to disconfirm the theory, to prove the theory wrong. But these are the tools that we use. What interests us is the content of the theory, what the theory says about the world. General relativity says space-time is curved. The data of general relativity are that the perihelion of Mercury moves an extra 43 arcseconds per century with respect to the value computed with Newtonian mechanics.

Who cares? Who cares about these details? If that was the content of general relativity, general relativity would be boring. General relativity is interesting not because of its data, but because it tells us that as far as we know today, the best way of conceptualizing space-time is as a curved object. It gives us a better way of grasping reality than Newtonian mechanics, because it tells us that there can be black holes, because it tells us there’s a Big Bang. This is the content of the scientific theory.

All living beings on earth have common ancestors. That is content of the scientific theory, not the specific data used to check the theory. So the focus of scientific thinking, I believe, should be on the content of the theories, the past theories, the previous theories: to try to see what they tell us concretely, and what they suggest we should change in our conceptual frame itself.

Scientific thinking vs religious thinking

The final consideration is just one comment about this understanding of science and the long conflict, across the centuries, between scientific thinking and religious thinking. I think it is often misunderstood. The question is: why can't we live happily together, and why can't people pray to their gods and study the universe without this continuous clash? I think the clash is a little bit unavoidable, for the opposite reason from the one often presented. It's unavoidable not because science pretends to know the answers; it's the other way around: if scientific thinking is this, then it is a constant reminder to ourselves that we don't know the answers.

In religious thinking, this is often unacceptable. What is unacceptable is not a scientist who says ‘I know,’ but a scientist who says ‘I don't know, and how could you know?’ At least many religions, or some ways of being religious, are based on the idea that there should be a truth that one can hold and that is not to be questioned. That way of thinking is naturally disturbed by a way of thinking based on continuous revision, not just of theories, but even of the core ground of the way in which we think.

The core of science is not certainty, it’s continuous uncertainty

So, summarizing: I think science is not about data; it's not about the empirical content; it's about our vision of the world. It's about overcoming our own ideas and continuously going beyond common sense. Science is a continuous challenge to common sense, and the core of science is not certainty; it's continuous uncertainty. I would even say it is the joy of taking what we think, being aware that in everything we think there is probably still an enormous amount of prejudice and mistakes, and trying to learn to look a little bit wider, knowing that there is always a larger point of view to be expected in the future.

We are very far from the final theory of the world, in my field, in physics; I think extremely far. Every hope of saying 'well, we are almost there, we've solved all the problems' is nonsense. And we are very wrong when we discard the value of theories like quantum mechanics, general relativity or special relativity, for that matter, and throw them away, trying something else randomly. On the basis of what we know, we should learn something more; and at the same time we should take our vision for what it is, the best vision that we have, and then keep evolving that vision. (…)

String theory is a beautiful theory. It might work, but I suspect it's not going to work, because it's not sufficiently grounded in everything we know so far about the world, and especially in what I perceive as the main physical content of general relativity.

String theory is a big guesswork. I think physics has never been guesswork; it has been a way of unlearning how to think about something, and learning how to think about it a little bit differently, by reading the novelty in the details of what we already know. Copernicus didn't have any new data or any major new idea; he just took Ptolemy and, in the details of Ptolemy, in the fact that the equants, the epicycles and the deferents were in certain proportions to one another, he read the way to look at the same construction from a slightly different perspective and to discover that the earth is not the center of the universe.

Einstein, as I said, took Maxwell's theory and classical mechanics seriously to get special relativity. So loop quantum gravity is an attempt to do the same thing: take general relativity seriously, take quantum mechanics seriously, and bring them together, even if this means a theory with no fundamental time, so that we have to rethink the world without basic time. The theory, on the one hand, is very conservative, because it's based on what we know. But it's totally radical, because it forces us to change something big in our way of thinking.

String theorists think differently. They say: well, let's go out to infinity, where somehow the full covariance of general relativity is not there. There we know what time is, and we know what space is, because we're at asymptotic distances, at large distances. The theory is wilder, more different, newer; but in my opinion it's more based on the old conceptual structure. It's attached to the old conceptual structure, and not to the novel content of the theories that have proven empirically successful. That's how my way of reading science matches the specifics of the research work that I do, and specifically of loop quantum gravity.

Of course, we don't know. I want to be very clear. I think string theory is a great attempt to go ahead, made by great people. My only polemical attitude toward string theory is when I hear (though I hear it less and less now) 'oh, we know the solution already; certainly it's string theory.' That's certainly wrong and false. What is true is that it is a good set of ideas; loop quantum gravity is another good set of ideas. We have to wait and see which of the theories turns out to work, and ultimately to be empirically confirmed.

Should a scientist think about philosophy, or not?

This may take me to another point: should a scientist think about philosophy, or not? It's sort of the fashion today to discard philosophy, to say: now we have science, we don't need philosophy. I find this attitude very naïve, for two reasons. One is historical. Just look back: Heisenberg would never have done quantum mechanics without being full of philosophy. Einstein would never have done relativity without having read all the philosophers and having a head full of philosophy. Galileo would never have done what he did without having a head full of Plato. Newton thought of himself as a philosopher; he started by arguing with Descartes and had strong philosophical ideas.

And even Maxwell, Boltzmann: all the major steps of science in the past were made by people who were very aware of the methodological, fundamental, even metaphysical questions being posed. When Heisenberg does quantum mechanics, he is in a completely philosophical frame of mind. He says that in classical mechanics there's something philosophically wrong: not enough emphasis on empiricism. It is exactly this philosophical reading that allows him to construct that fantastically new physical theory, that scientific theory, which is quantum mechanics.

             
Paul Dirac and Richard Feynman. From The Strangest Man. Photograph by A. John Coleman, courtesy AIP Emilio Segre Visual Archives, Physics Today collection

The divorce between philosophers and scientists, the end of this strict dialogue, is very recent; it came somehow after the war, in the second half of the 20th century. It has worked because in the first half of the 20th century people were so smart. Einstein and Heisenberg and Dirac and company put together relativity and quantum theory and did all the conceptual work. The physics of the second half of the century has been, in a sense, a physics of application of the great ideas of the people of the '30s, of the Einsteins and the Heisenbergs.

When you want to apply these ideas, when you do atomic physics, you need less conceptual thinking. But now we are back to the basics, in a sense: when we do quantum gravity, it's not just application. As for the scientists who say 'I don't care about philosophy': it's not true that they don't care about philosophy, because they have a philosophy. They are using a philosophy of science. They are applying a methodology. They have a head full of ideas about the philosophy they're using; they're just not aware of them, and they take them for granted, as if this were obvious and clear, when it's far from obvious and clear. They are taking a position without knowing that there are many other possibilities around that might work much better, and might be more interesting for them.

I think there is narrow-mindedness, if I may say so, in many of my colleague scientists who don't want to learn what is being said in the philosophy of science. There is also narrow-mindedness in probably a lot of areas of philosophy and the humanities, where people don't want to learn about science, which is even more narrow-minded. Somehow, cultures should reach out to one another and enlarge themselves. I'm pushing at an open door if I say it here, but restricting our vision of reality today to just the core content of science, or to just the core content of the humanities, is being blind to the complexity of reality, which we can grasp from a number of points of view. These points of view talk to one another enormously, and I believe they can teach one another enormously.”

Carlo Rovelli, Italian theoretical physicist working on quantum gravity and on the foundations of spacetime physics. He is a professor of physics at the University of the Mediterranean in Marseille, France, and a member of the Institut Universitaire de France. To see the whole video and read the transcript, click Science Is Not About Certainty: A Philosophy Of Physics, Edge, May 24, 2012. (Illustration source)

See also:

Raphael Bousso: Thinking About the Universe on the Larger Scales
David Deutsch: A new way to explain explanation
Galileo and the relationship between the humanities and the sciences
The Relativity of Truth - a brief résumé, Lapidarium notes
Philosophy vs science: which can answer the big questions of life?
☞ ‘Cognition, perception, relativity’ tag on Lapidarium notes

Apr
15th
Sun
permalink

How liberal and conservative brains are wired differently. Liberals and conservatives don’t just vote differently, they think differently

           

"There’s now a large body of evidence showing that those who opt for the political left and those who opt for the political right tend to process information in divergent ways and to differ on any number of psychological traits.

Perhaps most important, liberals consistently score higher on a personality measure called “openness to experience,” one of the “Big Five” personality traits, which are easily assessed through standard questionnaires. That means liberals tend to be the kind of people who want to try new things, including new music, books, restaurants and vacation spots — and new ideas.

“Open people everywhere tend to have more liberal values,” said psychologist Robert McCrae, who conducted voluminous studies on personality while at the National Institute on Aging at the National Institutes of Health.

Conservatives, in contrast, tend to be less open — less exploratory, less in need of change — and more “conscientious,” a trait that indicates they appreciate order and structure in their lives. This gels nicely with the standard definition of conservatism as resistance to change — in the famous words of William F. Buckley Jr., a desire to stand “athwart history, yelling ‘Stop!’ ” (…)

We see the consequences of liberal openness and conservative conscientiousness everywhere — and especially in the political battle over facts. (…)

Compare this with a different irrationality: refusing to admit that humans are a product of evolution, a chief point of denial for the religious right. In a recent poll, just 43 percent of tea party adherents accepted the established science here. Yet unlike the vaccine issue, this denial is anything but new and trendy; it is well over 100 years old. The state of Tennessee is even hearkening back to the days of the Scopes “Monkey” Trial, more than 85 years ago. It just passed a bill that will weaken the teaching of evolution.

Such are some of the probable consequences of openness, or the lack thereof. (…)

Now consider another related trait implicated in our divide over reality: the “need for cognitive closure.” This describes discomfort with uncertainty and a desire to resolve it into a firm belief. Someone with a high need for closure tends to seize on a piece of information that dispels doubt or ambiguity, and then freeze, refusing to consider new information. Those who have this trait can also be expected to spend less time processing information than those who are driven by different motivations, such as achieving accuracy.

A number of studies show that conservatives tend to have a greater need for closure than do liberals, which is precisely what you would expect in light of the strong relationship between liberalism and openness. “The finding is very robust,” explained Arie Kruglanski, a University of Maryland psychologist who has pioneered research in this area and worked to develop a scale for measuring the need for closure.

The trait is assessed based on responses to survey statements such as “I dislike questions which could be answered in many different ways” and “In most social conflicts, I can easily see which side is right and which is wrong.” (…)

Anti-evolutionists have been found to score higher on the need for closure. And in the global-warming debate, tea party followers not only strongly deny the science but also tend to say that they “do not need any more information” about the issue.

I’m not saying that liberals have a monopoly on truth. Of course not. They aren’t always right; but when they’re wrong, they are wrong differently.

When you combine key psychological traits with divergent streams of information from the left and the right, you get a world where there is no truth that we all agree upon. We wield different facts, and hold them close, because we truly experience things differently. (…)”

Chris Mooney, science and political journalist, author of four books, including the New York Times bestselling The Republican War on Science and the forthcoming The Republican Brain: The Science of Why They Deny Science and Reality (April 2012), Liberals and conservatives don’t just vote differently. They think differently, The Washington Post, April 13, 2012. (Illustration: Koren Shadmi for The Washington Post)

See also:

Political science: why rejecting expertise has become a campaign strategy, Lapidarium notes
Cognitive and Social Consequences of the Need for Cognitive Closure, European Review of Social Psychology
☞ Antonio Chirumbolo, The relationship between need for cognitive closure and political orientation: the mediating role of authoritarianism, Department of Social and Developmental Psychology, University of Rome ‘La Sapienza’
Paul Nurse, Stamp out anti-science in US politics, New Scientist, 14 Sept 2011
☞ Chris Mooney, Why Republicans Deny Science: The Quest for a Scientific Explanation, The Huffington Post, Jan 11, 2012
☞ John Allen Paulos, Why Don’t Americans Elect Scientists?, NYTimes, Feb 13, 2012.
Study: Conservatives’ Trust in Science Has Fallen Dramatically Since Mid-1970s, American Sociological Association, March 29, 2012.
Why people believe in strange things, Lapidarium notes

Mar
26th
Mon
permalink

Science historian George Dyson: Unravelling the digital code
George Dyson (Photo: Wired)

"It was not made for those who sell oil or sardines."

— G. W. Leibniz, ca. 1674, on his calculating machine

A universe of self-replicating code

Digital organisms, while not necessarily any more alive than a phone book, are strings of code that replicate and evolve over time. Digital codes are strings of binary digits — bits. Google is a fantastically large number, so large it is almost beyond comprehension, distributed and replicated across all kinds of hosts. When you click on a link, you are replicating the string of code that it links to. Replication of code sequences isn’t life, any more than replication of nucleotide sequences is, but we know that it sometimes leads to life.
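To make "strings of code that replicate" concrete, here is the classic toy case, a quine: a program whose output is its own source text. (A minimal illustration, not an example from Dyson.)

    # A quine: running this two-line Python program prints its own source.
    s = 's = %r\nprint(s %% s)'
    print(s % s)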

Q [Kevin Kelly]: Are we in that digital universe right now, as we talk on the phone?

George Dyson: Sure. You’re recording this conversation using a digital recorder — into an empty matrix of addresses on a microchip that is being filled up at 44 kilobytes per second. That address space full of numbers is the digital universe.

Q: How fast is this universe expanding?

G.D.: Like our own universe at the beginning, it’s more exploding than expanding. We’re all so immersed in it that it’s hard to perceive. Last time I checked, the digital universe was expanding at the rate of five trillion bits per second in storage and two trillion transistors per second on the processing side. (…)

Q: Where is this digital universe heading?

G.D.: This universe is open to the evolution of all kinds of things. It’s cycling faster and faster. Even with Google and YouTube and Facebook, we can’t consume it all. And we aren’t aware what this space is filling up with. From a human perspective, computers are idle 99 per cent of the time. While they’re waiting for us to come up with instructions, computation is happening without us, as computers write instructions for each other. As Turing showed, this space can’t be supervised. As the digital universe expands, so does this wild, undomesticated side.”

— George Dyson interviewed by Kevin Kelly in Science historian George Dyson: Unravelling the digital code, Wired, Mar 5, 2012.

"Just as we later worried about recombinant DNA, what if these things escaped? What would they do to the world? Could this be the end of the world as we know it if these self-replicating numerical creatures got loose?

But, we now live in a world where they did get loose—a world increasingly run by self-replicating strings of code. Everything we love and use today is, in a lot of ways, self-reproducing exactly as Turing, von Neumann, and Barricelli prescribed. It’s a very symbiotic relationship: the same way life found a way to use the self-replicating qualities of these polynucleotide molecules to the great benefit of life as a whole, there’s no reason life won’t use the self-replicating abilities of digital code, and that’s what’s happening. If you look at what people like Craig Venter and the thousand less-known companies are doing, we’re doing exactly that, from the bottom up. (…)

What’s missing in today’s world, in a way, is more biology of the Internet: more people like Nils Barricelli to go out and look at what’s going on, not from a business or legal point of view, but just to observe what’s going on.

Many of these things we read about in the front page of the newspaper every day, about what’s proper or improper, or ethical or unethical, really concern this issue of autonomous self-replicating codes. What happens if you subscribe to a service and then as part of that service, unbeknownst to you, a piece of self-replicating code inhabits your machine, and it goes out and does something else? Who is responsible for that? And we’re in an increasingly gray zone as to where that’s going. (…)

Why is Apple one of the world’s most valuable companies? It’s not only because their machines are so beautifully designed, which is great and wonderful, but because those machines represent a closed numerical system. And they’re making great strides in expanding that system. It’s no longer at all odd to have a Mac laptop. It’s almost the normal thing.

But I’d like to take this to a different level, if I can change the subject… Ten or 20 years ago I was preaching that we should look at digital code as biologists: the Darwin Among the Machines stuff. People thought that was crazy, and now it’s firmly the accepted metaphor for what’s going on. And Kevin Kelly quoted me in Wired; he asked me for my last word on what companies should do about this. And I said, “Well, they should hire more biologists.”

But what we’re missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products: the universe of Apple, the universe of Google, the universe of Facebook, as if these collectively constitute the digital universe, and we can only see it in human terms, asking what it does for us.

We’re missing a tremendous opportunity. We’re asleep at the switch because it’s not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that’s not just a metaphor for something else. It actually is. It’s a physical reality.

We’re still here at the big bang of this thing, and we’re not studying it enough. Who’s the cosmologist really looking at this in terms of what it might become in 10,000 years? What’s it going to be in 100 years? Here we are at the very beginning and we just may simply not be asking the right questions about what’s going on. Try looking at it from the other side, not from our side as human beings. Scientists are the people who can do that kind of thing. You can look at viruses from the point of view of a virus, not from the point of view of someone getting sick.

Very few people are looking at this digital universe in an objective way. Danny Hillis is one of the few people who is. His comment, made exactly 30 years ago in 1982, was that “memory locations are just wires turned sideways in time.” That’s just so profound. That should be engraved on the wall. Because we don’t realize that there is this very different universe that does not have the same physics as our universe. It’s completely different physics. Yet, from the perspective of that universe, there is physics, and we have almost no physicists looking at what it’s like. And if we want to understand the sort of organisms that would evolve in that totally different universe, you have to understand the physics of the world they are in. It’s like looking for life on another planet. Danny has that perspective. Most people just say, “well, a wire is a wire. It’s not a memory location turned sideways in time.” You have to have that sort of relativistic view of things.

We are still so close to the beginning of this explosion that we are still immersed in the initial fireball. Yet consider how much has changed in that short period: not long ago, to transfer money electronically you had to fill out paper forms on both ends and then wait a day for your money to be transferred. And within a very few years, a dozen years or so, most of the money in the world came to be moving electronically all the time.

The best example of this is what we call the flash crash of May 6th, two years ago, when suddenly, the whole system started behaving unpredictably. Large amounts of money were lost in milliseconds, and then the money came back, and we quietly (although the SEC held an investigation) swept it under the rug and just said, “well, it recovered. Things are okay.” But nobody knows what happened, or most of us don’t know.

There was a great Dutch documentary—Money and Speed: Inside the Black Box—where they spoke to someone named Eric Scott Hunsader who actually had captured the data on a much finer time scale, and there were all sorts of very interesting things going on. But it’s happening so quickly that it’s below what our normal trading programs are able to observe; they just aren’t accounting for those very fast things. And this could be happening all around us—not just in the world of finance. We would not necessarily even perceive it, that there’s a whole world of communication that’s not human communication. It’s machines communicating with machines. And they may be communicating money, or information that has other meaning—but if it is money, we eventually notice it. It’s just the small warm pond sitting there waiting for the spark.

It’s an unbelievably interesting time to be a digital biologist or a digital physicist, or a digital chemist. A good metaphor is chemistry. We’re starting to address code by template, rather than by numerical location—the way biological molecules do.
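A minimal sketch of the contrast being drawn here; the hashing scheme is an assumption, just one simple way to address by template rather than by numerical location.

    import hashlib

    memory = ["GATTACA", "print('hi')", "0xdeadbeef"]

    # Addressing by numerical location: take whatever sits at index 1.
    by_location = memory[1]

    # Addressing by template: the "address" is a digest of the content itself,
    # so data is retrieved by what it is, not by where it sits.
    store = {hashlib.sha256(s.encode()).hexdigest(): s for s in memory}
    key = hashlib.sha256("print('hi')".encode()).hexdigest()
    by_template = store[key]

    assert by_location == by_template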

We’re living in a completely different world. The flash crash was an example: you could have gone out for a cup of coffee and missed the whole thing, and come back and your company lost a billion dollars and got back 999 million, while you were taking your lunch break. It just happened so fast, and it spread so quickly.

So, yes, the fear scenario is there, that some malevolent digital virus could bring down the financial system. But on the other hand, the miracle of this flash crash was not that it happened, but that it recovered so quickly. Yet, in those milliseconds, somebody made off with a lot of money. We still don’t know who that was, and maybe we don’t want to know.

The reason we’re here today (surrounded by this expanding digital universe) is because in 1936, or 1935, this oddball 23-year-old undergraduate student, Alan Turing, developed this theoretical framework to understand a problem in mathematical logic, and the way he solved that problem turned out to establish the model for all this computation. And I believe we would have arrived here, sooner or later, without Alan Turing or John von Neumann, but it was Turing who developed the one-dimensional model, and von Neumann who developed the two-dimensional implementation, for this increasingly three-dimensional digital universe in which everything we do is immersed. And so, the next breakthrough in understanding will also I think come from some oddball. It won’t be one of our great, known scientists. It’ll be some 22-year-old kid somewhere who makes more sense of this.

But, we’re going back to biology, and of course, it’s impossible not to talk about money, and all these other ways that this impacts our life as human beings. What I was trying to say is that this digital universe really is so different that the physics itself is different. If you want to understand what types of life-like or self-reproducing forms would develop in a universe like that, you actually want to look at the sort of physics and chemistry of how that universe is completely different from ours. An example is how not only its time scale but how time operates is completely different, so that things can be going on in that world in microseconds that suddenly have a real effect on ours.

Again, money is a very good example, because money really is a sort of a gentlemen’s agreement to agree on where the money is at a given time. Banks decide, well, this money is here today and it’s there tomorrow. And when it’s being moved around in microseconds, you can have a collapse, where suddenly you hit the bell and you don’t know where the money is. And then everybody’s saying, “Where’s the money? What happened to it?” And I think that’s what happened. And there are other recent cases where it looks like a huge amount of money just suddenly disappeared, because we lost the common agreement on where it is at an exact point in time. We can’t account for those time periods as accurately as the computers can.

One number that’s interesting, and easy to remember: in the year 1953, there were 53 kilobytes of high-speed memory on planet earth. This is random-access high-speed memory. Now you can buy those 53 kilobytes for an immeasurably small sum, a thousandth of one cent or something. If you draw the graph, it’s a very nice, clean graph. That’s sort of Moore’s Law: it keeps doubling, with a doubling time that’s surprisingly short, and no end in sight, no matter what the technology does. We’re doubling the number of bits in an extraordinarily short time.
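The compounding is easy to sketch; the 1.5-year doubling time below is an illustrative assumption, not Dyson's figure.

    # Back-of-the-envelope compounding from the 1953 figure.
    start_bytes = 53 * 1024            # 53 kilobytes of high-speed memory in 1953
    doubling_time_years = 1.5          # assumed for illustration
    doublings = (2012 - 1953) / doubling_time_years
    total = start_bytes * 2 ** doublings
    print(f"about {doublings:.0f} doublings -> roughly {total:.1e} bytes")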

And we have never seen that. Or I mean, we have seen numbers like that, in epidemics or chain reactions, and there’s no question it’s a very interesting phenomenon. But still, it’s very hard not to just look at it from our point of view. What does it mean to us? What does it mean to my investments? What does it mean to my ability to have all the music I want on my iPhone? That kind of thing. But there’s something else going on. We’re seeing a fraction of one percent of it, and there’s this other 99.99 percent that people just aren’t looking at.

The beginning of this was driven by two problems. The problem of nuclear weapons design, and the problem of code breaking were the two drivers of the dawn of this computational universe. There were others, but those were the main ones.

What’s the driver today? You want one word? It’s advertising. And, you may think advertising is very trivial, and of no real importance, but I think it’s the driver. If you look at what most of these codes are doing, they’re trying to get the audience, trying to deliver the audience. The money is flowing as advertising.

And it is interesting that Samuel Butler imagined all this in 1863, and then again in his book Erewhon. Then in 1901, before he died, he wrote a draft for “Erewhon Revisited.” In it, he called out advertising, saying that advertising would be the driving force of these machines evolving and taking over the world. Even then, at the close of nineteenth-century England, he saw advertising as the way we would grant power to the machines.

If you had to say what’s the most powerful algorithm set loose on planet earth right now? Originally, yes, it was the Monte Carlo code for doing neutron calculations. Now it’s probably the AdWords algorithm. And the two are related: if you look at the way AdWords works, it is a Monte Carlo process. It’s a sort of statistical sampling of the entire search space, and a monetizing of it, which as we know, is a brilliant piece of work. And that’s not to diminish all the other great codes out there.
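For readers unfamiliar with the term: a Monte Carlo process estimates a quantity by random sampling rather than by enumerating the whole space. A toy version with made-up numbers (this is not Google's code, just the sampling idea):

    import random

    # Estimate a rare-event rate by sampling instead of enumeration.
    def payoff(x):
        return 1.0 if x < 0.03 else 0.0      # a made-up 3% "click" rate

    samples = [payoff(random.random()) for _ in range(100_000)]
    print(sum(samples) / len(samples))       # converges toward 0.03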

We live in a world where we measure numbers of computers in billions, and numbers of what we call servers, the equivalent of what in the old days would have been called mainframes. Those are in the millions, hundreds of millions.

Two of the pioneers of this—to single out only two—were John von Neumann and Alan Turing. If they were here today, Turing would be 100 and von Neumann would be 109. I think they would understand what’s going on almost immediately—it might take them a few minutes, if not a day, to figure out what was happening. They both died working on biology, and I think they would be immediately fascinated by the way biological code and digital code are now intertwined. Von Neumann’s consuming passion at the end was self-reproducing automata, and Alan Turing was interested in the question of how molecules could self-organize to produce organisms.

They would be, on the other hand, astonished that we’re still running their machines, that we don’t have different computers. We’re still just running the straight von Neumann/Turing machine with no real modification. So they might not find our computers all that interesting, but they would be diving into the architecture of the Internet and looking at it.

In both cases, they would be amazed by the direct connection between the code running on computers and the code running in biology—that all these biotech companies are directly reading and writing nucleotide sequences in and out of electronic memory, with almost no human intervention. That’s more or less completely mechanized now, so there’s direct translation, and once you translate to nucleotides, it’s a small step, a difficult step, but, an inevitable step to translate directly to proteins. And that’s Craig Venter’s world, and it’s a very, very different world when we get there.

The question of how and when humans are going to expand into the universe, the space travel question, is, in my view, almost rendered obsolete by this growth of a digitally-coded biology, because those digital organisms—maybe they don’t exist now, but as long as the system keeps going, they’re inevitable—can travel at the speed of light. They can propagate. They’re going to be so immeasurably far ahead that maybe humans will be dragged along with it.

But while our digital footprint is propagating at the speed of light, we’re having very big trouble even getting to the eleven kilometers per second it takes to get into low Earth orbit. The digital world is clearly winning on that front. And that’s for the distant future. But it changes the game of launching things if you no longer have to launch physical objects in order to transmit life.”

George Dyson, author and historian of technology whose publications broadly cover the evolution of technology in relation to the physical environment and the direction of society, A universe of self-replicating code, Edge, Mar 26, 2012.

See also:

Jameson Dungan on information and synthetic biology
Vlatko Vedral: Decoding Reality: the universe as quantum information
Rethinking “Out of Africa”: A Conversation with Christopher Stringer (2011)
A Short Course In Synthetic Genomics, The Edge Master Class with George Church & Craig Venter (2009)
Eat Me Before I Eat You! A New Foe For Bad Bugs: A Conversation with Kary Mullis (2010)
Mapping The Neanderthal Genome. A Conversation with Svante Pääbo (2009)
“Engineering Biology”: A Conversation with Drew Endy (2008)
☞ “Life: A Gene-Centric View”: A Conversation in Munich with Craig Venter & Richard Dawkins (2008)
Ants Have Algorithms: A Talk with Ian Couzin (2008)
Life: What A Concept, The Edge Seminar, Freeman Dyson, J. Craig Venter, George Church, Dimitar Sasselov, Seth Lloyd, Robert Shapiro (2007)
Code II: J. Doyne Farmer v. Charles Simonyi (1998)
Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Mar
3rd
Sat
permalink

Beauty, Charm, and Strangeness: Art and Science as Metaphor


Science and art are different ways of looking at the same thing, namely, the world. (…)

The fact is, science is not making this new landscape, but discovering it. Einstein remarked more than once how strange it is that reality, as we know it, keeps proving itself amenable to the rules of man-made science. It certainly is strange; indeed, so strange, that perhaps it should make us a little suspicious. More than one philosopher has conjectured that our thought extends only as far as our capacity to express it. So too it is possible that what we consider reality is only that stratum of the world that we have the faculties to comprehend. For instance, I am convinced that quantum theory flouts commonsense logic only because commonsense logic has not yet been sufficiently expanded. (…)

I am not arguing that art is greater than science, more universal in its concerns, and wiser in its sad recognition of the limits of human knowledge. What I am proposing is that despite the profound differences between them, at an essential level art and science are so nearly alike as to be indistinguishable. (…)

The critic Frank Kermode has argued, persuasively, I believe, that one of art’s greatest attractions is that it offers “the sense of an ending.” The sense of completeness that is projected by the work of art is to be found nowhere else in our lives. We cannot remember our birth, and we shall not know our death; in between is the ramshackle circus of our days and doings. But in a poem, a picture, or a sonata, the curve is completed. This is the triumph of form. It is a deception, but one that we desire, and require.

The trick that art performs is to transform the ordinary into the extraordinary and back again in the twinkling of a metaphor. Here is [the poet] Wallace Stevens, in lines from his poem Notes Toward a Supreme Fiction (1942):

"You must become an ignorant man again

And see the sun again with an ignorant eye

And see it clearly in the idea of it.”

— Wallace Stevens, Collected Poetry and Prose (Library of America, 1997), p329. (…)

This is the project that all artists are embarked upon: to subject mundane reality to such intense, passionate, and unblinking scrutiny that it becomes transformed into something rich and strange while yet remaining solidly, stolidly, itself. Is the project of pure science any different?

When Johannes Kepler recognized that the planets move in elliptical orbits and not in perfect circles, as received wisdom had for millennia held they must do, he added infinitely to the richness of man’s life and thought. When Copernicus posited the horrifying notion that not the Earth but the sun is the center of our world, he literally put man in his place, and he did it for the sake of neither good nor ill, but for the sake of demonstrating how things are. (…)

In the 1970s, when quantum theory began employing such terms as “beauty,” “charm,” and “strangeness” to signify the various properties of quarks, a friend turned to me and said: “You know, they’re waiting for you to give them the words.” I saw what he meant, but he was not quite right: Science does not need art to supply its metaphors. Art and science are alike in their quest to reveal the world. Rainer Maria Rilke spoke for both the artist and the scientist when he said:

"Are we, perhaps, here just for saying: House, Bridge, Fountain, Gate, Jug, Fruit tree, Window,—possibly: Pillar, Tower?…but for saying, remember, oh, for such saying as never the things themselves hoped so intensely to be."

Rilke Poems (Knopf, 1996), p. 201 (stanza 2, lines 15 to 19).

John Banville, Irish novelist, adapter of dramas, and screenwriter, Beauty, Charm, and Strangeness: Science as Metaphor, Science, 3 July 1998. (Illustration: Greg Mort, Stewardship III, 2004)

See also:

Art and Science tag on Lapidarium
Art and Science tag on Lapidarium notes

Jan
22nd
Sun
permalink

What Happened Before the Big Bang? The New Philosophy of Cosmology


Tim Maudlin: “There are problems that are fairly specific to cosmology. Standard cosmology, or what was considered standard cosmology twenty years ago, led people to conclude that the universe that we see around us began in a big bang, or put another way, in some very hot, very dense state. And if you think about the characteristics of that state, in order to explain the evolution of the universe, that state had to be a very low entropy state, and there’s a line of thought that says that anything that is very low entropy is in some sense very improbable or unlikely. And if you carry that line of thought forward, you then say “Well gee, you’re telling me the universe began in some extremely unlikely or improbable state” and you wonder is there any explanation for that. Is there any principle that you can use to account for the big bang state?

This question of accounting for what we call the “big bang state” — the search for a physical explanation of it — is probably the most important question within the philosophy of cosmology, and there are a couple different lines of thought about it. One that’s becoming more and more prevalent in the physics community is the idea that the big bang state itself arose out of some previous condition, and that therefore there might be an explanation of it in terms of the previously existing dynamics by which it came about. There are other ideas, for instance that maybe there might be special sorts of laws, or special sorts of explanatory principles, that would apply uniquely to the initial state of the universe.

One common strategy for thinking about this is to suggest that what we used to call the whole universe is just a small part of everything there is, and that we live in a kind of bubble universe, a small region of something much larger. And the beginning of this region, what we call the big bang, came about by some physical process, from something before it, and that we happen to find ourselves in this region because this is a region that can support life. The idea being that there are lots of these bubble universes, maybe an infinite number of bubble universes, all very different from one another. Part of the explanation of what’s called the anthropic principle says, “Well now, if that’s the case, we as living beings will certainly find ourselves in one of those bubbles that happens to support living beings.” That gives you a kind of account for why the universe we see around us has certain properties. (…)

Newton would call what he was doing natural philosophy; that’s actually the name of his book: Mathematical Principles of Natural Philosophy. Philosophy, traditionally, is what everybody thought they were doing. It’s what Aristotle thought he was doing when he wrote his book called Physics. So it’s not as if there’s this big gap between physical inquiry and philosophical inquiry. They’re both interested in the world on a very general scale, and the group of people who work on the foundations of physics is about equally divided between people who live in philosophy departments, people who live in physics departments, and people who live in mathematics departments.

Q: In May of last year Stephen Hawking gave a talk for Google in which he said that philosophy was dead, and that it was dead because it had failed to keep up with science, and in particular physics. Is he wrong or is he describing a failure of philosophy that your project hopes to address?

Maudlin: Hawking is a brilliant man, but he’s not an expert in what’s going on in philosophy, evidently. Over the past thirty years the philosophy of physics has become seamlessly integrated with the foundations of physics work done by actual physicists, so the situation is actually the exact opposite of what he describes. I think he just doesn’t know what he’s talking about. I mean there’s no reason why he should. Why should he spend a lot of time reading the philosophy of physics? I’m sure it’s very difficult for him to do. But I think he’s just … uninformed. (…)

Q: Do you think that physics has neglected some of these foundational questions as it has become, increasingly, a kind of engine for the applied sciences, focusing on the manipulation, rather than say, the explanation, of the physical world? 

Maudlin: Look, physics has definitely avoided what were traditionally considered to be foundational physical questions, but the reason for that goes back to the foundation of quantum mechanics. The problem is that quantum mechanics was developed as a mathematical tool. Physicists understood how to use it as a tool for making predictions, but without an agreement or understanding about what it was telling us about the physical world. And that’s very clear when you look at any of the foundational discussions. This is what Einstein was upset about; this is what Schrödinger was upset about.

Quantum mechanics was merely a calculational technique that was not well understood as a physical theory. Bohr and Heisenberg tried to argue that asking for a clear physical theory was something you shouldn’t do anymore. That it was something outmoded. And they were wrong, Bohr and Heisenberg were wrong about that. But the effect of it was to shut down perfectly legitimate physics questions within the physics community for about half a century. And now we’re coming out of that, fortunately.

Q: And what’s driving the renaissance?

Maudlin: Well, the questions never went away. There were always people who were willing to ask them. Probably the greatest physicist in the last half of the twentieth century, who pressed very hard on these questions, was John Stewart Bell. So you can’t suppress it forever, it will always bubble up. It came back because people became less and less willing to simply say, “Well, Bohr told us not to ask those questions,” which is sort of a ridiculous thing to say.

Q: Are the topics that have scientists completely flustered especially fertile ground for philosophers? For example I’ve been doing a ton of research for a piece about the James Webb Space Telescope, the successor to the Hubble Space Telescope, and none of the astronomers I’ve talked to seem to have a clue as to how to use it to solve the mystery of dark energy. Is there, or will there be, a philosophy of dark energy in the same way that a body of philosophy seems to have flowered around the mysteries of quantum mechanics?

Maudlin: There will be. There can be a philosophy of anything really, but it’s perhaps not as fancy as you’re making it out. The basic philosophical question, going back to Plato, is “What is x?” What is virtue? What is justice? What is matter? What is time? You can ask that about dark energy - what is it? And it’s a perfectly good question.

There are different ways of thinking about the phenomena which we attribute to dark energy. Some ways of thinking about it say that what you’re really doing is adjusting the laws of nature themselves. Some other ways of thinking about it suggest that you’ve discovered a component or constituent of nature that we need to understand better, and seek the source of. So, the question — What is this thing fundamentally? — is a philosophical question, and is a fundamental physical question, and will lead to interesting avenues of inquiry.

Q: One example of philosophy of cosmology that seems to have trickled out to the layman is the idea of fine tuning - the notion that in the set of all possible physics, the subset that permits the evolution of life is very small, and that from this it is possible to conclude that the universe is either one of a large number of universes, a multiverse, or that perhaps some agent has fine tuned the universe with the expectation that it generate life. Do you expect that idea to have staying power, and if not what are some of the compelling arguments against it?

Maudlin: A lot of attention has been given to the fine tuning argument. Let me just say first of all, that the fine tuning argument as you state it, which is a perfectly correct statement of it, depends upon making judgments about the likelihood, or probability of something. Like, “how likely is it that the mass of the electron would be related to the mass of the proton in a certain way?” Now, one can first be a little puzzled by what you mean by “how likely” or “probable” something like that is. You can ask how likely it is that I’ll roll double sixes when I throw dice, but we understand the way you get a handle on the use of probabilities in that instance. It’s not as clear how you even make judgments like that about the likelihood of the various constants of nature (and so on) that are usually referred to in the fine tuning argument.

Now let me say one more thing about fine tuning. I talk to physicists a lot, and none of the physicists I talk to want to rely on the fine tuning argument to argue for a cosmology that has lots of bubble universes, or lots of worlds. What they want to argue is that this arises naturally from an analysis of the fundamental physics, that the fundamental physics, quite apart from any cosmological considerations, will give you a mechanism by which these worlds will be produced, and a mechanism by which different worlds will have different constants, or different laws, and so on.  If that’s true, then if there are enough of these worlds, it will be likely that some of them have the right combination of constants to permit life. But their arguments tend not to be “we have to believe in these many worlds to solve the fine tuning problem,” they tend to be “these many worlds are generated by physics we have other reasons for believing in.”

If we give up on that, and it turns out there aren’t these many worlds, that physics is unable to generate them, then it’s not that the only option is that there was some intelligent designer. It would be a terrible mistake to think that those are the only two ways things could go. You would have to again think hard about what you mean by probability, and about what sorts of explanations there might be. Part of the problem is that right now there are just way too many freely adjustable parameters in physics. Everybody agrees about that. There seem to be many things we call constants of nature that you could imagine setting at different values, and most physicists think there shouldn’t be that many, that many of them are related to one another.

Physicists think that at the end of the day there should be one complete equation to describe all physics, because any two physical systems interact and physics has to tell them what to do. And physicists generally like to have only a few constants, or parameters of nature. This is what Einstein meant when he famously said he wanted to understand what kind of choices God had—using his metaphor—how free his choices were in creating the universe, which is just asking how many freely adjustable parameters there are. Physicists tend to prefer theories that reduce that number, and as you reduce it, the problem of fine tuning tends to go away. But, again, this is just stuff we don’t understand well enough yet.

Q: I know that the nature of time is considered to be an especially tricky problem for physics, one that physicists seem prepared, or even eager, to hand over to philosophers. Why is that?

Maudlin: That’s a very interesting question, and we could have a long conversation about that. I’m not sure it’s accurate to say that physicists want to hand time over to philosophers. Some physicists are very adamant about wanting to say things about it; Sean Carroll for example is very adamant about saying that time is real. You have others saying that time is just an illusion, that there isn’t really a direction of time, and so forth. I myself think that all of the reasons that lead people to say things like that have very little merit, and that people have just been misled, largely by mistaking the mathematics they use to describe reality for reality itself. If you think that mathematical objects are not in time, and mathematical objects don’t change — which is perfectly true — and then you’re always using mathematical objects to describe the world, you could easily fall into the idea that the world itself doesn’t change, because your representations of it don’t.

There are other, technical reasons that people have thought that you don’t need a direction of time, or that physics doesn’t postulate a direction of time. My own view is that none of those arguments are very good. To the question as to why a physicist would want to hand time over to philosophers, the answer would be that physicists for almost a hundred years have been dissuaded from trying to think about fundamental questions. I think most physicists would quite rightly say “I don’t have the tools to answer a question like ‘what is time?’ - I have the tools to solve a differential equation.” The asking of fundamental physical questions is just not part of the training of a physicist anymore.

Q: I recently came across a paper about Fermi’s Paradox and Self-Replicating Probes, and while it had kind of a science fiction tone to it, it occurred to me as I was reading it that philosophers might be uniquely suited to speculating about, or at least evaluating the probabilistic arguments for the existence of life elsewhere in the universe. Do you expect philosophers of cosmology to enter into those debates, or will the discipline confine itself to issues that emerge directly from physics?

Maudlin: This is really a physical question. If you think of life, of intelligent life, it is, among other things, a physical phenomenon — it occurs when the physical conditions are right. And so the question of how likely it is that life will emerge, and how frequently it will emerge, does connect up to physics, and does connect up to cosmology, because when you’re asking how likely it is that somewhere there’s life, you’re talking about the broad scope of the physical universe. And philosophers do tend to be pretty well schooled in certain kinds of probabilistic analysis, and so it may come up. I wouldn’t rule it in or rule it out.

I will make one comment about these kinds of arguments which seems to me to somehow have eluded everyone. When people make these probabilistic equations, like the Drake Equation, which you’re familiar with — they introduce variables for the frequency of earth-like planets, for the evolution of life on those planets, and so on. The question remains as to how often, after life evolves, you’ll have intelligent life capable of making technology.
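For reference, the Drake Equation simply multiplies such variables together. Every value below is an invented placeholder, which is rather the point: the equation organizes guesswork rather than settling it.

    # Drake equation with invented placeholder values.
    R_star = 1.5      # star formation rate per year
    f_p    = 0.9      # fraction of stars with planets
    n_e    = 0.5      # habitable planets per such system
    f_l    = 0.1      # fraction of those where life arises
    f_i    = 0.01     # fraction evolving intelligence (the contested term)
    f_c    = 0.1      # fraction producing detectable technology
    L_civ  = 10_000   # years a civilization remains detectable
    print(R_star * f_p * n_e * f_l * f_i * f_c * L_civ)   # ~0.68 civilizations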

What people haven’t seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It’s not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that’s not true.

Obviously it doesn’t matter that much if you’re a beetle, that you be really smart. If it were, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there’s a high probability that evolution on another planet would lead to technological intelligence. There is just too much we don’t know.”

Tim Maudlin, (B.A. Yale, Physics and Philosophy; Ph.D. Pittsburgh, History and Philosophy of Science), interviewed by Ross Andersen, What Happened Before the Big Bang? The New Philosophy of Cosmology, The Atlantic, Jan 2012.

Illustrations: 1 - Cambridge Digital Gallery Newton Collection, 2 - Aristotle, Ptolemy, and Copernicus discussing astronomy, published in 1632, Library of Congress.

See also:

The Concept of Laws. The special status of the laws of mathematics and physics
Raphael Bousso: Thinking About the Universe on the Larger Scales
Stephen Hawking on the universe’s origin
Universe tag on Lapidarium notes
Universe tag on Lapidarium

Jan
17th
Tue
permalink

The Rise of Complexity. Scientists replicate key evolutionary step in life on earth

Green cells are undergoing cell death, a cellular division-of-labor—fostering new life.

More than 500 million years ago, single-celled organisms on Earth’s surface began forming multi-cellular clusters that ultimately became plants and animals. (…)

The yeast “evolved” into multi-cellular clusters that work together cooperatively, reproduce and adapt to their environment—in essence, they became precursors to life on Earth as it is today. (…)

The finding that the division-of-labor evolves so quickly and repeatedly in these ‘snowflake’ clusters is a big surprise. (…) The first step toward multi-cellular complexity seems to be less of an evolutionary hurdle than theory would suggest.” (…)

"To understand why the world is full of , including humans, we need to know how one-celled organisms made the switch to living as a group, as multi-celled organisms.” (…)

"This study is the first to experimentally observe that transition," says Scheiner, "providing a look at an event that took place hundreds of millions of years ago." (…)

The scientists chose Brewer’s yeast, or Saccharomyces cerevisiae, a species of yeast used since ancient times to make bread and beer, because it is abundant in nature and grows easily.

They added it to nutrient-rich culture media and allowed the cells to grow for a day in test tubes.

Then they used a centrifuge to stratify the contents by weight.

As the mixture settled, cell clusters landed on the bottom of the tubes faster because they are heavier. The biologists removed the clusters, transferred them to fresh media, and agitated them again.
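That selection loop is simple enough to caricature in code. The toy model below is an illustration with invented parameters, not the study's method: each lineage carries a heritable tendency to form heavy clusters, and only the fastest settlers seed the next tube.

    import random

    population = [0.0] * 1000                      # heritable "clumpiness" trait
    for cycle in range(60):                        # sixty transfer cycles
        settled = sorted(population,
                         key=lambda t: t + random.gauss(0, 1),  # weight plus noise
                         reverse=True)
        survivors = settled[:100]                  # the bottom of the tube wins
        population = [random.choice(survivors) + random.gauss(0, 0.05)
                      for _ in range(1000)]        # reproduce with variation
    print(sum(population) / len(population))       # the mean trait has drifted upward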

First steps in the transition to multi-cellularity: ‘snowflake’ yeast with dead cells stained red.

Sixty cycles later, the clusters—now hundreds of cells—looked like spherical snowflakes.

Analysis showed that the clusters were not just groups of random cells that adhered to each other, but related cells that remained attached following cell division.

That was significant because it meant that they were genetically similar, which promotes cooperation. When the clusters reached a critical size, some cells died off in a process known as apoptosis to allow offspring to separate.

The offspring reproduced only after they attained the size of their parents. (…)

Multi-cellular yeast individuals containing central dead cells, which promote reproduction.

"A cluster alone isn’t multi-cellular," William Ratcliff says. "But when cells in a cluster cooperate, make sacrifices for the common good, and adapt to change, that’s an evolutionary transition to multi-cellularity."

In order for multi-cellular organisms to form, most cells need to sacrifice their ability to reproduce, an altruistic action that favors the whole but not the individual. (…)

For example, all cells in the human body are essentially a support system that allows sperm and eggs to pass DNA along to the next generation.

Thus multi-cellularity is by its nature very cooperative.

"Some of the best competitors in nature are those that engage in cooperation, and our experiment bears that out. (…)

Evolutionary biologists have estimated that multi-cellularity evolved independently in about 25 groups.”

Scientists replicate key evolutionary step in life on earth, Physorg, Jan 16, 2012.

Evolution: The Rise of Complexity

"Let’s rewind time back about 3.5 billion years. Our beloved planet looks nothing like the lush home we know today – it is a turbulent place, still undergoing the process of formation. Land is a fluid concept, consisting of molten lava flows being created and destroyed by massive volcanoes. The air is thick with toxic gasses like methane and ammonia which spew from the eruptions. Over time, water vapor collects, creating our first weather events, though on this early Earth there is no such thing as a light drizzle. Boiling hot acid rain pours down on the barren land for millions of years, slowly forming bubbling oceans and seas. Yet in this unwelcoming, violent landscape, life begins.

The creatures which dared to arise are called cyanobacteria, or blue-green algae. They were the pioneers of photosynthesis, transforming the toxic atmosphere by producing oxygen and eventually paving the way for the plants and animals of today. But what is even more incredible is that they were the first to do something extraordinary – they were the first cells to join forces and create multicellular life. (…)

William Ratcliff and his colleagues at the University of Minnesota, in a PNAS paper published online this week, show how multicellular yeast can arise in less than two months in the lab. (…)

All of their cultures went from single cells to snowflake-like clumps in less than 60 days. “Although known transitions to complex multicellularity, with clearly differentiated cell types, occurred over millions of years, we have shown that the first crucial steps in the transition from unicellularity to multicellularity can evolve remarkably quickly under appropriate selective conditions,” write the authors. These clumps weren’t just independent cells sticking together for the sake of it – they acted as rudimentary multicellular creatures. They were formed not by random cells attaching but by genetically identical cells not fully separating after division. Furthermore, there was division of labor between cells. As the groups reached a certain size, some cells underwent programmed cell death, providing places for daughter clumps to break from. Since individual cells acting as autonomous organisms would value their own survival, this intentional culling suggests that the cells acted instead in the interest of the group as a whole organism.

Given how easily multicellular creatures can arise in test tubes, it might then come as no surprise that multicellularity has arisen at least a dozen times in the history of life, independently in bacteria, plants and of course, animals, beginning the evolutionary tree that we sit atop today. Our evolutionary history is littered with leaps of complexity. While such intricacies might seem impossible, study after study has shown that even the most complex structures can arise through the meandering path of evolution. In Evolution’s Witness, Ivan Schwab explains how one of the most complex organs in our body, our eyes, evolved. (…)

Eyes are highly intricate machines that require a number of parts working together to function. But not even the labyrinthine structures in the eye present an insurmountable barrier to evolution.

Our ability to see began to evolve long before animals radiated. Visual pigments, like retinal, are found in all animal lineages, and were first harnessed by prokaryotes to respond to changes in light more than 2.5 billion years ago. But the first complex eyes can be found about 540 million years ago, during a time of rapid diversification colloquially referred to as the Cambrian Explosion. It all began when comb jellies, sponges and jellyfish, along with clonal bacteria, were the first to group photoreceptive cells and create light-sensitive ‘eyespots’. These primitive visual centers could detect light intensity, but lacked the ability to define objects. That’s not to say, though, that eyespots aren’t important – eyespots are such an asset that they arose independently in at least 40 different lineages. But it was the other invertebrate lineages that would take the simple eyespot and turn it into something incredible.

According to Schwab, the transition from eyespot to eye is quite small. “Once an eyespot is established, the ability to recognize spatial characteristics – our eye definition – takes one of two mechanisms: invagination (a pit) or evagination (a bulge).” Those pits or bulges can then be focused with any clear material forming a lens (different lineages use a wide variety of molecules for their lenses). Add more pigments or more cells, and the vision becomes sharper. Each alteration is just a slight change from the one before, a minor improvement well within bounds of evolution’s toolkit, but over time these small adjustments led to intricate complexity.

In the Cambrian, eyes were all the rage. Arthropods were visual trendsetters, creating compound eyes by using the latter approach, that of bulging, then combining many little bulges together. One of the era’s top predators, Anomalocaris, had over 16,000 lenses! So many creatures arose with eyes during the Cambrian that Andrew Parker, a visiting member of the Zoology Department at the University of Oxford, believes that the development of vision was the driver behind the evolutionary explosion. His ‘Light-Switch’ hypothesis postulates that vision opened the doors for animal innovation, allowing rapid diversification in modes and mechanisms for a wide set of ecological traits. Even if eyes didn’t spur the Cambrian explosion, their development certainly irrevocably altered the course of evolution.

Fossilized compound eyes from Cambrian arthropods (Lee et al. 2011)

Our eyes, as well as those of octopuses and fish, took a different approach than those of the arthropods, putting photoreceptors into a pit, thus creating what is referred to as a camera-style eye. In the fossil record, eyes seem to emerge from eyeless predecessors rapidly, in less than 5 million years. But is it really possible that an eye like ours arose so suddenly? Yes, say biologists Dan-E. Nilsson and Susanne Pelger. They calculated a pessimistic guess as to how long it would take for small changes – just 1% improvements in length, depth, etc. per generation – to turn a flat eyespot into an eye like our own. Their conclusion? It would only take about 400,000 years – a geological instant.
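The compounding behind that estimate is easy to reproduce. The overall factor below roughly follows Nilsson and Pelger's published numbers; the pace of one step per 200 generations is an illustrative assumption.

    import math

    # How many 1% steps does an ~80-million-fold total change require?
    total_change = 8.0e7                           # their rough overall factor
    steps = math.log(total_change) / math.log(1.01)
    print(f"{steps:.0f} one-percent steps")        # about 1,800

    # Even at ~200 generations per step, the total stays in the
    # hundreds of thousands of generations: a geological instant.
    print(f"{steps * 200:.0f} generations")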

How does complexity arise in the first place?

But how does complexity arise in the first place? How did cells get photoreceptors, or any of the first steps towards innovations such as vision? Well, complexity can arise a number of ways.

Each and every one of our cells is a testament to the simplest way that complexity can arise: have one simple thing combine with a different one. The powerhouses of our cells, called mitochondria, are complex organelles that are thought to have arisen in a very simple way. Some time around 3 billion years ago, certain bacteria had figured out how to create energy using electrons from oxygen, thus becoming aerobic. Our ancient ancestors thought this was quite a neat trick, and, as single cells tend to do, they ate these much smaller energy-producing bacteria. But instead of digesting their meal, our ancestors allowed the bacteria to live inside them as an endosymbiont, and so the deal was struck: our ancestor provides the fuel for the chemical reactions that the bacteria perform, and the bacteria, in turn, produces ATP for both of them. Even today we can see evidence of this early agreement – mitochondria, unlike other organelles, have their own DNA, reproduce independently of the cell’s reproduction, and are enclosed in a double membrane (the bacterium’s original membrane and the membrane capsule used by our ancestor to engulf it).

Over time the mitochondria lost other parts of their biology they didn’t need, like the ability to move around, blending into their new home as if they never lived on their own. The end result of all of this, of course, was a much more complex cell, with specialized intracellular compartments devoted to different functions: what we now refer to as a eukaryote.

Complexity can arise within a cell, too, because our molecular machinery makes mistakes. On occasion, it duplicates sections of DNA, entire genes, and even whole chromosomes, and these small changes to our genetic material can have dramatic effects. We saw how mutations can lead to a wide variety of phenotypic traits when we looked at how artificial selection has shaped dogs. These molecular accidents can even lead to complete innovation, like the various adaptations of flowering plants that I talked about in my last Evolution post. And as these innovations accumulate, species diverge, losing the ability to reproduce with each other and filling new roles in the ecosystem. While the creatures we know now might seem unfathomably intricate, they are the product of billions of years of slight variations accumulating.

Of course, while I focused this post on how complexity arose, it’s important to note that more complex doesn’t necessarily mean better. While we might notice the eye and marvel at its detail, success, from the viewpoint of an evolutionary lineage, isn’t about being the most elaborate. Evolution only leads to increases in complexity when complexity is beneficial to survival and reproduction.

Indeed, simplicity has its perks: the more simple you are, the faster you can reproduce, and thus the more offspring you can have. Many bacteria live happy simple lives, produce billions of offspring, and continue to thrive, representatives of lineages that have survived billions of years. Even complex organisms may favor less complexity – parasites, for example, are known for their loss of unnecessary traits and even whole organ systems, keeping only what they need to get inside and survive in their host. Darwin referred to them as regressive for seemingly violating the unspoken rule that more complex arises from less complex, not the other way around. But by not making body parts they don’t need, parasites conserve energy, which they can invest in other efforts like reproduction.

When we look back in an attempt to grasp evolution, it may instead be the lack of complexity, not the rise of it, that is most intriguing.”

See also:

Scientists recreate evolution of complexity using ‘molecular time travel’
Nature Has A Tendency To Reduce Complexity
Emergence and Complexity - prof. Robert Sapolsky’s lecture, Stanford University (video)

Jan
13th
Fri
permalink

Can A Scientist Define “Life”?

"Defining life poses a challenge that’s downright philosophical. (…) When Portland State University biologist Radu Popa was working on a book about defining life, he decided to count up all the definitions that scientists have published in books and scientific journals. Some scientists define life as something capable of metabolism. Others make the capacity to evolve the key distinction. Popa gave up counting after about 300 definitions.

Things haven’t gotten much better in the years since Popa published Between Necessity and Probability: Searching for the Definition and Origin of Life in 2004. Scientists have unveiled even more definitions, yet none of them have been widely embraced. But now Edward Trifonov, a biologist at the University of Haifa in Israel (…) analyzed the linguistic structure of 150 definitions of life, grouping similar words into categories. He found that he could sum up what they all have in common in three words. Life, Trifonov declares, is simply self-reproduction with variations.

Trifonov argues that this minimal definition is useful because it encompasses both life as we know it and life as we may discover it to be. And as scientists tinker with self-replicating molecules, they may be able to put his definition to the test. It may be possible for them to create a system of molecules that meets the requirements. If it fails to come “alive,” it will show that the definition was missing something crucial about life. (…)

A number of the scientists who responded to Trifonov felt that his definition was missing one key feature or another, such as metabolism, a cell, or information. Eugene Koonin, a biologist at the National Center for Biotechnology Information, thinks that Trifonov’s definition is missing error correction. He argues that “self-reproduction with variation” is redundant, since the laws of thermodynamics ensure that error-free replication is impossible. “The problem is the exact opposite,” Koonin observes: if life replicates with too many errors, it stops replicating. He offers up an alternative: life requires “replications with an error rate below the sustainability threshold.”
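Koonin's threshold can be sketched with one line of arithmetic, in the spirit of Eigen's error-catastrophe argument; the genome length, fecundity, and error rates below are invented for illustration.

    # A length-L genome copied with per-site error rate u yields an
    # error-free fraction of (1 - u)**L; multiply by offspring per parent.
    L, offspring = 1000, 2.0
    for u in (0.0001, 0.0005, 0.001, 0.002):
        faithful = offspring * (1 - u) ** L
        print(f"u={u}: {faithful:.2f} faithful copies per parent ->",
              "sustained" if faithful > 1 else "lost")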

Jack Szostak, a Nobel-prize winning Harvard biologist, simply rejects the search for any definition of life. “Attempts to define life are irrelevant to scientific efforts to understand the origin of life,” he writes (article PDF).

Szostak himself has spent two decades tinkering with biological molecules to create simple artificial life. Instead of using DNA to store genetic information and proteins to carry out chemical reactions, Szostak hopes to create cells that only contain single-stranded RNA molecules. Like many researchers, Szostak suspects that RNA-based life preceded DNA-based life. It may have even been the first kind of life on Earth, even if it cannot be found on the planet today.

Life, Szostak suspects, arose through a long series of steps, as small molecules began interacting with each other, replicating, getting enveloped into cells, and so on. Once there were full-blown cells that could grow, divide, and evolve, no one would deny that life had come to exist on Earth. But it’s pointless to try to find the precise point along the path where life suddenly sprang into being and met an arbitrary definition. “None of this matters, however, in terms of the fundamental scientific questions concerning the transitions leading from chemistry to biology,” says Szostak.

It’s conceivable that Mars has Earth-like life, either because one planet infected the other, or because chemistry became biology along the same path on both of them. In either case, Curiosity [rover] may be able to do some good science when it arrives at Mars this summer. But if it’s something fundamentally different, even the most sophisticated machines may not be able to help us until we come to a decision about what we’re looking for in the first place.”

Carl Zimmer, popular science writer and blogger, Can A Scientist Define “Life”?, Txchnologist, Jan 10, 2012. (Illustration: Russell Kightley)

Jan
8th
Sun
permalink

Scientists recreate evolution of complexity using ‘molecular time travel’

Much of what living cells do is carried out by “molecular machines” – physical complexes of specialized proteins working together to carry out some biological function. (…)

In a study published early online on January 8, in Nature, a team of scientists from the University of Chicago and the University of Oregon demonstrate how just a few small, high-probability mutations increased the complexity of a molecular machine more than 800 million years ago. By biochemically resurrecting ancient genes and testing their functions in modern organisms, the researchers showed that a new component was incorporated into the machine due to selective losses of function rather than the sudden appearance of new capabilities.

"Our strategy was to use ‘molecular time travel’ to reconstruct and experimentally characterize all the proteins in this molecular machine just before and after it increased in complexity," said the study’s senior author Joe Thornton, PhD, professor of human genetics and & ecology at the University of Chicago, professor of biology at the University of Oregon, and an Early Career Scientist of the Howard Hughes Medical Institute.

"By reconstructing the machine’s components as they existed in the deep past," Thornton said, "we were able to establish exactly how each protein’s function changed over time and identify the specific genetic mutations that caused the machine to become more elaborate." (…)

To understand how the ring increased in complexity, Thornton and his colleagues “resurrected” the ancestral versions of the ring proteins just before and just after the third subunit was incorporated. To do this, the researchers used a large cluster of computers to analyze the gene sequences of 139 modern-day ring proteins, tracing evolution backwards through time along the Tree of Life to identify the most likely ancestral sequences. They then used biochemical methods to synthesize those ancient genes and express them in modern yeast cells. (…)
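
As a cartoon of what “tracing evolution backwards” means, here is a minimal ancestral-state reconstruction using Fitch parsimony. The study itself used maximum-likelihood methods over 139 aligned ring-protein genes; the four-leaf tree, the leaf names, and the four-letter “sequences” below are all invented for illustration.

    # Fitch parsimony on a toy four-leaf tree: infer an ancestral sequence
    # site by site. Leaf names and sequences are invented.

    def fitch_sets(node, site, seqs):
        """Post-order pass: most-parsimonious state set for one alignment column."""
        if isinstance(node, str):              # leaf: its observed character
            return {seqs[node][site]}
        left, right = node                     # internal node = (left, right)
        a = fitch_sets(left, site, seqs)
        b = fitch_sets(right, site, seqs)
        return (a & b) or (a | b)              # intersect if possible, else union

    seqs = {                                   # toy aligned protein fragments
        "fungal_ring_1": "MKLV",
        "fungal_ring_2": "MKIV",
        "outgroup_1":    "MRLV",
        "outgroup_2":    "MRLI",
    }
    tree = (("fungal_ring_1", "fungal_ring_2"), ("outgroup_1", "outgroup_2"))

    # One most-parsimonious ancestral sequence (ties broken alphabetically).
    ancestor = "".join(min(fitch_sets(tree, i, seqs)) for i in range(4))
    print(ancestor)                            # -> "MKLV"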

Thornton’s research group has helped to pioneer this molecular time-travel approach for single genes; this is the first time it has been applied to all the components in a molecular machine.

The group found that the third component of the ring in Fungi originated when a gene coding for one of the subunits of the older two-protein ring was duplicated, and the daughter genes then diverged on their own evolutionary paths.

The pre-duplication ancestor turned out to be more versatile than either of its descendants: expressing the ancestral gene rescued modern yeast that otherwise failed to grow because either or both of the descendant ring protein genes had been deleted. In contrast, each resurrected gene from after the duplication could only compensate for the loss of a single ring protein gene.

The researchers concluded that the functions of the ancestral protein were partitioned among the duplicate copies, and the increase in complexity was due to complementary loss of ancestral functions rather than gaining new ones. By cleverly engineering a set of ancestral proteins fused to each other in specific orientations, the group showed that the duplicated proteins lost their capacity to interact with some of the other ring proteins. Whereas the pre-duplication ancestor could occupy five of the six possible positions within the ring, each duplicate gene lost the capacity to fill some of the slots occupied by the other, so both became obligate components for the complex to assemble and function.
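
The logic of “obligate through loss” is simple enough to sketch in a few lines. The only detail below taken from the study is that the ancestral protein could fill five of the six ring positions; which slots each daughter keeps is invented:

    # Toy model of complexity through loss (duplication, degeneration,
    # complementation). Slot assignments for the daughters are invented.

    ancestor = {1, 2, 3, 4, 5}            # generalist: fills 5 of 6 slots

    # After gene duplication, each daughter loses a complementary subset.
    daughter_a = ancestor - {4, 5}        # can now fill only slots 1-3
    daughter_b = ancestor - {1, 2}        # can now fill only slots 3-5

    def ring_assembles(proteins):
        """The ring functions only if slots 1-5 can all be occupied."""
        return {1, 2, 3, 4, 5} <= set().union(*proteins)

    print(ring_assembles([ancestor]))                 # True:  generalist suffices
    print(ring_assembles([daughter_a]))               # False: each copy alone fails
    print(ring_assembles([daughter_b]))               # False
    print(ring_assembles([daughter_a, daughter_b]))   # True:  both now obligate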

"It’s counterintuitive but simple: complexity increased because protein functions were lost, not gained," Thornton said. "Just as in society, complexity increases when individuals and institutions forget how to be generalists and come to depend on specialists with increasingly narrow capacities." (…)

"The mechanisms for this increase in complexity are incredibly simple, common occurrences," Thornton said. "Gene duplications happen frequently in cells, and it’s easy for errors in copying to DNA to knock out a protein’s ability to interact with certain partners. It’s not as if evolution needed to happen upon some special combination of 100 mutations that created some complicated new function."

Thornton proposes that the accumulation of simple, degenerative changes over long periods of time could have created many of the complex molecular machines present in organisms today. Such a mechanism argues against the intelligent design concept of “irreducible complexity,” the claim that molecular machines are too complicated to have formed stepwise through evolution.

"I expect that when more studies like this are done, a similar dynamic will be observed for the evolution of many molecular complexes," Thornton said.

"These really aren’t like precision-engineered machines at all," he added. "They’re groups of molecules that happen to stick to each other, cobbled together during evolution by tinkering, degradation, and good luck, and preserved because they helped our ancestors to survive."

Scientists recreate evolution of complexity using ‘molecular time travel’, Physorg, Jan 8, 2012. (Illustration: Oak Ridge National Laboratory)

See also:

Nature Has A Tendency To Reduce Complexity
The Rise of Complexity. Scientists replicate key evolutionary step in life on earth
The genes are so different, the scientists argue, that giant viruses represent a fourth domain of life
Uncertainty principle: How evolution hedges its bets
Culture-gene coevolution of individualism-collectivism
Genetics tag at Lapidarium notes

Dec
27th
Tue
permalink

'To understand is to perceive patterns'

"Everything we care about lies somewhere in the middle, where pattern and randomness interlace."

James Gleick, The Information: A History, a Theory, a Flood, Pantheon, 2011

"Humans are pattern-seeking story-telling animals, and we are quite adept at telling stories about patterns, whether they exist or not."

Michael Shermer

"The pattern, and it alone, brings into being and causes to pass away and confers purpose, that is to say, value and meaning, on all there is. To understand is to perceive patterns. (…) To make intelligible is to reveal the basic pattern.”

Isaiah Berlin, British social and political theorist, philosopher and historian, (1909-1997), The proper study of mankind: an anthology of essays, Chatto & Windus, 1997, p. 129.

"One of the most wonderful things about the emerging global superbrain is that information is overflowing on a scale beyond what we can wrap our heads around. The electronic, collective, hive mind that we know as the Internet produces so much information that organizing this data — and extracting meaning from it — has become the conversation of our time.

Sanford Kwinter’s Far From Equilibrium tackles everything from technology to society to architecture under the thesis that creativity, catharsis, transformation and progressive breakthroughs occur far from equilibrium. So even while we may feel overwhelmed and intimidated by the informational overload and radical transformations of our times, we should, perhaps, take refuge in knowing that only good can come from this. He writes:

“(…) We accurately think of ourselves today not only as citizens of an information society, but literally as clusters of matter within an unbroken informational continuum: "We are all," as the great composer Karlheinz Stockhausen once said, "transistors, in the literal sense. We send, receive and organize [and] so long as we are vital, our principal work is to capture and artfully incorporate the signals that surround us.” (…)

Clay Shirky often refers to the “Cognitive Surplus,” the overflowing output of the billions of minds participating in the electronic infosphere. A lot of this output is silly, but a lot of it is meaningful and wonderful. The key lies in curation, which is pattern recognition put into practice. (…)

Matt Ridley’s TED Talk, “When Ideas Have Sex,” points to this intercourse of information and how it births new thought-patterns. Ideas, freed from the confines of space and time by the invisible, wireless metabrain we call The Internet, collide with one another and explode into new ideas, accelerating the collective intelligence of the species. Creativity thrives when minds come together. The last great industrial-strength creative catalyst was the city: it is no coincidence that when people migrate to cities in large numbers, creativity and innovation thrive.

Now take this very idea and apply it to the web: the web is essentially a planetary-scale nervous system where individual minds take on the role of synapses, firing electrical pattern-signals to one another at light speed — the net effect being an astonishing increase in creative output. (…)

Ray Kurzweil too, expounds on this idea of the power of patterns:

“I describe myself as a patternist, and believe that if you put matter and energy in just the right pattern you create something that transcends it. Technology is a good example of that: you put together lenses and mechanical parts and some computers and some software in just the right combination and you create a reading machine for the blind. It’s something that transcends the semblance of parts you’ve put together. That is the nature of technology, and it’s the nature of the human brain.

Biological molecules put in a certain combination create the transcending properties of human intelligence; you put notes and sounds together in just the right combination, and you create a Beethoven symphony or a Beatles song. So patterns have a power that transcends the parts of that pattern.”

R. Buckminster Fuller refers to us as “pattern integrities.” “Understanding order begins with understanding patterns,” he was known to say. E.J. White, who worked with Fuller, says that:

“For Fuller, the thinking process is not a matter of putting anything into the brain or taking anything out; he defines thinking as the dismissal of irrelevancies, as the definition of relationships” — in other words, thinking is simultaneously a form of filtering out the data that doesn’t fit while highlighting the things that do fit together… We dismiss whatever is an “irrelevancy” and retain only what fits; we form knowledge by ‘connecting the dots’… we understand things by perceiving patterns — we arrive at conclusions when we successfully reveal these patterns. (…)

Fuller’s primary vocation is as a poet. All his disciplines and talents — architect, engineer, philosopher, inventor, artist, cartographer, teacher — are just so many aspects of his chief function as integrator… the word “poet” is a very general term for a person who puts things together in an era of great specialization when most people are differentiating or taking things apart… For Fuller, the stuff of poetry is the patterns of human behavior and the environment, and the interacting hierarchies of physics and design and industry. This is why he can describe Einstein and Henry Ford as the greatest poets of the 20th century.” (…)

In a recent article in Reality Sandwich, Simon G. Powell proposed that patterned self-organization is a default condition of the universe:

“When you think about it, Nature is replete with instances of self-organization. Look at how, over time, various exquisitely ordered patterns crystallise out of the Universe. On a macroscopic scale you have stable and enduring spherical stars, solar systems, and spiral galaxies. On a microscopic scale you have atomic and molecular forms of organization. And on a psychological level, fed by all this ambient order and pattern, you have consciousness, which also seems to organise itself into being (by way of the brain). Thus, patterned organisation of one form or another is what nature is proficient at doing over time.

This being the case, is it possible that the amazing synchronicities and serendipities we experience when we’re doing what we love, or following our passions — the signs we pick up on when we follow our bliss — represent an emerging ‘higher level’ manifestation of self-organization? To make use of an alluring metaphor, are certain events and cultural processes akin to iron filings coming under the organising influence of a powerful magnet? Is serendipity just the playing out on the human level of the same emerging, patterned self-organization that drives evolution?

Barry Ptolemy's film Transcendent Man reminds us that the universe has been unfolding in patterns of greater complexity since the beginning of time. Says Ptolemy:

“First of all we are all patterns of information. Second, the universe has been revealing itself as patterns of information of increasing order since the big bang. From atoms, to molecules, to DNA, to brains, to technology, to us now merging with that technology. So the fact that this is happening isn’t particularly strange to a universe which continues to evolve and unfold at ever accelerating rates.”

Jason Silva, Connecting All The Dots - Jason Silva on Big Think, Imaginary Foundation, Dec 2010

"Networks are everywhere. The brain is a network of nerve cells connected by axons, and cells themselves are networks of molecules connected by biochemical reactions. Societies, too, are networks of people linked by friendships, familial relationships and professional ties. On a larger scale, food webs and ecosystems can be represented as networks of species. And networks pervade technology: the Internet, power grids and transportation systems are but a few examples. Even the language we are using to convey these thoughts to you is a network, made up of words connected by syntactic relationships.”

“For decades, we assumed that the components of such complex systems as the cell, the society, or the Internet are randomly wired together. In the past decade, an avalanche of research has shown that many real networks, independent of their age, function, and scope, converge to similar architectures, a universality that allowed researchers from different disciplines to embrace network theory as a common paradigm.”

Albert-László Barabási, physicist, best known for his research on network theory, and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
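
One concrete mechanism behind the “similar architectures” Barabási describes is preferential attachment: newcomers link to already-popular nodes, and a few heavy hubs emerge regardless of domain. A minimal sketch of that growth rule (node counts and links-per-newcomer are arbitrary):

    # Preferential attachment (the Barabasi-Albert growth rule): each new
    # node links to existing nodes with probability proportional to degree.
    import random

    def preferential_attachment(n_nodes=100, links_per_newcomer=2):
        endpoints = [0, 1]                # every edge endpoint ever seen;
        edges = [(0, 1)]                  # a uniform draw from this list is
        for new in range(2, n_nodes):     # degree-proportional sampling
            targets = set()
            while len(targets) < links_per_newcomer:
                targets.add(random.choice(endpoints))   # rich get richer
            for t in targets:
                edges.append((new, t))
                endpoints.extend([new, t])
        return edges

    degree = {}
    for a, b in preferential_attachment():
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    print(sorted(degree.values(), reverse=True)[:5])    # a few hubs dominate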

“Coral reefs are sometimes called “the cities of the sea”, and part of the argument is that we need to take the metaphor seriously: the reef ecosystem is so innovative because it shares some defining characteristics with actual cities. These patterns of innovation and creativity are fractal: they reappear in recognizable form as you zoom in and out, from molecule to neuron to pixel to sidewalk. Whether you’re looking at the original innovations of carbon-based life, or the explosion of new tools on the web, the same shapes keep turning up. (…) When life gets creative, it has a tendency to gravitate toward certain recurring patterns, whether those patterns are self-organizing, or whether they are deliberately crafted by human agents.”

— Steven Johnson, author of Where Good Ideas Come From, cited by Jason Silva

"Network systems can sustain life at all scales, whether intracellularly or within you and me or in ecosystems or within a city. (…) If you have a million citizens in a city or if you have 1014 cells in your body, they have to be networked together in some optimal way for that system to function, to adapt, to grow, to mitigate, and to be long term resilient."

Geoffrey West, British theoretical physicist, The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.
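
The quantitative claim behind West’s remark, not included in this excerpt but central to the work the Q&A discusses, is a family of power-law scaling relations,

    Y = Y_0 \, M^{\beta}

where M is the system’s size (body mass, city population) and Y some gross property (metabolic rate, miles of road, patents produced). In West’s published work the exponent β is close to 3/4 for metabolic rate in organisms, so bigger bodies are more efficient per cell, and around 1.15 for the socioeconomic outputs of cities, so bigger cities produce disproportionately more per capita.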

“Recognizing this super-connectivity and conductivity is often accompanied by blissful mindbody states and the cognitive ecstasy of multiple “aha’s!” when the patterns in the mycelium are revealed. Googling, which has become a prime noetic technology (how can we recognize a pattern and connect more and more, faster and faster? superconnectivity and superconductivity), mirrors the increased speed of connection of thought-forms from cannabis highs on up. The whole process is driven by desire, not only for these blissful states in and of themselves, but also for the cognitive resource they represent. The devices of desire are those that connect, because, as Johnson says, ‘chance favors the connected mind’.”

Google and the Myceliation of Consciousness, Reality Sandwich, 10-11-2007

Jason Silva, Venezuelan-American television personality, filmmaker, gonzo journalist and founding producer/host for Current TV, To understand is to perceive patterns, Dec 25, 2011 (Illustration: Color Blind Test)

[This note will be gradually expanded]

See also:

The sameness of organisms, cities, and corporations: Q&A with Geoffrey West, TED, 26 July 2011.
☞ Albert-László Barabási and Eric Bonabeau, Scale-Free Networks, Scientific American, April 14, 2003.
Google and the Myceliation of Consciousness, Reality Sandwich, 10.11.2007
The Story of Networks, Lapidarium notes
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
☞ Manuel Lima, visualcomplexity.com, A visual exploration on mapping complex networks
Constructal theory, Wiki
☞ A. Bejan, Constructal theory of pattern formation (pdf), Duke University
Pattern recognition, Wiki
Patterns tag on Lapidarium
Patterns tag on Lapidarium notes