Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real." — Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Homepage
Twitter
Facebook

A Box Of Stories
Reading Space

Contact

Archive

Jul 1st, Mon

Why It’s Good To Be Wrong. David Deutsch on Fallibilism


"That human beings can be mistaken in anything they think or do is a proposition known as fallibilism. (…)

The trouble is that error is a subject where issues such as logical paradox, self-reference, and the inherent limits of reason rear their ugly heads in practical situations, and bite.

Paradoxes seem to appear when one considers the implications of one’s own fallibility: A fallibilist cannot claim to be infallible even about fallibilism itself. And so, one is forced to doubt that fallibilism is universally true. Which is the same as wondering whether one might be somehow infallible—at least about some things. For instance, can it be true that absolutely anything that you think is true, no matter how certain you are, might be false?

What? How might we be mistaken that two plus two is four? Or about other matters of pure logic? That stubbing one’s toe hurts? That there is a force of gravity pulling us to earth? Or that, as the philosopher René Descartes argued, “I think, therefore I am”?

When fallibilism starts to seem paradoxical, the mistakes begin. We are inclined to seek foundations—solid ground in the vast quicksand of human opinion—on which one can try to base everything else. Throughout the ages, the false authority of experience and the false reassurance of probability have been mistaken for such foundations: “No, we’re not always right,” your parents tell you, “just usually.” They have been on earth longer and think they have seen this situation before. But since that is an argument for “therefore you should always do as we say,” it is functionally a claim of infallibility after all. Moreover, look more closely: It claims literal infallibility too. Can anyone be infallibly right about the probability that they are right? (…)

The fact is, there’s nothing infallible about “direct experience” (…). Indeed, experience is never direct. It is a sort of virtual reality, created by our brains using sketchy and flawed sensory clues, given substance only by fallible expectations, explanations, and interpretations. Those can easily be more mistaken than the testimony of the passing hobo. If you doubt this, look at the work of psychologists Christopher Chabris and Daniel Simons, and verify by direct experience the fallibility of your own direct experience. Furthermore, the idea that your reminiscences are infallible is also heresy by the very doctrine that you are faithful to.

I’ll tell you what really happened. You witnessed a dress rehearsal. The real ex cathedra ceremony was on the following day. In order not to make the declaration a day early, they substituted for the real text (which was about some arcane theological issue, not gravity) a lorem-ipsum-type placeholder that they deemed so absurd that any serious listener would immediately realize that that’s what it was. 

And indeed, you did realize this; and as a result, you reinterpreted your “direct experience,” which was identical to that of witnessing an ex cathedra declaration, as not being one. Precisely by reasoning that the content of the declaration was absurd, you concluded that you didn’t have to believe it. Which is also what you would have done if you hadn’t believed the infallibility doctrine.

You remain a believer, serious about giving your faith absolute priority over your own “unaided” reason (as reason is called in these contexts). But that very seriousness has forced you to decide first on the substance of the issue, using reason, and only then whether to defer to the infallible authority. This is neither fluke nor paradox. It is simply that if you take ideas seriously, there is no escape, even in dogma and faith, from the obligation to use reason and to give it priority over dogma, faith, and obedience. (…)

It is hard to contain reason within bounds. If you take your faith sufficiently seriously you may realize that it is not only the printers who are fallible in stating the rules for ex cathedra, but also the committee that wrote down those rules. And then that nothing can infallibly tell you what is infallible, nor what is probable. It is precisely because you are fallible, and have no infallible access to the infallible authority, no infallible way of interpreting what the authority means, and no infallible means of identifying an infallible authority in the first place, that infallibility cannot help you before reason has had its say.

A related useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith, in fact believe falsehoods. Hence, faith is insufficient for true belief. As the Nobel-Prize-winning biologist Peter Medawar said: “the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

You know that Medawar’s advice holds for all ideas, not just scientific ones, and, by the same argument, for all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love. (…)

This logic of fallibility, discovered and rediscovered from time to time, has had profound salutary effects in the history of ideas. Whenever anything demands blind obedience, its ideology contains a claim of infallibility somewhere; but wherever someone believes seriously enough in that infallibility, they rediscover the need for reason to identify and correctly interpret the infallible source. Thus the sages of ancient Judaism were led, by the assumption of the Bible’s infallibility, to develop their tradition of critical discussion. And in an apparently remote application of the same logic, the British constitutional doctrine of “parliamentary sovereignty” was used by 20th-century judges such as Lord Denning to develop an institution of judicial review similar to that which, in the United States, had grown out of the opposite doctrine of “separation of powers.”

Fallibilism has practical consequences for the methodology and administration of science, and in government, law, education, and every aspect of public life. The philosopher Karl Popper elaborated on many of these. He wrote:

The question about the sources of our knowledge … has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place. (…)

Popper’s answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is another apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas, by eliminating errors from existing ones.

“Our whole problem,” said the physicist John Wheeler, “is to make the mistakes as fast as possible.” This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should “let our theories die in our place,” as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike “botched;” courts should not call every medical tragedy malpractice, even if it’s true that they “shouldn’t have happened” in the sense that lessons can be learned to prevent them from happening again. “We are all alike,” as Popper remarked, “in our infinite ignorance.” And this is a good and hopeful thing, for it allows for a future of unbounded improvement.

Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond “oh dear, I’ll just have to die, then.” (…)

The theory of knowledge is a tightrope that is the only path from A to B, with a long, hard drop for anyone who steps off on one side into “knowledge is impossible, progress is an illusion” or on the other side into “I must be right, or at least probably right.” Indeed, infallibilism and nihilism are twins. Both fail to understand that mistakes are not only inevitable, they are correctable (fallibly). Which is why they both abhor institutions of substantive criticism and error correction, and denigrate rational thought as useless or fraudulent. They both justify the same tyrannies. They both justify each other.

I must now apologize for trying to trick you earlier: All the ideas that I suggested we might know infallibly are in fact falsehoods. “Two plus two” of course isn’t “four” as you’d discover if you wrote “2+2” in an arithmetic test when asked to add two and two. If we were infallible about matters of pure logic, no one would ever fail a logic test either. Stubbing your toe does not always hurt if you are focused on some overriding priority like rescuing a comrade in battle. And as for knowing that “I” exist because I think—note that your knowledge that you think is only a memory of what you did think, a second or so ago, and that can easily be a false memory. (For discussions of some fascinating experiments demonstrating this, see Daniel Dennett’s book Brainstorms.) Moreover, if you think you are Napoleon, the person you think must exist because you think, doesn’t exist.

And the general theory of relativity denies that gravity exerts a force on falling objects. The pope would actually be on firm ground if he were to concur with that ex cathedra. Now, are you going to defer to my authority as a physicist about that? Or decide that modern physics is a sham? Or are you going to decide according to whether that claim really has survived all rational attempts to refute it?”

David Deutsch, British physicist and non-stipendiary Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC), Clarendon Laboratory, University of Oxford, Why It’s Good To Be Wrong, Nautilus, 2013. (Illustration by Gérard DuBois)

See also:

David Deutsch: A new way to explain explanation, TED, 2009
David Deutsch on knowledge as crafted self-similarity
David Deutsch on Artificial Intelligence

Apr 4th, Thu

Philosophers and the age of their influential contributions


Mar 27th, Wed

Hilary Putnam - ‘A philosopher in the age of science’


"Imagine two scientists are proposing competing theories about the motion of the moon. One scientist argues that the moon orbits the earth at such and such a speed due to the effects of gravity and other Newtonian forces. The other, agreeing on the exact same observations, argues that behind Newtonian forces there are actually undetectable space-aliens who are using sophisticated tractor beams to move every object in the universe. No amount of observation will resolve this conflict. They agree on every observation and measurement. One just has a more baroque theory than the other. Reasonably, most of us think the simpler theory is better.

But when we ask why this theory is better, we find ourselves resorting to things that are patently non-factual. We may argue that theories which postulate useless entities are worse than simpler ones—citing the value of simplicity. We may argue that the space-alien theory contradicts too many other judgements—citing the value of coherence. We can give a whole slew of reasons why one theory is better than another, but there is no rulebook out there for scientists to point to which resolves the matter objectively. Even appeals to the great pragmatic value of the first theory or arguments that point out the lack of explanatory and predictive power of the space-alien theory, are still appeals to a value. No amount of observation will tell you why being pragmatic makes one theory better—it is something for which you have to argue. No matter what kind of fact we are trying to establish, it is going to be inextricably tied to the values we hold. (…)

In [Hilary Putnam’s] view, there is no reason to suppose that a complete account of reality can be given using a single set of concepts. That is, it is not possible to reduce all types of explanation to one set of objective concepts. Suppose I say, “Keith drove like a maniac” and you ask me why. We would usually explain the event in terms of value-laden concepts like intention, emotion, and so on—“Keith was really stressed out”—and this seems to work perfectly fine. Now we can also take the exact same event and describe it using an entirely different set of scientific concepts— say “there was a chain of electrochemical reactions from this brain to this foot” or “there was x pressure on the accelerator which caused y torque on the wheels.” These might be true descriptions, but they simply don’t give us the whole or even a marginally complete picture of Keith driving like a maniac. We could describe every single relevant physical detail of that event and still have no explanation. Nor, according to Putnam, should we expect there to be. The full scope of reality is simply too complex to be fully described by one method of explanation.

The problem with all of this, and one that Putnam has struggled with, is what sort of picture of reality we are left with once we accept these three central arguments: the collapse of the fact-value dichotomy, the truth of semantic externalism and conceptual relativity. (…)

We could—like Putnam before the 1970s—become robust realists and simply accept that values and norms are no less a part of the world than elementary particles and mathematical objects. We could—like Putnam until the 1990s—become “internal realists” and, in a vaguely Kantian move, define reality in terms of mind-dependent concepts and idealised rational categories. Or we could adopt Putnam’s current position—a more modest realism which argues that there is a mind-independent world out there and that it is compatible with our ordinary human values. Of course Putnam has his reasons for believing what he does now, and they largely derive from his faith in our ability to represent reality correctly. But the strength of his arguments convincing us to be wary of the scientific stance leaves us with little left of trust in it.”

A philosopher in the age of science, Prospect, March 14, 2013. [Hilary Putnam — American philosopher, mathematician and computer scientist who has been a central figure in analytic philosophy since the 1960s, currently Cogan University Professor Emeritus at Harvard University.]

Feb 3rd, Sun

'Elegance,' 'Symmetry,' and 'Unity': Is Scientific Truth Always Beautiful?


"Today the grandest quest of physics is to render compatible the laws of quantum physics—how particles in the subatomic world behave—with the rules that govern stars and planets. That’s because, at present, the formulas that work on one level implode into meaninglessness at the other level. This is deeply ungainly, and significant when the two worlds collide, as occurs in black holes. The quest to unify quantum physics (micro) and general relativity (macro) has spawned heroic efforts, the best-known candidate for a grand unifying concept presently being string theory. String theory proposes that subatomic particles are not particles at all but closed or open vibrating strings, so tiny, a hundred billion billion times shorter than an atomic nucleus’s diameter, that no human instrument can detect them. It’s the “music of the spheres”—think vibrating harp strings—made literal.

A concept related to string theory is “supersymmetry.” Physicists have shown that at extremely high energy levels, similar to those that existed a micro-blink after the big bang, the strength of the electromagnetic force, and strong and weak nuclear forces (which work only on subatomic levels), come tantalizingly close to converging. Physicists have conceived of scenarios in which the three come together precisely, an immensely intellectually and aesthetically pleasing accomplishment. But those scenarios imply the existence of as-yet-undiscovered “partners” for existing particles: The electron would be joined by a “selectron,” quarks by “squarks,” and so on. There was great hope that the $8-billion Large Hadron Collider would provide indirect evidence for these theories, but so far it hasn’t. (…)

[Marcelo Gleiser]: “We look out in the world and we see a very complicated pattern of stuff, and the notion of symmetry is an important way to make sense of the mess. The sun and moon are not perfect spheres, but that kind of approximation works incredibly well to simulate the behavior of these bodies.”

But the idea that what’s beautiful is true and that “symmetry rules,” as Gleiser puts it, “has been catapulted to an almost religious notion in the sciences,” he says. In his own book A Tear at the Edge of Creation (Free Press), Gleiser made a case for the beauty inherent in asymmetry—in the fact that neutrinos, the most common particles in the universe, spin only in one direction, for example, or that amino acids can be produced in laboratories in “left-handed” or “right-handed” forms, but only the “left-handed” form appears in nature. These are nature’s equivalent of Marilyn Monroe’s mole, attractive because of their lopsidedness, and Orrell also makes use of those examples.

But Weinberg, the Nobel-winning physicist at the University of Texas at Austin, counters: “Betting on beauty works remarkably well.” The Large Hadron Collider’s failure to produce evidence of supersymmetry is “disappointing,” he concedes, but he notes that plenty of elegant theories have waited years, even decades, for confirmation. Copernicus’s theory of a Sun-centered universe was developed entirely without experiment—he relied on Ptolemy’s data—and it was eventually embraced precisely because his description of planetary motion was simply more economical and elegant than those of his predecessors; it turned out to be true.

Closer to home, Weinberg says his own work on the weak nuclear force and electromagnetism had its roots in remarkably elegant, purely abstract theories of researchers who came before him, theories that, at first, seemed to be disproved by evidence but were too elegant to stop thinking about. (…)

To Orrell, it’s not just that many scientists are too enamored of beauty; it’s that their notion of beauty is ossified. It is “kind of clichéd,” Orrell says. “I find things like perfect symmetry uninspiring.” (In fairness, the Harvard theoretical physicist Lisa Randall has used the early unbalanced sculptures of Richard Serra as an example of how the asymmetrical can be as fascinating as the symmetrical, in art as in physics. She finds this yin-yang tension perfectly compatible with modern theorizing.)

Orrell also thinks it is more useful to study the behavior of complex systems rather than their constituent elements. (…)

Outside of physics, Orrell reframes complaints about “perfect-model syndrome” in aesthetic terms. Classical economists, for instance, treat humans as symmetrical in terms of what motivates decision-making. In contrast, behavioral economists are introducing asymmetry into that field by replacing Homo economicus with a quirkier, more idiosyncratic and human figure—an aesthetic revision, if you like. (…)

The broader issue, though, is whether science’s search for beautiful, enlightening patterns has reached a point of diminishing returns. If science hasn’t yet hit that point, might it be approaching it? The search for symmetry in nature has had so many successes, observes Stephon Alexander, a Dartmouth physicist, that “there is a danger of forgetting that nature is the one that decides where that game ends.”

Christopher Shea, American writer and editor, Is Scientific Truth Always Beautiful?, The Chronicle of Higher Education, Jan 28, 2013.

The Asymmetry of Life

Image courtesy of Ben Lansky

"Look into a mirror and you’ll simultaneously see the familiar and the alien: an image of you, but with left and right reversed.

Left-right inequality has significance far beyond that of mirror images, touching on the heart of existence itself. From subatomic physics to life, nature prefers asymmetry to symmetry. There are no equal liberties where neutrinos and proteins are concerned. In the case of neutrinos, particles that spill out of the sun’s nuclear furnace and pass through you by the trillions every second, only leftward-spinning ones exist. Why? No one really knows.

Proteins are long chains of amino acids that can be either left- or right-handed. Here, handedness has to do with how these molecules interact with polarized light, rotating it either to the left or to the right. When synthesized in the lab, amino acids come out fifty-fifty. In living beings, however, all proteins are made of left-handed amino acids. And all sugars in RNA and DNA are right-handed. Life is fundamentally asymmetric.

Is the handedness of life, its chirality (think chiromancer, which means “palm reader”), linked to its origins some 3.5 billion years ago, or did it develop after life was well on its way? If one traces life’s origins from its earliest stages, it’s hard to see how life began without molecular building blocks that were “chirally pure,” consisting solely of left- or right-handed molecules. Indeed, many models show how chirally pure amino acids may link to form precursors of the first protein-like chains. But what could have selected left-handed over right-handed amino acids?

My group’s research suggests that early Earth’s violent environmental upheavals caused many episodes of chiral flip-flopping. The observed left-handedness of terrestrial amino acids is probably a local fluke. Elsewhere in the universe, perhaps even on other planets and moons of our solar system, amino acids may be right-handed. But only sampling such material from many different planetary platforms will determine whether, on balance, biology is left-handed, right-handed, or ambidextrous.”

Marcelo Gleiser, The Asymmetry of Life, Seed Magazine, Sep 7, 2010.

"One of the deepest consequences of symmetries of any kind is their relationship with conservation laws. Every symmetry in a physical system, be it balls rolling down planes, cars moving on roads, planets orbiting the Sun, a photon hitting an electron, or the expanding Universe, is related to a conserved quantity, a quantity that remains unchanged in the course of time. In particular, external (spatial and temporal) symmetries are related to the conservation of momentum and energy, respectively: the total energy and momentum of a system that is temporally and spatially symmetric remains unchanged.

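[The symmetry-conservation link Gleiser describes above is Noether’s theorem. For reference, a standard textbook statement, in notation that is ours rather than Gleiser’s: if a Lagrangian $L(q, \dot{q}, t)$ is invariant under a continuous transformation $q \to q + \epsilon K(q)$, then the quantity

$$Q = \frac{\partial L}{\partial \dot{q}} \, K(q)$$

is conserved, i.e. $dQ/dt = 0$. Invariance under translation in space ($K = 1$) gives conservation of momentum, $p = \partial L / \partial \dot{q}$; invariance under translation in time gives conservation of the energy $E = \dot{q} \, \partial L / \partial \dot{q} - L$.]
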
The elementary particles of matter live in a reality very different from ours. The signature property of their world is change: particles can morph into one another, changing their identities. […] One of the greatest triumphs of twentieth-century particle physics was the discovery of the rules dictating the many metamorphoses of matter particles and the symmetry principles behind them. One of its greatest surprises was the realization that some of the symmetries are violated and that these violations have very deep consequences. (…) p.27

Even though matter and antimatter appear on equal footing in the equations describing relativistic particles, antimatter occurs only rarely. […] Somehow, during its infancy, the cosmos selected matter over antimatter. This imperfection is the single most important factor dictating our existence. (…)

Back to the early cosmos: had there been an equal quantity of antimatter particles around, they would have annihilated the corresponding particles of matter and all that would be left would be lots of gamma-ray radiation and some leftover protons and antiprotons in equal amounts. Definitely not our Universe. The tiny initial excess of matter particles is enough to explain the overwhelming excess of matter over antimatter in today’s Universe. The existence of matter, the stuff we and everything else are made of, depends on a primordial imperfection, the matter-antimatter asymmetry. (…) p.29.

We have seen how the weak interactions violate a series of internal symmetries: charge conjugation, parity, and even the combination of the two. The consequences of these violations are deeply related to our existence: they set the arrow of time at the microscopic level, providing a viable mechanism to generate the excess of matter over antimatter. […] The message from modern particle physics and cosmology is clear: we are the products of imperfections in Nature. (…)

It is not symmetry and perfection that should be our guiding principle, as it has been for millennia. We don’t have to look for the mind of God in Nature and try to express it through our equations. The science we create is just that, our creation. Wonderful as it is, it is always limited, it is always constrained by what we know of the world. […] The notion that there is a well-defined hypermathematical structure that determines all there is in the cosmos is a Platonic delusion with no relationship to physical reality. (…) p. 35.

The critics of this idea miss the fact that a meaningless cosmos that produced humans (and possibly other intelligences) will never be meaningless to them (or to the other intelligences). To exist in a purposeless Universe is even more meaningful than to exist as the result of some kind of mysterious cosmic plan. Why? Because it elevates the emergence of life and mind to a rare event, as opposed to a ubiquitous and premeditated one. For millennia, we believed that God (or gods) protected us from extinction, that we were chosen to be here and thus safe from ultimate destruction. […]

When science proposes that the cosmos has a sense of purpose wherein life is a premeditated outcome of natural events, a similar safety blanket mechanism is at play: if life fails here, it will succeed elsewhere. We don’t really need to preserve it. To the contrary, I will argue that unless we accept our fragility and cosmic loneliness, we will never act to protect what we have. (…)

The laws of physics and the laws of chemistry as presently understood have nothing to say about the emergence of life. As Paul Davies remarked in Cosmic Jackpot, notions of a life principle suffer from being teleologic, explaining life as the end goal, a purposeful cosmic strategy. The human mind, of course, would be the crown jewel of such creative drive. Once again we are “chosen” ones, a dangerous proposal. […] Arguments shifting the “mind of God” to the “mind of the cosmos” perpetuate our obsession with the notion of Oneness. Our existence need not be planned to be meaningful.” (…) p.49.

Unified theories, life principles, and self-aware universes are all expressions of our need to find a connection between who we are and the world we live in. I do not question the extreme importance of understanding the connection between man and the cosmos. But I do question that it has to derive from unifying principles. (…) p.50.

My point is that there is no Final Truth to be discovered, no grand plan behind creation. Science advances as new theories engulf or displace old ones. The growth is largely incremental, punctuated by unexpected, worldview-shattering discoveries about the workings of Nature. […]

Once we understand that science is the creation of human minds and not the pursuit of some divine plan (even if metaphorically) we shift the focus of our search for knowledge from the metaphysical to the concrete. (…) p.51.

For a clever fish, water is “just right” for it to swim in. Had it been too cold, it would freeze; too hot, it would boil. Surely the water temperature had to be just right for the fish to exist. “I’m very important. My existence cannot be an accident,” the proud fish would conclude. Well, he is not very important. He is just a clever fish. The ocean temperature is not being controlled with the purpose of making it possible for it to exist. Quite the opposite: the fish is fragile. A sudden or gradual temperature swing would kill it, as any trout fisherman knows. We so crave meaningful connections that we see them even when they are not there.

We are soulful creatures in a harsh cosmos. This, to me, is the essence of the human predicament. The gravest mistake we can make is to think that the cosmos has plans for us, that we are somehow special from a cosmic perspective. (…) p.52

We are witnessing the greatest mass extinction since the demise of the dinosaurs 65 million years ago. The difference is that for the first time in history, humans, and not physical causes, are the perpetrators. […] Life recovered from the previous five mass extinctions because the physical causes eventually ceased to act. Unless we understand what is happening and start acting together as a species we may end up carving the path toward our own destruction. (…)” p.56

Marcelo Gleiser is the Appleton Professor of Natural Philosophy at Dartmouth College, A Tear at the Edge of Creation, Free Press, 2010.

See also:

Symmetry in Physics - Bibliography - PhilPapers
The Concept of Laws. The special status of the laws of mathematics and physics, Lapidarium notes
Universe tag on Lapidarium notes

Jan 27th, Sun

Daniel C. Dennett on an attempt to understand the mind; autonomic neurons, culture and computational architecture


"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and is still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
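
[A minimal sketch for readers who want the mechanics of the unit Dennett is recalling; the Python below is illustrative, with names and thresholds chosen here rather than taken from the talk. A McCulloch-Pitts unit sums weighted binary inputs (excitatory +1, inhibitory -1) and fires when the sum reaches a threshold; single units already give Boolean gates, the building block behind the proof that nets of such neurons can compute any Boolean function.]

# Toy McCulloch-Pitts neuron: binary inputs, fixed weights
# (+1 excitatory, -1 inhibitory), and a hard firing threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Single units suffice for the basic logic gates:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(1) == 0 and NOT(0) == 1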

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now, we know why it doesn’t work pretty well. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendant of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals even, even more risky than the brains of chimpanzees, and that this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

'Culture creates a whole new biosphere'

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there’s opportunities that don’t exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at the Society for Neuroscience meeting in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s eye view and establish what role, if any, is there for memes or something like memes and what are the other forces that are operating. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned, historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and prepare them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this, very little deliberate intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language if you don’t have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

“Moving Naturalism Forward” was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you have a question about the economic situation in Greece, for instance, you check it out with somebody whose opinion on that we think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so, on matters of science, a lot of people have their pastors as their experts. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: “Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way fewer people get killed, fewer people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is that we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect it? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

Daniel C. Dennett is University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University, The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: “Natural selection is not gene-centrist, nor is biology all about genes; our comprehending minds are a result of our fast-evolving culture. Words are memes that can be spoken, and words are the best example of memes. Words have a genealogy, and it’s easier to trace the evolution of a single word than the evolution of a language.” (…)

I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate to each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes

Sep
9th
Sun
permalink

Philosophy vs science: which can answer the big questions of life?


"In the eighteenth century, philosophers considered the whole of human knowledge, including science, to be their field and discussed questions such as: did the universe have a beginning? However, in the nineteenth and twentieth centuries, science became too technical and mathematical for the philosophers, or anyone else except a few specialists. Philosophers reduced the scope of their inquiries so much that Wittgenstein, the most famous philosopher of this century, said, “The sole remaining task for philosophy is the analysis of language.” (…)

However, if we do discover a complete theory, it should in time be understandable in broad principle by everyone, not just a few scientists. Then we shall all, philosophers, scientists, and just ordinary people, be able to take part in the discussion of the question of why it is that we and the universe exist.”

Stephen Hawking, A Brief History of Time, Bantam Dell Publishing Group, 1988.

"Science is what you know, philosophy is what you don’t know"
Bertrand Russell

"Every science begins as philosophy and ends as art; it arises in hypothesis and flows into achievement."

Will Durant, American writer, historian, and philosopher (1885-1981), The Pleasures of Philosophy, 1929.

"Getting to your question of morality, for example, science provides the basis for moral decisions, which are sensible only if they are based on reason, which is itself based on empirical evidence. Without some knowledge of the consequences of actions, which must be based on empirical evidence, then I think “reason” alone is impotent. If I don’t know what my actions will produce, then I cannot make a sensible decision about whether they are moral or not. Ultimately, I think our understanding of neurobiology and evolutionary biology and psychology will reduce our understanding of morality to some well-defined biological constructs. (…)

Take homosexuality, for example. Iron Age scriptures might argue that homosexuality is “wrong”, but scientific discoveries about the frequency of homosexual behaviour in a variety of species tell us that it is completely natural in a rather fixed fraction of populations and that it has no apparent negative evolutionary impacts. This surely tells us that it is biologically based, not harmful and not innately “wrong”. In fact, I think you actually accede to this point about the impact of science when you argue that our research into non-human cognition has altered our view of ethics. (…)

"Why" questions

The [“why”] question is meaningless. (…) Not only has “why” become “how” but “why” no longer has any useful meaning, given that it presumes purpose for which there is no evidence. (…)

It is not a large leap of the imagination to expect that we will one day be able to break down those social actions, studied on a macro scale, to biological reactions at a micro scale.

In a purely practical sense, this may be computationally too difficult to do in the near future, and maybe it will always be so, but everything I know about the universe makes me timid to use the word always. What isn’t ruled out by the laws of physics is, in some sense, inevitable. So, right now, I cannot imagine that I could computationally determine the motion of all the particles in the room in which I am breathing air, so that I have to take average quantities and do statistics in order to compute physical behaviour. But, one day, who knows? (…)

We won’t really know the answer to whether science can yield a complete picture of reality, good at all levels, unless we try. (…) I continue to be surprised by the progress that is possible by continuing to ask questions of nature and let her answer through experiment. Stars are easier to understand than people, I expect, but that is what makes the enterprise so exciting.

The mysteries are what make life worth living and I would be sad if the day comes when we can no longer find answerable questions that have yet to be answered, and puzzles that can be solved. What surprises me is how we have become victims of our own success, at least in certain areas. When it comes to the universe as a whole, we may be frighteningly close to the limits of empirical inquiry as a guide to understanding. After that, we will have to rely on good ideas alone, and that is always much harder and less reliable.”

Lawrence Krauss, Canadian-American theoretical physicist who is a professor of physics, Foundation Professor of the School of Earth and Space Exploration, and director of the Origins Project at Arizona State University, Philosophy v science: which can answer the big questions of life?, The Observer, 9 Sept 2012

[This post will be gradually expanded…]

See also:

Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense
David Deutsch: A new way to explain explanation
Galileo and the relationship between the humanities and the sciences
Will Durant, The Pleasures of Philosophy

Jul
21st
Sat
permalink

What Neuroscience Tells Us About Morality: 'Morality is a form of decision-making, and is based on emotions, not logic'


“Morality is not the product of a mythical pure reason divorced from natural selection and the neural wiring that motivates the animal to sociability. It emerges from the human brain and its responses to real human needs, desires, and social experience; it depends on innate emotional responses, on reward circuitry that allows pleasure and fear to be associated with certain conditions, on cortical networks, hormones and neuropeptides. Its cognitive underpinnings owe more to case-based reasoning than to conformity to rules.”

Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in John Bickle, The Oxford Handbook of Philosophy and Neuroscience, Chapter 16 "Inference to the best decision", Oxford Handbooks, 2009, p.419.

"Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind."

Patricia Smith Churchland, introductory message at her homepage at the University of California, San Diego.

"Morality is a form of decision-making, and is based on emotions, not logic."

Jonah Lehrer, cited in delancey place, 2009

"Philosophers must take account of neuroscience in their investigations.

While [Patricia S.] Churchland's intellectual opponents over the years have suggested that you can understand the “software” of thinking, independently of the “hardware”—the brain structure and neuronal firings—that produced it, she has responded that this metaphor doesn't work with the brain: Hardware and software are intertwined to such an extent that all philosophy must be “neurophilosophy.” There’s no other way.

Churchland, professor emerita of philosophy at the University of California at San Diego, has been best known for her work on the nature of consciousness. But now, with a new book, Braintrust: What Neuroscience Tells Us About Morality (Princeton University Press), she is taking her perspective into fresh terrain: ethics. And the story she tells about morality is, as you’d expect, heavily biological, emphasizing the role of the peptide oxytocin, as well as related neurochemicals.

Oxytocin’s primary purpose appears to be in solidifying the bond between mother and infant, but Churchland argues—drawing on the work of biologists—that there are significant spillover effects: Bonds of empathy lubricated by oxytocin expand to include, first, more distant kin and then other members of one’s in-group. (Another neurochemical, arginine vasopressin, plays a related role, as do endogenous opiates, which reinforce the appeal of cooperation by making it feel good.)

The biological picture contains other elements, of course, notably our large prefrontal cortexes, which help us to take stock of situations in ways that lower animals, driven by “fight or flight” impulses, cannot. But oxytocin and its cousin-compounds ground the human capacity for empathy. (When she learned of oxytocin’s power, Churchland writes in Braintrust, she thought: “This, perhaps, Hume might accept as the germ of ‘moral sentiment.’”)

From there, culture and society begin to make their presence felt, shaping larger moral systems: tit-for-tat retaliation helps keep freeloaders and abusers of empathic understanding in line. Adults pass along the rules for acceptable behavior—which is not to say “just” behavior, in any transcendent sense—to their children. Institutional structures arise to enforce norms among strangers within a culture, who can’t be expected to automatically trust each other.

These rules and institutions, crucially, will vary from place to place, and over time. “Some cultures accept infanticide for the disabled or unwanted,” she writes, without judgment. “Others consider it morally abhorrent; some consider a mouthful of the killed enemy’s flesh a requirement for a courageous warrior, others consider it barbaric.”

Hers is a bottom-up, biological story, but, in her telling, it also has implications for ethical theory. Morality turns out to be not a quest for overarching principles but rather a process and practice not very different from negotiating our way through day-to-day social life. Brain scans, she points out, show little to no difference between how the brain works when solving social problems and how it works when solving ethical dilemmas. (…)

[Churchland’s account] meshes, she thinks, with Aristotle’s argument that morality is not about rule-making but instead about the cultivation of moral sentiment through experience, training, and the following of role models. The biological story also confirms, she thinks, David Hume’s assertion that reason and the emotions cannot be disentangled. This view stands in sharp contrast to those philosophers who argue that instinctual reactions must be scrutinized by reason. The villains of her books are philosophical system-builders—whether that means Jeremy Bentham, with his ideas about maximizing aggregate utility (“the greatest good for the greatest number”), or Immanuel Kant, with his categorical imperatives (never lie!), or John Rawls, erector of A Theory of Justice.

Churchland thinks the search for what she invariably calls “exceptionless rules” has deformed modern moral philosophy. “There have been a lot of interesting attempts, and interesting insights, but the target is like perpetual youth or a perpetual-motion machine. You’re not going to find an exceptionless rule,” she says. “What seems more likely is that there is a basic platform that people share and that things shape themselves based on that platform, and based on ecology, and on certain needs and certain traditions.”

The upshot of that approach? “Sometimes there isn’t an answer in the moral domain, and sometimes we have to agree to disagree, and come together and arrive at a good solution about what we will live with.”

Owen Flanagan Jr., a professor of philosophy and neurobiology at Duke University and a friend of Churchland’s, adds, “There’s a long tradition in philosophy that morality is based on rule-following, or on intuitions that only specially positioned people can have. One of her main points is that that is just a completely wrong picture of the genealogical or descriptive story. The first thing to do is to emphasize our continuity with the animals.” In fact, Churchland believes that primates and even some birds have a moral sense, as she defines it, because they, too, are social problem-solvers.

Recognizing our continuity with a specific species of animal was a turning point in her thinking about morality, in recognizing that it could be tied to the hard and fast. “It all changed when I learned about the prairie voles,” she says—surely not a phrase John Rawls ever uttered.

She told the story at the natural-history museum, in late March. Montane voles and prairie voles are so similar “that naifs like me can’t tell them apart,” she told a standing-room-only audience (younger and hipper than the museum’s usual patrons—the word “neuroscience” these days is like catnip). But prairie voles mate for life, and montane voles do not. Among prairie voles, the males not only share parenting duties, they will even lick and nurture pups that aren’t their own. By contrast, male montane voles do not actively parent even their own offspring. What accounts for the difference? Researchers have found that the prairie voles, the sociable ones, have greater numbers of oxytocin receptors in certain regions of the brain. (And prairie voles that have had their oxytocin receptors blocked will not pair-bond.)

"As a philosopher, I was stunned," Churchland said, archly. "I thought that monogamous pair-bonding was something one determined for oneself, with a high level of consideration and maybe some Kantian reasoning thrown in. It turns out it is mediated by biology in a very real way.”

The biologist Sue Carter, now at the University of Illinois at Chicago, did some of the seminal work on voles, but oxytocin research on humans is now extensive as well. In a study of subjects playing a lab-based cooperative game in which the greatest benefits to the two players would come if the first (the “investor”) gave a significant amount of money to the second (the “trustee”), subjects who had oxytocin sprayed into their noses donated more than twice as often as a control group, giving nearly one-fifth more each time.
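[The passage doesn’t spell out the game’s payoff rules. The design it describes resembles the canonical trust game of Berg, Dickhaut, and McCabe (1995), in which the experimenter multiplies whatever the investor sends, typically tripling it. A minimal Python sketch under that assumption; the endowment and multiplier below are hypothetical, not taken from the study:]

ENDOWMENT = 12   # units the investor starts with (hypothetical)
MULTIPLIER = 3   # the experimenter multiplies whatever is sent (assumed)

def payoffs(sent, returned_fraction):
    """Return (investor_payoff, trustee_payoff) for one round."""
    assert 0 <= sent <= ENDOWMENT and 0.0 <= returned_fraction <= 1.0
    pot = sent * MULTIPLIER                # trustee receives the multiplied transfer
    returned = pot * returned_fraction     # trustee sends some of it back
    return ENDOWMENT - sent + returned, pot - returned

print(payoffs(sent=12, returned_fraction=0.5))  # full trust, even split: (18.0, 18.0)
print(payoffs(sent=0, returned_fraction=0.0))   # no trust at all: (12, 0)

[Full trust followed by an even split leaves both players better off than no trust, which is why the amount sent is read as a measure of trust.]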

Paul Zak, an economist at Claremont Graduate University, was an author of that study, as well as others that Churchland cites. He is working on a book called “The Moral Molecule” and describes himself as “in exactly the same camp” as Churchland.

“Oxytocin works on the level of emotion,” he says. “You just get the feeling of right and wrong. It is less precise than a Kantian system, but it’s consistent with our evolved physiology as social creatures.”

The City University of New York Graduate Center philosopher Jesse Prinz, who appeared with Churchland at a Columbia University event the night after her museum lecture, has mostly praise for Churchland’s latest offering. “If you look at a lot of the work that’s been done on scientific approaches to morality—books written for a lay audience—it’s been about evolutionary psychology. And what we get again and again is a story about the importance of evolved tendencies to be altruistic. That’s a report on a particular pattern of behavior, and an evolutionary story to explain the behavior. But it’s not an account of the underlying mechanism. The idea that science has moved to a point where we can see two animals working together toward a collective end and know the brain mechanism that allows that is an extraordinary achievement.”

Nevertheless, he says, how to move from the possibility of collective action to “the specific human institution of moral rules is a bit of connective tissue that she isn’t giving us.”

Indeed, that’s one of the most striking aspects of Braintrust. After Churchland establishes the existence of a platform for moral decision-making, she describes the process through which moral decisions come to be made, but she says little about their content—why one path might be better than another. She offers the following description of a typical “moral” scenario. A farmer sees a deer breaching his neighbor’s fence and eating his apples while the neighbor is away. The farmer will not consult a Kantian rule book before deciding whether to help, she writes, but instead will weigh an array of factors: Would I want my neighbor to help me? Does my culture find such assistance praiseworthy or condescending? Am I faced with any pressing emergencies on my own farm? Churchland describes this process of moral decision-making as being driven by “constraint satisfaction.”
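[The farmer’s weighing of factors can be made concrete in a toy sketch of “constraint satisfaction” as weighted-factor integration. Churchland proposes no such explicit model; every factor, weight, and threshold below is invented purely to mirror her example:]

# Toy illustration: a decision emerging from several soft constraints.
# All numbers are hypothetical; nothing in Churchland's book fixes them.
factors = {
    "would I want my neighbor's help in his place": (+0.8, 0.9),  # (weight, strength)
    "my culture praises this kind of assistance":   (+0.5, 0.6),
    "pressing emergencies on my own farm":          (-0.7, 0.3),
}

score = sum(weight * strength for weight, strength in factors.values())
decision = "chase the deer off" if score > 0.5 else "stay put"
print(f"score = {score:.2f} -> {decision}")   # score = 0.81 -> chase the deer off

[The point is only that a suitable action can fall out of many interacting considerations, with no single exceptionless rule doing the work.]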

"What exactly constraint satisfaction is in neurobiological terms we do not yet understand,” she writes, “but roughly speaking it involves various factors with various weights and probabilities interacting so as to produce a suitable solution to a question.”

"Various" factors with "various" weights? Is that not a little vague? But Duke’s Owen Flanagan Jr. defends this highly pragmatic view of morality. "Where we get a lot of pushback from philosophers is that they’ll say, ‘If you go this naturalistic route that Flanagan and Churchland go, then you make ethics merely a theory of prudence.’ And the answer is, Yeah, you kind of do that. Morality doesn’t become any different than deciding what kind of bridge to build across a river. The reason we both think it makes sense is that the other stories”—that morality comes from God, or from philosophical intuition—”are just so implausible.”

Flanagan also thinks Churchland’s approach leads to a “more democratic” morality. "It’s ordinary people discussing the best thing to do in a given situation, given all the best information available at the moment." Churchland herself often underscores that democratic impulse, drawing on her own biography. She grew up on a farm, in the Okanagan Valley, in British Columbia. Speaking of her onetime neighbors, she says: "I got as much wisdom from some of those old farmers as I ever got from a seminar on moral philosophy.”

If building a bridge is the topic up for discussion, however, one can assume that most people think getting across the water is a sound idea. Yet mainstream philosophers object that such a sense of shared purpose cannot always be assumed in moral questions—and that therefore the analogy fails. (…)

Kahane says the complexity of human life demands a more intense and systematic analysis of moral questions than the average citizen might be capable of, at least if she’s limited to the basic tool kit of social skills.

Peter Railton, a philosophy professor at the University of Michigan at Ann Arbor, agrees. Our intuitions about how to get along with other people may have been shaped by our interactions within small groups (and between small groups). But we don’t live in small groups anymore, so we need some procedures through which we leverage our social skills into uncharted areas—and that is what the traditional academic philosophers, whom Churchland mostly rejects, work on. What are our obligations to future generations (concerning climate change, say)? What do we owe poor people on the other side of the globe (whom we might never have heard of, in our evolutionary past)?

For a more rudimentary example, consider that evolution quite likely trained us to treat “out groups” as our enemy. Philosophical argument, Railton says, can give reasons why members of the out-group are not, in fact, the malign and unusual creatures that we might instinctively think they are; we can thereby expand our circle of empathy.

Churchland’s response is that someone is indeed likely to have the insight that constant war against the out-group hurts both sides’ interests, but she thinks a politician, an economist, or a farmer-citizen is as likely to have that insight as a professional philosopher. (…)

But isn’t she, right there, sneaking in some moral principles that have nothing to do with oxytocin, namely the primacy of liberty over equality? In our interviews, she described Singer’s worldview as, in an important sense, unnatural. Applying the same standard to distant foreigners as we do to our own kith and kin runs counter to our most fundamental biological impulses.

But Oxford’s Kahane offers a counterargument: “‘Are humans capable of utilitarianism?’ is not a question that is answered by neuroscience,” he says. “We just need to test if people are able to live like that. Science may explain whether it is common for us to do, but that’s very different from saying what our limits are.”

Indeed, Peter Singer lives (more or less) the way he preaches, and chapters of an organization called Giving What We Can, whose members pledge to give a large portion of their earnings to charity, have popped up on several campuses. “If I can prevent hundreds of people from dying while still having the things that make life meaningful to me, that strikes me as a good idea that doesn’t go against ‘paradigmatically good sense’ or anything,” says Nick Beckstead, a fourth-year graduate student in philosophy and a founder of the group’s Rutgers chapter.

Another target in Churchland’s book is Jonathan Haidt, the University of Virginia psychologist who thinks he has identified several universal “foundations” of moral thought: protection of society’s vulnerable; fairness; loyalty to the in-group; respect for authority; and the importance of purity (a sanitary concern that evolves into the cultural ideal of sanctity). That strikes her as a nice list, but no more—a random collection of moral qualities that isn’t at all rooted in biology. During her museum talk, she described Haidt’s theory as a classic just-so story. “Maybe in the ’70s, when evolutionary psychology was just becoming a thing, you could get away with saying”—here she adopted a flighty, sing-song voice—“‘It could have been, out there on the veldt, in Africa, 250,000 years ago, that these were traits that were selected,’” she said. “But today you need evidence, actually.” (…)

The element of cultural relativism also remains somewhat mysterious in Churchland’s writings on morality. In some ways, her project dovetails with that of Sam Harris, the “New Atheist” (and neuroscience Ph.D.) who believes reason and neuroscience can replace woolly armchair philosophy and religion as guides to morality. But her defense of some practices of primitive tribes—including infanticide (in the context of scarcity), as well as the seizing of enemy women in raids to keep up the stock of mates—as “moral” within their own context seems the opposite of his approach.

I reminded Churchland, who has served on panels with Harris, that he likes to put academics on the spot by asking if they think a practice such as the early-19th-century Hindu tradition of burning widows on their husbands’ funeral pyres was objectively wrong.

So did she think so? First, she got irritated: “I don’t know why you’re asking that.” But, yes, she finally said, she does think that practice objectively wrong. “But frankly I don’t know enough about their values, and why they have that tradition, and I’m betting that Sam doesn’t either.”

"The example I like to use," she said, "rather than using an example from some other culture and just laughing at it, is the example from our own country, where it seems to me that the right to buy assault weapons really does not work for the well-being of most people. And I think that’s an objective matter."

At times, Churchland seems just to want to retreat from moral philosophical debate back to the pure science. “Really,” she said, “what I’m interested in is the biological platform. Then it’s an open question how we attack more complex problems of social life.”

— Christopher Shea writing about Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in Rule Breaker, The Chronicle of Higher Education, June 12, 2011. (Illustration: attributed to xkcd)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Sam Harris on the ‘selfish gene’ and moral behavior
Sam Harris on the moral formula: How facts inform our ethics
Morality tag on Lapidarium

May
27th
Sun
permalink

Science Is Not About Certainty. Science is about overcoming our own ideas and a continuous challenge of common sense


“At the core of all well-founded belief lies belief that is unfounded.”

Ludwig Wittgenstein, On Certainty, #253, J. & J. Harper Editions, New York, 1969. 

"The value of philosophy is, in fact, to be sought largely in its very uncertainty. The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find that even the most everyday things lead to problems to which only very incomplete answers can be given.

Philosophy, though unable to tell us with certainty what is the true answer to the doubts it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never traveled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.”

Bertrand Russell, The Problems of Philosophy (1912), Cosimo, Inc., 2010, pp. 113-114.

We say that we have some theories about science. Science is about the hypothetico-deductive method: we have observations, we have data, and the data need to be organized in theories. So then we have theories. These theories are suggested or produced from the data somehow, then checked in terms of the data. Then time passes, we have more data, theories evolve, we throw away a theory, and we find another theory which is better, a better understanding of the data, and so on and so forth. This is the standard idea of how science works, which implies that science is about its empirical content; the true, interesting, relevant content of science is its empirical content. Since theories change, the empirical content is the solid part of what science is. Now, there’s something disturbing for me, as a theoretical scientist, in all this. I feel that something is missing. Some part of the story is missing. I’ve been asking myself: what is it that is missing? (…)

This is particularly relevant today in science, and particularly in physics, because if I’m allowed to be polemical, in my field, in fundamental theoretical physics, we have been failing for 30 years. There hasn’t been a major success in theoretical physics in the last few decades, after the standard model, somehow. Of course there are ideas. These ideas might turn out to be right. Loop quantum gravity might turn out to be right, or not. String theory might turn out to be right, or not. But we don’t know, and for the moment, nature has not said yes in any sense.

I suspect that this might be in part because of the wrong ideas we have about science, and because methodologically we are doing something wrong, at least in theoretical physics, and perhaps also in other sciences.

Anaximander. Changing something in the conceptual structure that we have in grasping reality

Let me tell you a story to explain what I mean. The story is an old story about my latest, greatest passion outside theoretical physics: an ancient scientist, or so I would say, even if he is often called a philosopher: Anaximander. I am fascinated by this character, Anaximander. I went into understanding what he did, and to me he’s a scientist. He did something that is very typical of science, and which shows some aspect of what science is. So what is the story with Anaximander? It’s the following, in brief:

Until him, all the civilizations of the planet, everybody around the world, thought that the structure of the world was: the sky over our heads and the earth under our feet. There’s an up and a down, heavy things fall from up to down, and that’s reality. Reality is oriented up and down; heaven’s up and earth is down. Then comes Anaximander and says: no, it is something else. ‘The earth is a finite body that floats in space, without falling, and the sky is not just over our head; it is all around.’

How did he get there? Well, obviously he looks at the sky: you see things going around, the stars, the heavens, the moon, the planets; everything moves around and keeps turning around us. It’s sort of reasonable to think that below us is nothing, so it seems simple to get to this conclusion. Except that nobody else got to this conclusion. In centuries and centuries of ancient civilizations, nobody got there. The Chinese didn’t get there until the 17th century, when Matteo Ricci and the Jesuits went to China and told them, in spite of centuries of the Imperial Astronomical Institute studying the sky. The Indians only learned this when the Greeks arrived to tell them. In Africa, in America, in Australia… nobody else got to this simple realization that the sky is not just over our head, it’s also under our feet. Why?

Because obviously it’s easy to suggest that the earth sort of floats in nothing, but then you have to answer the question: why doesn’t it fall? The genius of Anaximander was to answer this question. We know his answer, from Aristotle and from other people. In fact, he doesn’t answer this question; he questions the question. He says: why should it fall? Things fall toward the earth; why should the earth itself fall? In other words, he realizes that the obvious generalization, from every small heavy object falling to the earth itself falling, might be wrong. He proposes an alternative, which is that objects fall towards the earth, which means that the direction of falling changes around the earth.

This means that up and down become notions relative to the earth. This is rather simple for us to figure out now: we’ve learned the idea. But think of the difficulty, when we were children, of understanding how people in Sydney could live upside-down. It clearly requires changing something structural in the basic language in terms of which we understand the world. In other words, up and down mean something different before and after Anaximander’s revolution.

He understands something about reality, essentially by changing something in the conceptual structure that we have for grasping reality. In doing so, he is not doing a theory; he understands something which, in some precise sense, is forever. It’s some uncovered truth, which to a large extent is a negative truth. He frees us from a prejudice, a prejudice that was ingrained in the conceptual structure we had for thinking about space.

Why do I think this is interesting? Because I think that this is what happens at every major step, at least in physics; in fact, I think this is what happens at every step, even the not-so-major ones. When I give a thesis problem to students, most of the time the problem I give is not solved as posed. It’s not solved because, most of the time, the solution is not found within the question; it is found by questioning the question itself. It is realizing that the way the problem was formulated contained some implicit prejudiced assumption, and that this assumption was the one to be dropped.

If this is so, the idea that we have data and theories, and then we have a rational agent that constructs theories from the data using his rationality, his mind, his intelligence, his conceptual structure, and juggles theories and data, doesn’t make any sense, because what is being challenged at every step is not the theory, it’s the conceptual structure used in constructing theories and interpreting the data. In other words, it is not by changing theories that we go ahead, but by changing the way we think about the world.

The prototype of this way of thinking, I think the example that makes it clearest, is Einstein’s discovery of special relativity. On the one hand there was Newtonian mechanics, which was extremely successful with its empirical content. On the other hand there was Maxwell’s theory, with its empirical content, which was extremely successful too. But there was a contradiction between the two.

If Einstein had gone to school to learn what science is, if he had read Kuhn and the philosophers explaining what science is, if he had been any one of my colleagues today who are looking for a solution to the big problems of physics, what would he have done?

He would say, okay, the empirical content is the strong part of the theory. The idea in classical mechanics that velocity is relative: forget about it. The Maxwell equations: forget about them. Because this is the volatile part of our knowledge. The theories themselves have to be changed, okay? What we keep solid is the data, and we modify the theory so that it hangs together coherently, and coherently with the data.

That’s not at all what Einstein does. Einstein does the contrary. He takes the theories very seriously. He believes the theories. He says: look, classical mechanics is so successful that when it says that velocity is relative, we should take that seriously, and we should believe it. And the Maxwell equations are so successful that we should believe the Maxwell equations. He has so much trust in the theories themselves, in the qualitative content of the theories, that qualitative content that Kuhn says changes all the time, that we have learned not to take too seriously; he has so much faith and confidence in it that he is ready to do what? To force coherence between the two theories by challenging something completely different, something that is in our head: how we think about time.

He’s changing something in common sense, something about the elementary structure in terms of which we think of the world, on the basis of the trust of the past results in physics. This is exactly the opposite of what is done today in physics. If you read Physical Review today, it’s all about theories that challenge completely and deeply the content of previous theories: so theories in which there is no Lorentz invariance, which are not relativistic, which are not general covariant, quantum mechanics might be wrong…

Every physicist today is immediately ready to say, okay, all of our past knowledge about the world is wrong. Let’s randomly pick some new idea. I suspect that this is not a small component of the long-term lack of success of theoretical physics. You understand something new about the world, either from new data that arrive, or from thinking deeply on what we have already learned about the world. But thinking means also accepting what we’ve learned, challenging what we think, and knowing that in some of the things that we think, there may be something to modify and to change.

Science is not about the data, but about the tools that we use

What, then, are the aspects of doing science that I think are under-evaluated and should come up front? First, science is about constructing visions of the world, about rearranging our conceptual structure, about creating new concepts which were not there before, and even more, about changing, challenging the a priori assumptions that we have. So it has nothing to do with the assembly of data and the way of organizing the assembly of data. It has everything to do with the way we think, and with our mental vision of the world. Science is a process in which we keep exploring ways of thinking, and changing our image of the world, our vision of the world, to find new ones that work a little bit better.

In doing that, what we have learned in the past is our main ingredient, especially the negative things we have learned. If we have learned that the earth is not flat, there will be no theory in the future in which the earth is flat. If we have learned that the earth is not at the center of the universe, that’s forever. We’re not going to go back on this. If we have learned that simultaneity is relative, with Einstein, we’re not going back to absolute simultaneity, as many people think. This means that when an experiment measures neutrinos going faster than light, we should be very suspicious, and of course check and see whether there is something very deep happening. But it is absurd that everybody jumps and says okay, Einstein was wrong, just because of one little anomaly that seems to show so. It never works like that in science.

The past knowledge is always with us, and it’s our main ingredient for understanding. The theoretical ideas which are based on ‘let’s imagine that this may happen because why not’ are not taking us anywhere.

I seem to be saying two things that contradict each other. On the one hand, we trust the knowledge; on the other hand, we are always ready to modify, in depth, part of our conceptual structure about the world. There is no contradiction between the two, because the idea of a contradiction comes from what I see as the deepest misunderstanding about science: the idea that science is about certainty.

Science is not about certainty. Science is about finding the most reliable way of thinking at the present level of knowledge. Science is extremely reliable; it’s not certain. In fact, not only is it not certain, but it’s the lack of certainty that grounds it. Scientific ideas are credible not because they are sure, but because they are the ones that have survived all the possible past critiques, and they are the most credible because they were put on the table for everybody’s criticism.

The very expression ‘scientifically proven’ is a contradiction in terms. There is nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, we have prejudices. We have ingrained prejudices. In our conceptual structure for grasping reality there might be something not appropriate, something we may have to revise to understand better. So at any moment, we have a vision of reality that is effective, it’s good, it’s the best we have found so far. It’s the most credible we have found so far; it’s mostly correct.

But at the same time it’s not taken as certain, and any element of it is a priori open for revision. Why is there this continuous need for revision? On the one hand, we have this brain, and it has evolved for millions of years. It has evolved for running on the savannah, for running after deer to eat them, and for trying not to be eaten by lions. We have a brain that is tuned to meters and hours, which is not particularly well-tuned to think about atoms and galaxies. So we have to get out of that.

At the same time, I think we have been selected, in going out of the forest, perhaps in going out of Africa, for being as smart as possible, as animals that escape lions. This continuous effort to change our own way of thinking, to readapt, is very much a part of our nature. In changing our minds we are not moving away from nature; it is our natural history itself that continues to change us.

If I can make a final comment about this way of thinking about science, or two final comments: One is that science is not about the data. The empirical content of a scientific theory is not what is relevant. The data serve to suggest the theory, to confirm the theory, to disconfirm the theory, to prove the theory wrong. But these are the tools that we use. What interests us is the content of the theory. What interests us is what the theory says about the world. General relativity says space-time is curved. The data of general relativity are that Mercury’s perihelion moves 43 arcseconds per century with respect to the value computed with Newtonian mechanics.
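[As a check on that number: the textbook general-relativistic formula for the perihelion advance per orbit, Δφ = 6πGM/(c²a(1−e²)), with rounded orbital constants for Mercury, reproduces the famous figure. A quick sketch in Python:]

import math

# Perihelion advance per orbit predicted by general relativity:
#   delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2))
G, M_sun, c = 6.674e-11, 1.989e30, 2.998e8    # SI units
a, e, period_days = 5.79e10, 0.2056, 87.969   # Mercury's orbit (rounded)

delta_phi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = math.degrees(delta_phi * orbits_per_century) * 3600
print(f"{arcsec_per_century:.1f} arcseconds per century")      # ~43.0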

Who cares? Who cares about these details? If that was the content of general relativity, general relativity would be boring. General relativity is interesting not because of its data, but because it tells us that as far as we know today, the best way of conceptualizing space-time is as a curved object. It gives us a better way of grasping reality than Newtonian mechanics, because it tells us that there can be black holes, because it tells us there’s a Big Bang. This is the content of the scientific theory.

All living beings on earth have common ancestors. This is the content of a scientific theory, not the specific data used to check the theory. So the focus of scientific thinking, I believe, should be on the content of the theories, the past theories, the previous theories: trying to see what they concretely hold, and what they suggest to us about changing our conceptual frame itself.

Scientific thinking vs religious thinking

The final consideration regards just one comment about this understanding of science and this long conflict that has crossed the centuries between scientific thinking and religious thinking. I think often it is misunderstood. The question is, why can’t we live happily together, and why can’t people pray to their gods and study the universe without this continuous clash? I think that this continuous clash is a little bit unavoidable, for the opposite reason from the one often presented. It’s unavoidable not because science pretends to know the answers. But it’s the other way around, because if scientific thinking is this, then it is a constant reminder to ourselves that we don’t know the answers.

In religious thinking, often this is unacceptable. What is unacceptable is not a scientist who says ‘I know,’ but a scientist who says ‘I don’t know, and how could you know?’ At least in many religions, or in some ways of being religious, there is the idea that there should be truths one can hold that are not to be questioned. That way of thinking is naturally disturbed by a way of thinking which is based on continuous revision, not just of the theories but even of the core ground of the way in which we think.

The core of science is not certainty, it’s continuous uncertainty

So, summarizing: I think science is not about the data; it’s not about the empirical content; it’s about our vision of the world. It’s about overcoming our own ideas, and about going beyond common sense continuously. Science is a continuous challenge of common sense, and the core of science is not certainty, it’s continuous uncertainty. I would even say its joy is taking what we think, being aware that in everything we think there is probably still an enormous amount of prejudice and mistake, and trying to learn to look a little bit larger, knowing that there is always a larger point of view to be expected in the future.

We are very far from the final theory of the world, in my field, in physics; I think extremely far. Every hope of saying, well, we are almost there, we’ve solved all the problems, is nonsense. And we are very wrong when we discard the value of theories like quantum mechanics, general relativity or special relativity, for that matter, and throw them away to try something else randomly. On the basis of what we know, we should learn something more, and at the same time we should take our vision for what it is: a vision that is the best vision we have, but one to keep continuously evolving. (…)

String theory's a beautiful theory. It might work, but I suspect it's not going to work. I suspect it's not going to work because it's not sufficiently grounded in everything we know so far about the world, and especially in what I think or perceive as the main physical content of general relativity.  

String theory is a big piece of guesswork, and I think physics has never been guesswork; it has been a way of unlearning how to think about something, and learning how to think a little bit differently by reading the novelty in the details of what we already know. Copernicus didn’t have any new data, any major new idea; he just took Ptolemy and, in the details of Ptolemy, in the fact that the equants, the epicycles, the deferents were in certain proportions to one another, he read a way to look at the same construction from a slightly different perspective and discover that the earth is not the center of the universe.

Einstein, as I said, took seriously Maxwell’s theory and classical mechanics to get special relativity. So loop quantum gravity is an attempt to do the same thing: take seriously general relativity, take seriously quantum mechanics, and bring them together, even if this means a theory where there’s no time, no fundamental time, so that we have to rethink the world without basic time. The theory, on the one hand, is very conservative, because it’s based on what we know. But it’s totally radical, because it forces us to change something big in our way of thinking.

String theorists think differently. They say well, let’s go out to infinity, where somehow the full covariance of general relativity is not there. There we know what is time, we know what is space, because we’re at asymptotic distances, at large distances. The theory’s wilder, more different, more new, but in my opinion, it’s more based on the old conceptual structure. It’s attached to the old conceptual structure, and not attached to the novel content of the theories that have proven empirically successful. That’s how my way of reading science matches with the specifics of the research work that I do, and specifically of loop quantum gravity.

Of course we don’t know. I want to be very clear. I think that string theory is a great attempt to go ahead, done by great people. My only polemical attitude toward string theory comes when I hear, though I hear it less and less now, ‘oh, we know the solution already; it’s certainly string theory.’ That is certainly wrong and false. What is true is that it is a good set of ideas; loop quantum gravity is another good set of ideas. We have to wait and see which of the theories turns out to work, and ultimately to be empirically confirmed.

Should a scientist think about philosophy, or not?

This may take me to another point: should a scientist think about philosophy, or not? It’s sort of the fashion today to discard philosophy, to say now that we have science, we don’t need philosophy. I find this attitude very naïve, for two reasons. One is historical. Just look back. Heisenberg would never have done quantum mechanics without being full of philosophy. Einstein would never have done relativity without having read all the philosophers and having a head full of philosophy. Galileo would never have done what he did without having a head full of Plato. Newton thought of himself as a philosopher, started by discussing with Descartes, and had strong philosophical ideas.

But even Maxwell, Boltzmann, I mean, all the major steps of science in the past were taken by people who were very aware of the methodological, fundamental, even metaphysical questions being posed. When Heisenberg does quantum mechanics, he is in a completely philosophical frame of mind. He says that in classical mechanics there’s something philosophically wrong: there’s not enough emphasis on empiricism. It is exactly this philosophical reading that allows him to construct this fantastically new physical theory, this scientific theory, which is quantum mechanics.

Paul Dirac and Richard Feynman. From The Strangest Man. Photograph by A. John Coleman, courtesy AIP Emilio Segre Visual Archives, Physics Today collection

The breakdown of this strict dialogue between philosophers and scientists is very recent; somehow it came after the war, in the second half of the 20th century. It has worked because in the first half of the 20th century, people were so smart. Einstein and Heisenberg and Dirac and company put together relativity and quantum theory and did all the conceptual work. The physics of the second half of the century has been, in a sense, a physics of application of the great ideas of the people of the ’30s, of the Einsteins and the Heisenbergs.

When you want to apply these ideas, when you do atomic physics, you need less conceptual thinking. But now we are back to the basics, in a sense. When we do quantum gravity, it’s not just application. As for the scientists who say ‘I don’t care about philosophy,’ it’s not true that they don’t care about philosophy, because they have a philosophy. They are using a philosophy of science. They are applying a methodology. They have a head full of ideas about the philosophy they’re using; they’re just not aware of them, and they take them for granted, as if this were obvious and clear, when it’s far from obvious and clear. They are just taking a position without knowing that there are many other possibilities around that might work much better, and might be more interesting for them.

I think there is narrow-mindedness, if I might say so, in many of my colleague scientists who don’t want to learn what is being said in the philosophy of science. There is also narrow-mindedness in a lot of areas of philosophy and the humanities, whose practitioners don’t want to learn about science, which is even more narrow-minded. Somehow, cultures should reach out to one another and enlarge themselves. I’m pushing at an open door if I say it here, but restricting our vision of reality today to just the core content of science or the core content of the humanities is being blind to the complexity of reality, which we can grasp from a number of points of view. These points of view can talk to one another enormously, and I believe they can teach one another enormously.”

Carlo Rovelli, Italian theoretical physicist working on quantum gravity and on the foundations of spacetime physics. He is a professor of physics at the University of the Mediterranean in Marseille, France, and a member of the Institut Universitaire de France. To see the whole video and read the transcript, click Science Is Not About Certainty: A Philosophy Of Physics, Edge, May 24, 2012. (Illustration source)

See also:

Raphael Bousso: Thinking About the Universe on the Larger Scales
David Deutsch: A new way to explain explanation
Galileo and the relationship between the humanities and the sciences
The Relativity of Truth - a brief résumé, Lapidarium notes
Philosophy vs science: which can answer the big questions of life?
☞ ‘Cognition, perception, relativity’ tag on Lapidarium notes

Apr
25th
Wed
permalink

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism



Waking Life is an American animated film (rotoscoped from live action), directed by Richard Linklater and released in 2001. The entire film was shot on digital video, and then a team of artists, using computers, drew stylized lines and colors over each frame.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
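A rough back-of-the-envelope check of this arithmetic (a sketch only: Kurzweil’s own model compounds continuously and lets the acceleration itself grow, which roughly doubles the total): if the rate of progress doubles every decade, then decade k of the century delivers 10·2^k years of progress at the year-2000 rate, so the century totals

\[
\sum_{k=0}^{9} 10 \cdot 2^{k} \;=\; 10\,(2^{10}-1) \;=\; 10{,}230 \;\approx\; 10^{4}
\]

years of progress at today’s rate, within a factor of two of the 20,000-year figure quoted above.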

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to the agricultural revolution, the scientific revolution and the industrial revolution, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before, on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo that, in a way, could be imagined as an enormous instantaneous fulfillment of human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution”: Eamonn Healy speaks about telescopic evolution and the future of humanity from Brandon Sergent, Transcript.

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Mar
7th
Wed
permalink

Is The World An Idea?

Plato, Hulton Archive/Getty Images

Plato was the one who made the divide between the world of ideas and the world of the senses explicit. In his famous Allegory of the Cave, he imagined a group of prisoners who had been chained inside a cave all their lives; all they could see were shadows projected on a wall, which they conceived as their reality. Unbeknownst to them, a fire behind them illuminated objects and created the shadows they saw, which could be manipulated to deceive them. In contrast, the philosopher could see reality as it truly is, a manifestation of ideas freed from the deception of the senses. In other words, if we want to understand the true nature of reality, we shouldn’t rely on our senses; only ideas are truly pure, freed from the distortions caused by our limited perception of reality.

Plato thus elevated the human mind to a god-like status, given that it can find truth through reason, in particular through the rational construction of ideal “Forms,” which are the essence of all objects we see in reality. For example, all tables share the Form of “tableness,” even if every table is different. The Form is an ideal and, thus, a blueprint of perfection. If I ask you to imagine a circle, the image of a circle you hold in your head is the only perfect circle: any representation of that circle, on paper or on a blackboard, will be imperfect. To Plato, intelligence was the ability to grasp the world of Forms and thus come closer to truth.

Due to its connection with the search for truth, it’s no surprise that Plato’s ideas influenced both scientists and theologians. If the world is made out of Forms, say geometrical forms, reality may be described mathematically, combining the essential forms and their relations to describe the change we see in the world. Thus, by focusing on the essential elements of reality as mathematical objects and their relations we could, perhaps, grasp the ultimate nature of reality and so come closer to timeless truths.

The notion that mathematics is a portal to final truths holds tremendous intellectual appeal and has influenced some of the greatest names in the history of science, from Copernicus, Kepler, Newton, and Einstein to many present-day physicists searching for a final theory of nature based upon a geometrical scaffolding, such as superstring theories. (…)

Taken in context, we can see where modern scientific ideas that relate the ultimate nature of reality to geometry come from. If it’s not God the Geometer anymore, Man the Geometer persists. That this vision offers a major drive to human creativity is undeniable.

We do imagine the universe in our minds, with our minds, and many scientific successes are a byproduct of this vision. Perhaps we should take Nicholas of Cusa’s advice to heart and remember that whatever we achieve with our minds will be an expression of our own creativity, having little or nothing to do with ultimate truths.”

Marcelo Gleiser, Brazilian Professor of Natural Philosophy, Physics and Astronomy at Dartmouth College, USA, Is The World An Idea?, NPR, March 7, 2012.

See also:

Cognition, perception, relativity tag on Lapidarium notes

Jan
22nd
Sun
permalink

What Happened Before the Big Bang? The New Philosophy of Cosmology


Tim Maudlin: “There are problems that are fairly specific to cosmology. Standard cosmology, or what was considered standard cosmology twenty years ago, led people to conclude that the universe that we see around us began in a big bang, or put another way, in some very hot, very dense state. And if you think about the characteristics of that state, in order to explain the evolution of the universe, that state had to be a very low entropy state, and there’s a line of thought that says that anything that is very low entropy is in some sense very improbable or unlikely. And if you carry that line of thought forward, you then say “Well gee, you’re telling me the universe began in some extremely unlikely or improbable state” and you wonder is there any explanation for that. Is there any principle that you can use to account for the big bang state?
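(A standard statistical-mechanics gloss, not Maudlin’s own words, makes the step from “very low entropy” to “very improbable” concrete. By Boltzmann’s relation

\[
S = k_B \ln W \quad\Longrightarrow\quad W = e^{S/k_B},
\]

a very low-entropy state corresponds to a vanishingly small number W of microstates, so under a uniform measure over states a low-entropy beginning looks overwhelmingly unlikely, which is exactly the intuition being examined here.)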

This question of accounting for what we call the “big bang state” — the search for a physical explanation of it — is probably the most important question within the philosophy of cosmology, and there are a couple different lines of thought about it. One that’s becoming more and more prevalent in the physics community is the idea that the big bang state itself arose out of some previous condition, and that therefore there might be an explanation of it in terms of the previously existing dynamics by which it came about. There are other ideas, for instance that maybe there might be special sorts of laws, or special sorts of explanatory principles, that would apply uniquely to the initial state of the universe.

One common strategy for thinking about this is to suggest that what we used to call the whole universe is just a small part of everything there is, and that we live in a kind of bubble universe, a small region of something much larger. And the beginning of this region, what we call the big bang, came about by some physical process, from something before it, and that we happen to find ourselves in this region because this is a region that can support life. The idea being that there are lots of these bubble universes, maybe an infinite number of bubble universes, all very different from one another. Part of the explanation, what’s called the anthropic principle, says, “Well now, if that’s the case, we as living beings will certainly find ourselves in one of those bubbles that happens to support living beings.” That gives you a kind of account for why the universe we see around us has certain properties. (…)

Newton would call what he was doing natural philosophy; that’s actually the name of his book: Mathematical Principles of Natural Philosophy. Philosophy, traditionally, is what everybody thought they were doing. It’s what Aristotle thought he was doing when he wrote his book called Physics. So it’s not as if there’s this big gap between physical inquiry and philosophical inquiry. They’re both interested in the world on a very general scale, and the group of people who work on the foundations of physics is about equally divided between people who live in philosophy departments, people who live in physics departments, and people who live in mathematics departments.

Q: In May of last year Stephen Hawking gave a talk for Google in which he said that philosophy was dead, and that it was dead because it had failed to keep up with science, and in particular physics. Is he wrong or is he describing a failure of philosophy that your project hopes to address?

Maudlin: Hawking is a brilliant man, but he’s not an expert in what’s going on in philosophy, evidently. Over the past thirty years the philosophy of physics has become seamlessly integrated with the foundations of physics work done by actual physicists, so the situation is actually the exact opposite of what he describes. I think he just doesn’t know what he’s talking about. I mean there’s no reason why he should. Why should he spend a lot of time reading the philosophy of physics? I’m sure it’s very difficult for him to do. But I think he’s just … uninformed. (…)

Q: Do you think that physics has neglected some of these foundational questions as it has become, increasingly, a kind of engine for the applied sciences, focusing on the manipulation, rather than say, the explanation, of the physical world? 

Maudlin: Look, physics has definitely avoided what were traditionally considered to be foundational physical questions, but the reason for that goes back to the foundation of quantum mechanics. The problem is that quantum mechanics was developed as a mathematical tool. Physicists understood how to use it as a tool for making predictions, but without an agreement or understanding about what it was telling us about the physical world. And that’s very clear when you look at any of the foundational discussions. This is what Einstein was upset about; this is what Schrödinger was upset about.

Quantum mechanics was merely a calculational technique that was not well understood as a physical theory. Bohr and Heisenberg tried to argue that asking for a clear physical theory was something you shouldn’t do anymore. That it was something outmoded. And they were wrong, Bohr and Heisenberg were wrong about that. But the effect of it was to shut down perfectly legitimate physics questions within the physics community for about half a century. And now we’re coming out of that, fortunately.

Q: And what’s driving the renaissance?

Maudlin: Well, the questions never went away. There were always people who were willing to ask them. Probably the greatest physicist in the last half of the twentieth century, who pressed very hard on these questions, was John Stewart Bell. So you can’t suppress it forever, it will always bubble up. It came back because people became less and less willing to simply say, “Well, Bohr told us not to ask those questions,” which is sort of a ridiculous thing to say.

Q: Are the topics that have scientists completely flustered especially fertile ground for philosophers? For example I’ve been doing a ton of research for a piece about the James Webb Space Telescope, the successor to the Hubble Space Telescope, and none of the astronomers I’ve talked to seem to have a clue as to how to use it to solve the mystery of dark energy. Is there, or will there be, a philosophy of dark energy in the same way that a body of philosophy seems to have flowered around the mysteries of quantum mechanics?

Maudlin: There will be. There can be a philosophy of anything really, but it’s perhaps not as fancy as you’re making it out. The basic philosophical question, going back to Plato, is “What is x?” What is virtue? What is justice? What is matter? What is time? You can ask that about dark energy - what is it? And it’s a perfectly good question.

There are different ways of thinking about the phenomena which we attribute to dark energy. Some ways of thinking about it say that what you’re really doing is adjusting the laws of nature themselves. Some other ways of thinking about it suggest that you’ve discovered a component or constituent of nature that we need to understand better, and seek the source of. So, the question — What is this thing fundamentally? — is a philosophical question, and is a fundamental physical question, and will lead to interesting avenues of inquiry. (…)

Q: One example of philosophy of cosmology that seems to have trickled out to the layman is the idea of fine tuning - the notion that in the set of all possible physics, the subset that permits the evolution of life is very small, and that from this it is possible to conclude that the universe is either one of a large number of universes, a multiverse, or that perhaps some agent has fine tuned the universe with the expectation that it generate life. Do you expect that idea to have staying power, and if not what are some of the compelling arguments against it?

Maudlin: A lot of attention has been given to the fine tuning argument. Let me just say first of all, that the fine tuning argument as you state it, which is a perfectly correct statement of it, depends upon making judgments about the likelihood, or probability of something. Like, “how likely is it that the mass of the electron would be related to the mass of the proton in a certain way?” Now, one can first be a little puzzled by what you mean by “how likely” or “probable” something like that is. You can ask how likely it is that I’ll roll double sixes when I throw dice, but we understand the way you get a handle on the use of probabilities in that instance. It’s not as clear how you even make judgments like that about the likelihood of the various constants of nature (and so on) that are usually referred to in the fine tuning argument.
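(To make the contrast concrete, with numbers of my own: for dice, the measure is fixed by the symmetry of the setup and the independence of the two throws,

\[
P(\text{double six}) \;=\; \frac{1}{6} \times \frac{1}{6} \;=\; \frac{1}{36},
\]

whereas for “the mass of the electron” no ensemble of repeated trials and no equal-likelihood principle is given, so there is no comparable measure standing behind the question.)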

Now let me say one more thing about fine tuning. I talk to physicists a lot, and none of the physicists I talk to want to rely on the fine tuning argument to argue for a cosmology that has lots of bubble universes, or lots of worlds. What they want to argue is that this arises naturally from an analysis of the fundamental physics, that the fundamental physics, quite apart from any cosmological considerations, will give you a mechanism by which these worlds will be produced, and a mechanism by which different worlds will have different constants, or different laws, and so on.  If that’s true, then if there are enough of these worlds, it will be likely that some of them have the right combination of constants to permit life. But their arguments tend not to be “we have to believe in these many worlds to solve the fine tuning problem,” they tend to be “these many worlds are generated by physics we have other reasons for believing in.”

If we give up on that, and it turns out there aren’t these many worlds, that physics is unable to generate them, then it’s not that the only option is that there was some intelligent designer. It would be a terrible mistake to think that those are the only two ways things could go. You would have to again think hard about what you mean by probability, and about what sorts of explanations there might be. Part of the problem is that right now there are just way too many freely adjustable parameters in physics. Everybody agrees about that. There seem to be many things we call constants of nature that you could imagine setting at different values, and most physicists think there shouldn’t be that many, that many of them are related to one another.

Physicists think that at the end of the day there should be one complete equation to describe all physics, because any two physical systems interact and physics has to tell them what to do. And physicists generally like to have only a few constants, or parameters of nature. This is what Einstein meant when he famously said he wanted to understand what kind of choices God had —using his metaphor— how free his choices were in creating the universe, which is just asking how many freely adjustable parameters there are. Physicists tend to prefer theories that reduce that number, and as you reduce it, the problem of fine tuning tends to go away. But, again, this is just stuff we don’t understand well enough yet.

Q: I know that the nature of time is considered to be an especially tricky problem for physics, one that physicists seem prepared, or even eager, to hand over to philosophers. Why is that?

Maudlin: That’s a very interesting question, and we could have a long conversation about that. I’m not sure it’s accurate to say that physicists want to hand time over to philosophers. Some physicists are very adamant about wanting to say things about it; Sean Carroll for example is very adamant about saying that time is real. You have others saying that time is just an illusion, that there isn’t really a direction of time, and so forth. I myself think that all of the reasons that lead people to say things like that have very little merit, and that people have just been misled, largely by mistaking the mathematics they use to describe reality for reality itself. If you think that mathematical objects are not in time, and mathematical objects don’t change — which is perfectly true — and then you’re always using mathematical objects to describe the world, you could easily fall into the idea that the world itself doesn’t change, because your representations of it don’t.

There are other, technical reasons that people have thought that you don’t need a direction of time, or that physics doesn’t postulate a direction of time. My own view is that none of those arguments are very good. To the question as to why a physicist would want to hand time over to philosophers, the answer would be that physicists for almost a hundred years have been dissuaded from trying to think about fundamental questions. I think most physicists would quite rightly say “I don’t have the tools to answer a question like ‘what is time?’ - I have the tools to solve a differential equation.” The asking of fundamental physical questions is just not part of the training of a physicist anymore.

Q: I recently came across a paper about Fermi’s Paradox and Self-Replicating Probes, and while it had kind of a science fiction tone to it, it occurred to me as I was reading it that philosophers might be uniquely suited to speculating about, or at least evaluating the probabilistic arguments for the existence of life elsewhere in the universe. Do you expect philosophers of cosmology to enter into those debates, or will the discipline confine itself to issues that emerge directly from physics?

Maudlin: This is really a physical question. If you think of life, of intelligent life, it is, among other things, a physical phenomenon — it occurs when the physical conditions are right. And so the question of how likely it is that life will emerge, and how frequently it will emerge, does connect up to physics, and does connect up to cosmology, because when you’re asking how likely it is that somewhere there’s life, you’re talking about the broad scope of the physical universe. And philosophers do tend to be pretty well schooled in certain kinds of probabilistic analysis, and so it may come up. I wouldn’t rule it in or rule it out.

I will make one comment about these kinds of arguments which seems to me to somehow have eluded everyone. When people make these probabilistic equations, like the Drake Equation, which you’re familiar with — they introduce variables for the frequency of earth-like planets, for the evolution of life on those planets, and so on. The question remains as to how often, after life evolves, you’ll have intelligent life capable of making technology.

What people haven’t seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It’s not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that’s not true.

Obviously it doesn’t matter that much if you’re a beetle, that you be really smart. If it were, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there’s a high probability that evolution on another planet would lead to technological intelligence. There is just too much we don’t know.”

Tim Maudlin (B.A. Yale, Physics and Philosophy; Ph.D. Pittsburgh, History and Philosophy of Science), interviewed by Ross Andersen, What Happened Before the Big Bang? The New Philosophy of Cosmology, The Atlantic, Jan 2012.
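For reference, the Drake Equation that comes up above is simply a product of factors, which makes Maudlin’s complaint easy to see in code. A minimal sketch in Python, where every parameter value is an illustrative placeholder rather than a measurement:

    # Drake equation: N = R* x fp x ne x fl x fi x fc x L
    # N = number of detectable civilizations in the galaxy.
    # Every value below is an illustrative placeholder, not a measurement.

    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        """Multiply the factors of the Drake equation."""
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    n = drake(
        r_star=1.0,    # stars formed per year (assumed)
        f_p=0.5,       # fraction of stars with planets (assumed)
        n_e=2.0,       # habitable planets per such system (assumed)
        f_l=0.1,       # fraction on which life evolves (assumed)
        f_i=0.01,      # fraction that develops intelligence (assumed)
        f_c=0.1,       # fraction producing detectable technology (assumed)
        lifetime=1e4,  # years a civilization stays detectable (assumed)
    )
    print(n)  # 1.0 with these guesses; vary f_i or f_c and N swings wildly

The arithmetic is trivial; Maudlin’s point is that the later factors, above all the probability of technological intelligence, are guesses made from a sample size of one.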

Illustrations: 1 - Cambridge Digital Gallery Newton Collection, 2 - Aristotle, Ptolemy, and Copernicus discussing astronomy, published in 1632, Library of Congress.

See also:

The Concept of Laws. The special status of the laws of mathematics and physics
Raphael Bousso: Thinking About the Universe on the Larger Scales
Stephen Hawking on the universe’s origin
Universe tag on Lapidarium notes
Universe tag on Lapidarium

Jan
21st
Sat
permalink

'Human beings are learning machines,' says philosopher (nature vs. nurture)


"The point is that in scientific writing (…) suggest a very inflexible view of human nature, that we are determined by our biology. From my perspective the most interesting thing about the human species is our plasticity, our flexibility. (…)

It is striking in general that human beings mistake the cultural for the natural; you see it in many domains. Take moral values. We assume we have moral instincts: we just know that certain things are right and certain things are wrong. When we encounter people whose values differ from ours we think they must be corrupted or in some sense morally deformed. But this is clearly an instance where we mistake our deeply inculcated preferences for natural law. (…)

Q: At what point with morality does biology stop and culture begin?

One important innate contribution to morality is emotions. An aggressive response to an attack is not learned, it is biological. The question is how emotions that are designed to protect each of us as individuals get extended into generalised rules that spread within a group. One factor may be imitation. Human beings are great imitative learners. Rules that spread in a family can be calibrated across a whole village, leading to conformity in the group and a genuine system of morality.

Nativists will say that morality can emerge without instruction. But with innate domains, there isn’t much need for instruction, whereas in the moral domain, instruction is extensive. Kids learn through incessant correction. Between the ages of 2 and 10, parents correct their children’s behaviour every 8 minutes or so of waking life. In due course, our little monsters become little angels, more or less. This gives us reason to think morality is learned.

Q: One of the strongest arguments for innateness comes from linguists such as Noam Chomsky, who argue that humans are born with the basic rules of grammar already in place. But you disagree with them?

Chomsky singularly deserves credit for giving rise to the new cognitive sciences of the mind. He was instrumental in helping us think about the mind as a kind of machine. He has made some very compelling arguments to explain why everybody with an intact brain speaks grammatically even though children are not explicitly taught the rules of grammar.

But over the past 10 years we have started to see powerful evidence that children might learn language statistically, by unconsciously tabulating patterns in the sentences they hear and using these to generalise to new cases. Children might learn language effortlessly not because they possess innate grammatical rules, but because statistical learning is something we all do incessantly and automatically. The brain is designed to pick up on patterns of all kinds.
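As a toy illustration of the statistical tabulation described above, here is a minimal sketch in Python; the three-sentence corpus and the bigram model are stand-ins of mine, not a claim about the child’s actual mechanism:

    from collections import Counter, defaultdict

    # A toy "heard corpus": a stand-in for the sentences a child hears.
    corpus = [
        "the dog runs",
        "the cat runs",
        "the dog sleeps",
    ]

    # Tabulate word-to-word transitions (bigrams).
    transitions = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            transitions[a][b] += 1

    # Generalise to new cases: what tends to follow "the"?
    print(transitions["the"].most_common())  # [('dog', 2), ('cat', 1)]

Picking up such transition statistics requires no explicitly taught rules at all, which is the force of the argument.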

Q: How hard has it been to put this alternative view on the table, given how Chomskyan thought has dominated the debate in recent years?

Chomsky’s views about language are so deeply ingrained among academics that those who take statistical learning seriously are subject to a kind of ridicule. There is very little tolerance for dissent. This has been somewhat limiting, but there is a new generation of linguists who are taking the alternative very seriously, and it will probably become a very dominant position in the next generation.

Q: You describe yourself as an “unabashed empiricist” who favours nurture over nature. How did you come to this position, given that on many issues the evidence is still not definitive either way?

Actually I think the debate has been settled. You only have to stroll down the street to see that human beings are learning machines. Sure, for any given capacity the debate over biology versus culture will take time to resolve. But if you compare us with other species, our degree of variation is just so extraordinary and so obvious that we know prior to doing any science that human beings are special in this regard, and that a tremendous amount of what we do is as a result of learning. So empiricism should be the default position. The rest is just working out the details of how all this learning takes place.

Q: What are the implications of an empirical understanding of human nature for the way we go about our lives? How should it affect the way we behave?

In general, we need to cultivate a respect for difference. We need to appreciate that people with different values to us are not simply evil or ignorant, and that just like us they are products of socialisation. This should lead to an increase in international understanding and respect. We also need to understand that group differences in performance are not necessarily biologically fixed. For example, when we see women performing less well than men in mathematics, we should not assume that this is because of a difference in biology.

Q: How much has cognitive science contributed to our understanding of what it is to be human, traditionally a philosophical question?

Cognitive science is in the business of settling long-running philosophical debates on human nature, innate knowledge and other issues. The fact that these theories have been churning about for a couple of millennia without any consensus is evidence that philosophical methods are better at posing questions than answering them. Philosophy tells us what is possible, and science tells us what is true.

Cognitive science has transformed philosophy. At the beginning of the 20th century, philosophers changed their methodology quite dramatically by adopting logic. There has been an equally important revolution in 21st-century philosophy in that philosophers are turning to the empirical sciences and to some extent conducting experimental work themselves to settle old questions. As a philosopher, I hardly go a week without conducting an experiment.

My whole working day has changed because of the infusion of science.”

Jesse Prinz is a distinguished professor of philosophy at the City University of New York, specialising in the philosophy of psychology. He is a pioneer in experimental philosophy, using findings from the cognitive sciences, anthropology and other fields to develop empiricist theories of how the mind works. He is the author of The Emotional Construction of Morals (Oxford University Press, 2007), Gut Reactions (OUP, 2004), Furnishing the Mind (MIT Press, 2002) and Beyond Human Nature: How culture and experience make us who we are. 'Human beings are learning machines,' says philosopher, New Scientist, Jan 20, 2012. (Illustration: Fritz Kahn, British Library)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Human Nature. Sapolsky, Maté, Wilkinson, Gilligan, discuss on human behavior and the nature vs. nurture debate

Dec
27th
Tue
permalink

Do thoughts have a language of their own? The language of thought hypothesis

The language of thought drawing by Robert Horvitz

"We dissect nature along lines laid down by our native languages. The categories and types that we isolate from the world of phenomena we do not find there because they stare the observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds-and this means largely by the linguistic systems of our minds.”

Benjamin Lee Whorf, American linguist (1897-1941), 1956, p. 213, cited in Does language determine thought? Boroditsky’s (2001) research on Chinese speakers’ conception of time (pdf)

"The mind thinks its thoughts in ‘Mentalese,’ codes them in the localnatural language, and then transmits them (say, by speaking them out loud) to the hearer. The hearer has a Cryptographer in his head too, of course, who thereupon proceeds to decode the ‘message.’ In this picture, natural language, far from being essential to thought, is merely a vehicle for the communication of thought.”

Hilary Putnam, American philosopher, mathematician and computer scientist, Representation and reality, A Bradford Book, 1991, p. 10-11.

"According to one school of philosophy, our thoughts have a language-like structure that is independent of natural language: this is what students of language call the language of thought (LOT) hypothesis. According to the LOT hypothesis, it is because human thoughts already have a linguistic structure that the emergence of common, natural languages was possible in the first place. (…)

Many - perhaps most - psychologists end up concluding that ordinary people do not use the rules of logic in everyday life.

There is an alternative way of seeing this: that there is a language of thought, and that it has a more logical form than ordinary natural language. This view has an added bonus: it tells us that, if you want to express yourself more clearly and more effectively in natural language, then you should express yourself in a form that is closer to computational logic - and therefore closer to the language of thought. Dry legalese never looked so good.”

Robert Kowalski, British logician and computer scientist, Do thoughts have a language of their own?, New Scientist, 8 Dec 2011

"In philosophy of mind, the language of thought hypothesis (LOTH) put forward by American philosopher Jerry Fodor describes thoughts as represented in a “language” (sometimes known as mentalese) that allows complex thoughts to be built up by combining simpler thoughts in various ways. In its most basic form the theory states that thought follows the same rules as language: thought has syntax.

Using empirical data drawn from linguistics and cognitive science to describe mental representation from a philosophical vantage-point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only ‘remotely plausible’ when expressed as a system of representations that is “tokened” by a linguistic or semantic structure and operated upon by means of a combinatorial syntax. Linguistic tokens used in mental language describe elementary concepts which are operated upon by logical rules establishing causal connections to allow for complex thought. Syntax as well as semantics have a causal effect on the properties of this system of mental representations.

These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate. (…)

Some philosophers have argued that our public language is our mental language, that a person who speaks English thinks in English. Others contend that people who do not know a public language (e.g. babies, aphasics) can think, and that therefore some form of mentalese must be present innately. (…)

Tim Crane, in his book The Mechanical Mind, states that, while he agrees with Fodor, his reason is very different. A logical objection challenges LOTH’s explanation of how sentences in natural languages get their meaning. That is the view that “Snow is white” is TRUE if and only if P is TRUE in the LOT, where P means the same thing in LOT as “Snow is white” means in the natural language. Any symbol manipulation is in need of some way of deriving what those symbols mean. If the meaning of sentences is explained in terms of sentences in the LOT, then the meaning of sentences in LOT must get their meaning from somewhere else. There seems to be an infinite regress of sentences getting their meaning. Sentences in natural languages get their meaning from their users (speakers, writers).  Therefore sentences in mentalese must get their meaning from the way in which they are used by thinkers and so on ad infinitum. This regress is often called the homunculus regress.

Daniel Dennett accepts that homunculi may be explained by other homunculi and denies that this would yield an infinite regress of homunculi. Each explanatory homunculus is “stupider” or more basic than the homunculus it explains but this regress is not infinite but bottoms out at a basic level that is so simple that it does not need interpretation. John Searle points out that it still follows that the bottom-level homunculi are manipulating some sorts of symbols.

LOTH implies that the mind has some tacit knowledge of the logical rules of inference and the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning). If LOTH cannot show that the mind knows that it is following the particular set of rules in question then the mind is not computational because it is not governed by computational rules. Also, the apparent incompleteness of this set of rules in explaining behavior is pointed out. Many conscious beings behave in ways that are contrary to the rules of logic. Yet this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not act in accordance with this set of rules.”

Wiki
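The “combinatorial syntax” the hypothesis appeals to can be pictured with a toy data structure. A sketch in Python of compositionality in general, not of Fodor’s actual formalism:

    from dataclasses import dataclass

    # Toy "mentalese": complex thoughts built by combining simpler tokens.

    @dataclass
    class Concept:
        name: str

    @dataclass
    class Thought:
        predicate: Concept
        arguments: tuple

        def describe(self):
            args = ", ".join(a.name for a in self.arguments)
            return f"{self.predicate.name}({args})"

    # The same tokens recombine into distinct complex thoughts:
    # whoever can token LOVES(JOHN, MARY) can token LOVES(MARY, JOHN).
    john, mary, loves = Concept("JOHN"), Concept("MARY"), Concept("LOVES")
    print(Thought(loves, (john, mary)).describe())  # LOVES(JOHN, MARY)
    print(Thought(loves, (mary, john)).describe())  # LOVES(MARY, JOHN)

The systematicity on display, the same constituents entering different structured combinations, is exactly what the combinatorial syntax is invoked to explain.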

Inner Speech as a Language

"A definition of language is always, implicitly or explicitly, a definition of human beings in the world."

Raymond Williams, Welsh academic, novelist and critic (1921-1988)

"A set (finite or infinite) of sentences, each finite in length and constructed out of a finite set of elements."

Noam Chomsky, American linguist, philosopher, cognitive scientist

"People often talk silently to themselves, engaging in what is called inner speech, internal conversation, inner dialogue, self talk and so on. This seems to be an inherent characteristic of human beings, commented on as early as Plato, who regarded thought as inner speech. The American pragmatists thought the inner dialogue was the defining feature of the self. For them the self is an internal community or network, communicating within itself in a field of meaning.

The idea that ordinary language is the language of thought however is not the only linguistic theory of thought. Since Saint Augustine there has been the idea that thought is itself a language of pure abstractions. This “mental language,” as it was called, differs from ordinary language by consisting solely of meanings, i.e. as signifieds without signifiers, to use Saussure’s language (Ashworth 2003). This hypothesis peaked in the writings of William of Occam and declined when Hobbes introduced a purely computational, hedonistic theory of thought (Normore 2005).

A second competitor to the ordinary language theory of thought is the “mentalese” hypothesis of Noam Chomsky (1968) and Jerry Fodor (1975). This approach, which sometimes uses the computer as a metaphor for the mind, resembles the Scholastic’s theory in envisioning a purely abstract language of thought. Whatever processes of ordinary language might accompany it are viewed as epiphenomenal, gloss or what might be called “fluff.” Ordinary language, according to this view, is a pale shadow of the actual language of thought. In addition mentalese is regarded as both innate and unconscious. It is a faculty that is claimed to be present at birth and one which operates below the awareness of the mind.

There are then three language of thought hypotheses, the ordinary language or inner speech version, the now marginalized Augustine-Occam mental language and the computer-based, Chomsky-Fodor theory of mentalese. There seem to be no comparisons of the Scholastic and the mentalese theories except in Panaccio (1992, pp. 267–272). However there is a vigorous debate between the ordinary language theory and that of mentalese (for two collections see Carruthers and Boucher 1998 and Preston 1997). A major weak spot of mentalese is that, being unconscious, there is no empirical way of verifying it. The weak spot of the inner speech approach is that there are several examples of non-linguistic thought, e.g. in infants, animals, brain damaged people and ordinary people under conditions of high speed thought.

Still, all three of these language of thought hypotheses are alive and under discussion in contemporary thought. (…) [p. 319]

I will argue that inner speech is even more referential than outer speech in some respects, but also even more differential in other respects. In other words its semantic system is polarized between the differential and the referential.

Considering the peculiarities of inner speech, I think its vocabulary would be more differentially defined, i.e. more “structural”, than outer speech. First let me recall the special qualities of inner speech as silent, elliptical, embedded and egocentric. These qualities make it relatively private, both in the words and their meanings. And these privacy walls push things together, creating links and dependencies among the words.

Let us take the analogy of an intimate relationship, one that has some degree of deviance, with consequent secrecy. The mini culture of the relationship tends, due to secrecy, to be cut off from society at large. This culture gets isolated. There is the relationship time, the place, the transportation, the talk, the rituals, etc. The relationship elements are cut off from the outside world, and they inevitably share in that “relationship” feeling. They also imply each other, causally, sequentially, symbolically, etc. The relationship meanings are defined more differentially than, perhaps, items in a less deviant relationship. It is the privacy that melds things together.

This internal language though is not only solitary and private, it is also much more self styled than outer language. Ordinary language has a smoothed over or idealized version, which Saussure referred to as language or “langue.” And it also has a more stylized, idiosyncratic version. This is its spoken variety, which Saussure referred to as parole or speech. Parole is more heterogeneous than langue, given that the speaking process reflects the unique mentalities of individuals and sub-cultures.

But by the same logic inner speech is even more individualized and heterogeneous than outer speech. Your spoken or outer speech is somewhat different from mine, and both are different from purified or formalized language. But your inner speech, given its elliptical, embedded and egocentric qualities, is even more different from mine, and both are quite different from the outer langue. In other words the gap between outer langue and inner speech is greater than that between outer langue and outer speech.

The peculiarities of inner speech are so stitched into the psyche, so personality-dependent, that they differ considerably from person to person. This does not seem to be primarily a reference-driven variation, for everyone’s inner speech has roughly the same, generic world of reference. The variation in the internal dialogue is largely due to the personal qualities of the speaker, to that person’s particular ego needs and short cuts.

We are little gods in the world of inner speech. We are the only ones, we run the show, we are the boss. This world is almost a little insane, for it lacks the usual social controls, and we can be as bad or as goofy as we want. On the other hand inner speech does have a job to do, it has to steer us through the world. That function sets up outer limits, even though within those limits we have a free rein to construct this language as we like.

There are similarities to the idealist world view in inner speech. The philosophical idealists, especially Berkeley, reduced the outer world to some version of an inner world. They internalized the external, each doing it somewhat differently, as though it were all a dream. For them all speech would be inner, since there is no outer. And since everything would be radiating from the self, everything would be connected via the self.

The Saussurean theory of linguistic differences [pdf], whether Saussure actually held it or not, is very much like idealistic metaphysics. In both cases everything is dangling from the same string. And some kind of self is pulling the string. The late 19th century British idealists thought all of reality was in relationship, and given that they had only an inner world, they referred to these as “internal relations.”

Saussure used this same phrase, internal relations, to refer to the differences among signifiers and signifieds. And whether he was aligning himself with the idealists or not, there is a similarity between his self-enclosed linguistic world and that of the idealists. It is the denial of reference, of an external world, that underlies this similarity. For Saussure this denial is merely a theoretical move, an “as if” assumption, and not an assertion about the real world. The idealists said there actually was no external world, and Saussure said he would pretend, for methodological reasons, that there was no external world. But regardless of how they get there, they end up in the same place.

If there is no reference, no external world, then the only way language can be defined is internally, by a system of differences. Saussure’s purely differential theory of meaning follows from the loss of the referential. But if there is an external world, even for inner speech, then we are back to the dualistic semantic theory, i.e. to some sort of balance between referential and differential streams.

Although inner speech is not idealism, in some ways it seems to be a more differentially defined universe than outer speech. Linguistic context is even more important than in outer speech. One reason is that meaning is so condensed on the two axes. But a second is that inner language is so pervaded with emotion. We censor our emotions in ordinary interpersonal speech, hiding our fear, our shame, our jealousy, our gloating. It takes a while for little children to learn this, but when they grow up they are all, men and women alike, pretty good at it. Inner speech is another matter, for it is brutally honest. And its emotional life is anything goes. We can scream, whoop and holler to ourselves. Or we can sob on a wailing wall. In fact we probably emote more in inner speech to compensate for the restrictions on outer speech. Emotions pervade large stretches of inner speech, and they heighten the importance of internal relations.

The determinants of meaning in inner speech seem much more stark and unarguable than in outer speech. Inner speech is enclosed within us, and this seems to make it a more dense set of internal relations, both because of the intense privacy and the more spontaneous emotions. In these respects inner speech gives a rich example of Saussure’s differential meaning system.

On the other hand inner speech is also more obviously referential than outer speech. Ordinary speech is quite conventional or arbitrary, and when we say dog or apple pie, the sign has no resemblance to its object. In inner speech, though, the signs are often images of their objects, bearing an iconic or mirroring relation to them. In other words, as mentioned before, there can be a heavy dependency on sensory imagery in forming an internal sentence. (…)

In conclusion Saussure’s theory of semantics works well for some aspects of inner speech and quite poorly for others, i.e. the more referential ones. [signs of external objects, color coordination] (…) On the other hand inner speech is quite different from outer speech, and the Saussurean issues must be handled in special ways. Inner speech is only partially fitting to Saussure’s theories. And new ideas are needed to resolve Saussure’s questions. (…)

Saussure’s binaries were meant to simplify the study of language. The paradigmatic-syntagmatic distinction showed two axes of meaning, and it prepared the way for his differential theory of meaning. The history-systematics distinction was meant to justify the exclusion of history. The speech-language distinction was meant to get rid of speech. And the differential-referential distinction was meant to exclude reference. Saussure’s approach then is largely a pruning device which chopped off many traditional parts of linguistics.

My analysis suggests that this pruning apparatus does not work for inner speech. The two axes are useful but they do not prepare the way for the differential theory of meaning. History cannot be excluded, for it is too important for inner speech. Speech should be restored, and in fact langue applies only weakly to inner speech. And that capstone of Saussure and cultural studies, the differential theory of meaning, does not seem adequate for inner speech. Referential theory is also needed to make sense of its meaning system.

Ethnomethodology

Inner speech then is a distinct variation or dialect of ordinary language, and the characteristics I have pointed out seem to be central to its structure. (…)

Inner speech is quite similar to ethnomethodology in its use of short cuts and normalizing practices. Garfinkel (1967) and Cicourel (1974) discovered ethnomethodology by examining interpersonal or intersubjective communication. A great many economies and condensations of interpersonal conversation are similar to ones we use when we talk to ourselves. If I say to myself “shop on the way home,” this is a condensation of the fairly elaborate shopping list I mentioned earlier, but if I say to my wife “I’ll shop on the way home” she may understand something much like that same, implicit shopping list. In other words we are constantly using “etcetera clauses” to speed up our internal conversations. And, being both communicator and communicatee, we may understand these references even more accurately than we do in social conversations. (…)

The self is also a sort of family gathering with similar problems of maintaining and restoring solidarity. Much inner speech is a kind of Durkheimian self soothing ritual where we try to convince ourselves that everything’s fine, even when it is not. In this way we can comfort ourselves when we are frightened, restore some pride when we are ashamed, or find a silver lining when we are disappointed. Such expressions as “you can do it,” “you’re doing great,” and “this looks harder than it is” give us confidence and energy when the going is tough.

In sum inner speech helps one see the importance of ethnomethods. The fact that we engage in these practices in our deepest privacy shows they are rooted in our psychology as well as in our social life. And the fact that they run parallel in intra- and inter-subjective communication shows them to be a feature of communication as such.

Privacy

In philosophy Wittgenstein provoked a widespread and complex discussion of private language. By this he meant a language that is not only de facto but also inherently private. No one but the private language user would be able to fully understand it, even if the meanings were publicly available. To constitute a private language such a tongue would not need to be completely private. If only a single word or sentence were inherently private, it would qualify as a private language in Wittgenstein’s sense.

It seems to me inner speech is clearly a private language, at least in some of its utterances. This language is so rooted in the unique self that an eavesdropper, could there be one, would not fully understand it. It has so much of one’s person in it, a listener would have to be another you to follow it. And if someone invented a window into consciousness, a mind-reading machine, that could invade one’s privacy, would they be able to understand the, now revealed, inner speech? I think not. They might be able to understand most of the words, but the non-linguistic or imagistic elements would be too much a personal script to follow. If this eavesdropper watched you, including your consciousness, for your whole life, had access to your memory and knew your way of combining non-linguistic representations with words, they might have your code, but this is another way of saying they would be another you. In practical terms inner speech would be inaccessible in its meaning even if it were accessible in its signifying forms.

Of course this semantic privacy does not prevent one from describing one’s own inner speech to another, at least to a substantial extent. Something is lost all right in the translation from first to third person representations. When, in footnote 2, I talked about the inner speech cluster I called “Tom,” I obviously left out some of the affect and all of the sensory imagery. But I was still able to communicate the gist of it, in other words to transform first to third person meanings. So even though this is a private language it can to some extent be made public and used for research purposes.

The importance of private language is that it sheds light on what a human being is. We are inherently private animals, and we become more so the more self-aware and internally communicative we are. This zone of privacy may well be the foundation for the moral (and legal) need people have for privacy. In any case the hidden individuality or uniqueness of each human being is closely related to what the person says to him or her self.

Agency

One of the thorniest problems of the humanities and social sciences is human agency. Humans are the authors of their actions to a great extent, but the way this process works is difficult to understand. I would suggest that inner speech is both the locus and platform for agency.

Charles Sanders Peirce was under the impression that we guide our lives with inner speech. We choose internally in the zone of inner speech, and then we choose externally in the zone of practical action and the outer world. The first choice leads to the second choice. Peirce even thought we could make and break habits by first modelling them in our internal theater. Here we could visualize the performance of a particular action and also choose to perform this action. The visualization and the choice could give the energy for designing and moulding one’s life. (…)

More generally the self directing process, including planning, anticipating, rehearsing, etc. seems to be largely a product of inner speech. This includes both what one will do and how one will do it. Picturing one’s preferred action as the lesser evil or greater good, even if one fudges a bit on the facts, is probably also a powerful way of producing a given action, and possibly even a new habit. (…)

I showed that inner speech does not qualify as a public language, though it has a distinct structural profile as a semi-private language or perhaps as a dialect. This structure suggests the access points or research approaches that this language is amenable to. As examples of how this research might proceed I took a quick look at three issues: ethnomethodology, privacy and agency.”

Norbert Wiley, professor emeritus of Sociology at the University of Illinois Urbana-Champaign, Illinois, Visiting Scholar at the University of California, Berkeley. He is a prize-winning sociologist who has published on both the history and systematics of theory. To read the full essay click Inner Speech as a Language: A Saussurean Inquiry (pdf), Journal for the Theory of Social Behaviour 36:3 0021–8308, 2006.

See also:

The Language of Thought Hypothesis, Stanford Encyclopedia of Philosophy
Private language argument, Wiki
Private Language, Stanford Encyclopedia of Philosophy
☞ Jerry A. Fodor, Why there still has to be a language of thought?
Robert Kowalski, British logician and computer scientist, Do thoughts have a language of their own?, New Scientist, 8 Dec 2011
☞ Jerry A. Fodor, The Language of Thought, Harvard University Press, 1975
☞ Ned Block, The Mind as the Software of the Brain, New York University 
Antony, Louise M, What are you thinking? Character and content in the language of thought (pdf)
Ansgar Beckermann, Can there be a language of thought? (pdf) In G. White, B. Smith & R. Casati (eds.), Philosophy and the Cognitive Sciences. Proceedings of the 16th International Wittgenstein Symposium. Hölder-Pichler-Tempsky.
Edouard Machery, You don’t know how you think: Introspection and language of thought, British Journal for the Philosophy of Science 56 (3): 469-485, (2005)
☞ Christopher Bartel, Musical Thought and Compositionality (pdf), King’s College London
Psycholinguistics/Language and Thought, Wikiversity
MindPapers: The Language of Thought - A Bibliography of the Philosophy of Mind and the Science of Consciousness, links Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University

Sue Savage-Rumbaugh on Human Language—Human Consciousness. A personal narrative arises through the vehicle of language, Lapidarium notes
The time machine in our mind. The imagistic mental machinery that allows us to travel through time, Lapidarium notes

Nov
24th
Thu
permalink

Are You Totally Improbable Or Totally Inevitable?

"If we have never been amazed by the very fact that we exist, we are squandering the greatest fact of all."

Will Durant, American writer, historian, and philosopher (1885-1981)

"Not only have you been lucky enough to be attached since time immemorial to a favored evolutionary line, but you have also been extremely — make that miraculously — fortunate in your personal ancestry. Consider the fact that for 3.8 billion years, a period of time older than the Earth’s mountains and rivers and oceans, every one of your forebears on both sides has been attractive enough to find a mate, healthy enough to reproduce, and sufficiently blessed by fate and circumstances to live long enough to do so. Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stuck fast, untimely wounded or otherwise deflected from its life’s quest of delivering a tiny charge of genetic material to the right partner at the right moment to perpetuate the only possible sequence of hereditary combinations that could result — eventually, astoundingly, and all too briefly — in you. (…)

The number of people on whose cooperative efforts your eventual existence depends has risen to approximately 1,000,000,000,000,000,000, which is several thousand times the total number of people who have ever lived. (…)

We are awfully lucky to be here, and by ‘we’ I mean every living thing. To attain any kind of life in this universe of ours appears to be quite an achievement. As humans we are doubly lucky, of course: We enjoy not only the privilege of existence but also the singular ability to appreciate it and even, in a multitude of ways, to make it better. It is a talent we have only barely begun to grasp.”

Bill Bryson, A Short History of Nearly Everything, Black Swan, 2003

“Statistically, the probability of any one of us being here is so small that you’d think the mere fact of existing would keep us all in a contented dazzlement of surprise.”

Lewis Thomas, The Lives of a Cell, Bantam Books, 1984, p. 165.

“Life is one huge lottery where only the winning tickets are visible.”

Jostein Gaarder, The Orange Girl, Orion Publishing, 2004

“’We are the lucky ones for we shall die’, as there is an infinite number of possible forms of DNA, all but a few billion of which will never burst into consciousness.”

Frank Close, a noted particle physicist who is currently Professor of Physics at the University of Oxford, The Void, Oxford University Press

"What are the odds that you exist, as you, today? Author Dr Ali Binazir attemps to quantify the probability that you came about and exist as you today, and reveals that the odds of you existing are almost zero.

Think about yourself.
You are here because…
Your dad met your mom.
Then your dad and mom conceived you.
So a particular egg in your mom
Joined a particular sperm from your dad
Which could only happen because not one of your direct ancestors, going all the way back to the beginning of life itself, died before passing on his or her genes…
So what are the chances of you happening?
Of you being here?

Author Ali Binazir did the calculations last spring and decided that the chances of anyone existing are one in 10^2,685,000. In other words (…) you are totally improbable.

— Robert Krulwich, Are You Totally Improbable Or Totally Inevitable?, NPR, Nov 21, 2011

"First, let’s talk about the probability of your parents meeting.  If they met one new person of the opposite sex every day from age 15 to 40, that would be about 10,000 people. Let’s confine the pool of possible people they could meet to 1/10 of the world’s population twenty years go (one tenth of 4 billion = 400 million) so it considers not just the population of the US but that of the places they could have visited. Half of those people, or 200 million, will be of the opposite sex.  So let’s say the probability of your parents meeting, ever, is 10,000 divided by 200 million:

104/2×108= 2×10-4, or one in 20,000.

Probability of boy meeting girl: 1 in 20,000.

So far, so unlikely.

Now let’s say the chances of them actually talking to one another is one in 10.  And the chances of that turning into another meeting is about one in 10 also.  And the chances of that turning into a long-term relationship is also one in 10.  And the chances of that lasting long enough to result in offspring is one in 2.  So the probability of your parents’ chance meeting resulting in kids is about 1 in 2000.

Probability of same boy knocking up same girl: 1 in 2000.
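
A quick way to keep track of these running estimates is to script them. Here is a minimal sketch in Python, where every input is one of Binazir’s assumed round figures rather than a measured quantity:

```python
# Binazir's first two estimates; every input is his assumed round number.

people_met = 10_000               # new people of the opposite sex met from age 15 to 40
opposite_sex_pool = 200_000_000   # half of 1/10 of the world population ~20 years ago

p_meet = people_met / opposite_sex_pool   # 1 in 20,000
p_kids = 0.1 * 0.1 * 0.1 * 0.5            # talk, meet again, relationship, offspring

print(f"meeting:   1 in {1 / p_meet:,.0f}")             # -> 1 in 20,000
print(f"offspring: 1 in {1 / p_kids:,.0f}")             # -> 1 in 2,000
print(f"combined:  1 in {1 / (p_meet * p_kids):,.0f}")  # -> 1 in 40,000,000
```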

So the combined probability is already around 1 in 40 million — long but not insurmountable odds.  Now things start getting interesting.  Why?  Because we’re about to deal with eggs and sperm, which come in large numbers.

Each sperm and each egg is genetically unique because of the process of meiosis; you are the result of the fusion of one particular egg with one particular sperm.  A fertile woman has 100,000 viable eggs on average.  A man will produce about 12 trillion sperm over the course of his reproductive lifetime.  Let’s say a third of those (4 trillion) are relevant to our calculation, since the sperm created after your mom hits menopause don’t count.  So the probability of that one sperm with half your name on it hitting that one egg with the other half of your name on it is

1/[(100,000)(4 trillion)] = 1/[(10^5)(4×10^12)] = 1 in 4×10^17, or one in 400 quadrillion.

Probability of right sperm meeting right egg: 1 in 400 quadrillion.
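
The same step in code, again taking the article’s gamete counts as given:

```python
eggs = 100_000        # viable eggs in a fertile woman (the article's assumed average)
sperm = 4 * 10**12    # the ~4 trillion "relevant" sperm of a ~12 trillion lifetime total

print(f"right sperm meets right egg: 1 in {eggs * sperm:.0e}")   # -> 1 in 4e+17
```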

But we’re just getting started.

Because the existence of you here now on planet earth presupposes another supremely unlikely and utterly undeniable chain of events.  Namely, that every one of your ancestors lived to reproductive age – going all the way back not just to the first Homo sapiens, first Homo erectus and Homo habilis, but all the way back to the first single-celled organism.  You are a representative of an unbroken lineage of life going back 4 billion years.

Let’s not get carried away here; we’ll just deal with the human lineage.  Say humans or humanoids have been around for about 3 million years, and that a generation is about 20 years.  That’s 150,000 generations.  Say that over the course of all human existence, the likelihood of any one human offspring to survive childhood and live to reproductive age and have at least one kid is 50:50 – 1 in 2. Then what would be the chance of your particular lineage to have remained unbroken for 150,000 generations?

Well then, that would be one in 2^150,000, which is about 1 in 10^45,000 – a number so staggeringly large that my head hurts just writing it down. That number is not just larger than all of the particles in the universe – it’s larger than all the particles in the universe if each particle were itself a universe.

Probability of every one of your ancestors reproducing successfully: 1 in 10^45,000
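
The rounding from a power of 2 to a power of 10 is just a change of logarithm base, which one line can confirm:

```python
import math

# 2**150_000 equals 10**(150_000 * log10(2)); Binazir rounds the exponent down to 45,000.
print(150_000 * math.log10(2))   # -> 45154.49...
```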

But let’s think about this some more.  Remember the sperm-meeting-egg argument for the creation of you, since each gamete is unique?  Well, the right sperm also had to meet the right egg to create your grandparents.  Otherwise they’d be different people, and so would their children, who would then have had children who were similar to you but not quite you.  This is also true of your grandparents’ parents, and their grandparents, and so on till the beginning of time.  If even once the wrong sperm met the wrong egg, you would not be sitting here noodling online reading fascinating articles like this one.  It would be your cousin Jethro, and you never really liked him anyway.

That means in every step of your lineage, the probability of the right sperm meeting the right egg such that the exact right ancestor would be created that would end up creating you is again one in 4×10^17, or 400 quadrillion.

So now we must account for that for 150,000 generations by raising 400 quadrillion to the 150,000th power:

[4×10^17]^150,000 ≈ 10^2,640,000

That’s a one followed by 2,640,000 zeroes, which would fill 11 volumes of a book the size of The Tao of Dating with zeroes.

To get the final answer, technically we need to multiply that by the 10^45,000, 2,000 and 20,000 up there, but those numbers are so shrimpy in comparison that it almost doesn’t matter. For the sake of completeness:

(10^2,640,000)(10^45,000)(2,000)(20,000) = 4×10^2,685,007 ≈ 10^2,685,000

Probability of your existing at all: 1 in 10^2,685,000
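
Factors like 10^2,640,000 overflow ordinary floating point, so anyone reproducing the final multiplication should add base-10 exponents rather than multiply the numbers themselves. A sketch under the same assumptions:

```python
import math

# The factors are far too large for floats, so add base-10 exponents instead.
log10_odds = (
    150_000 * math.log10(4e17)   # right sperm meeting right egg, every generation
    + 150_000 * math.log10(2)    # every ancestor surviving to reproduce
    + math.log10(2_000)          # the parents' chance meeting leading to kids
    + math.log10(20_000)         # the parents meeting at all
)
print(f"1 in 10^{log10_odds:,.0f}")   # -> 1 in 10^2,685,471, i.e. Binazir's ~10^2,685,000
```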

As a comparison, the number of atoms in the body of an average male (80 kg, 175 lb) is about 10^27. The number of atoms making up the earth is about 10^50. The number of atoms in the known universe is estimated at 10^80.

So what’s the probability of your existing?  It’s the probability of 2 million people getting together – about the population of San Diego – each to play a game of dice with trillion-sided dice. They each roll the dice, and they all come up the exact same number – say, 550,343,279,001.”



— Ali Binazir, What are the chances of your coming into being?, June 15, 2011

A lovely comment by PZ Myers, a biologist and associate professor at the University of Minnesota:

"You are a contingent product of many chance events, but so what? So is everything else in the universe. That number doesn’t make you any more special than a grain of sand on a beach, which also arrived at its precise shape, composition, and location by a series of chance events. (…)

You are one of 7 billion people, occupying an insignificant fraction of the volume of the universe, and you aren’t a numerical miracle at all — you’re actually rather negligible.”

— PZ Myers, A very silly calculation, Pharyngula, Nov 14, 2011

'Life is one huge lottery where only the winning tickets are visible'

   “Thirteen forty-nine,” was the first thing [he] said.
   “The Black Death,” I replied. I had a pretty good knowledge of history, but I had no idea what the Black Death had to do with coincidences.
   “Okay,” he said, and off he went. “You probably know that half Norway’s population was wiped out during that great plague. But there’s a connection here I haven’t told you about. Did you know that you had thousands of ancestors at that time?” he continued.
   I shook my head in despair. How could that possibly be?
   “You have two parents, four grandparents, eight great-grandparents, sixteen great-great-grandparents — and so on. If you work it out, right back to 1349 — there are quite a lot.
  “Then came the bubonic plague. Death spread from neighborhood to neighborhood, and the children were hit worst. Whole families died, sometimes one or two family members survived. A lot of your ancestors were children at this time, Hans Thomas. But none of them kicked the bucket.”
   “How can you be so sure about that?” I asked in amazement.
   He took a long drag on his cigarette and said, “Because you’re sitting here looking out over the Adriatic.

  “The chances of a single ancestor of yours not dying while growing up are one in several billion. Because it isn’t just about the Black Death, you know. Actually all of your ancestors have grown up and had children — even during the worst natural disasters, even when the child mortality rate was enormous. Of course, a lot of them have suffered from illness, but they’ve always pulled through. In a way, you have been a millimeter from death billions of times, Hans Thomas.

Your life on this planet has been threatened by insects, wild animals, meteorites, lightning, sickness, war, floods, fires, poisoning, and attempted murders. In the Battle of Stiklestad alone you were injured hundreds of times. Because you must have had ancestors on both sides — yes, really you were fighting against yourself and your chances of being born a thousand years later. You know, the same goes for the last world war. If Grandpa had been shot by good Norwegians during the occupation, then neither you nor I would have been born. The point is, this happened billions of times through history. Each time an arrow rained through the air, your chances of being born have been reduced to the minimum.”

   He continued: “I am talking about one long chain of coincidences. In fact, that chain goes right back to the first living cell, which divided in two, and from there gave birth to everything growing and sprouting on this planet today. The chance of my chain not being broken at one time or another during three or four billion years is so little it is almost inconceivable. But I have pulled through, you know. Damned right, I have. In return, I appreciate how fantastically lucky I am to be able to experience this planet together with you. I realize how lucky every single little crawling insect on this planet is.”

   "What about the unlucky ones?" I asked.
   ”They don’t exist! They were never born. Life is one huge lottery where only the winning tickets are visible.”

Jostein Gaarder, The Orange Girl, Orion Publishing, 2004.
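
The grandfather’s doubling argument in the passage above is easy to make concrete. A minimal sketch, assuming a nominal 25-year generation and ignoring pedigree collapse (real family trees share many ancestors, so the true count is smaller):

```python
# Nominal ancestor slots back to the Black Death of 1349, assuming a 25-year
# generation and ignoring pedigree collapse (real trees share many ancestors).
generations = (2011 - 1349) // 25   # about 26 generations
print(2 ** generations)             # -> 67108864, roughly 67 million slots
```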

(Illustration source)

See also:

☞ Richard Dawkins, Unweaving the Rainbow, Lapidarium notes

Nov
17th
Thu
permalink

Why Man Creates by Saul Bass (1968)

"Whaddaya doin?” ‘I’m painting the ceiling! Whadda you doin?” “I’m painting the floor!” — the exchange between Michaelangelo and da Vinci

Why Man Creates is a 1968 animated short documentary film which discusses the nature of creativity. It was written by Saul Bass and Mayo Simon, and directed by Saul and Elaine Bass.

The movie won the Academy Award for Documentary Short Subject. An abbreviated version of it ran on the first-ever broadcast of CBS’ 60 Minutes, on September 24, 1968.

Why Man Creates focuses on the creative process and the different approaches taken to that process. It is divided into eight sections: The Edifice, Fooling Around, The Process, Judgment, A Parable, Digression, The Search, and The Mark.

In 2002, this film was selected for preservation in the United States National Film Registry by the Library of Congress as being “culturally, historically, or aesthetically significant”.

Summary

The Edifice begins with early humans hunting. They attempt to conquer their prey with stones, but fail, so they begin to use spears and bait. They kill their prey, and it turns into a cave painting, upon which a building begins to be built. Throughout the rest of the section, the camera tracks upward as the edifice grows ever taller.

Early cavemen begin to discover various things such as the lever, the wheel, ladders, agriculture and fire. It then cuts to clips of early societies and civilizations. It depicts the appearance of the first religions and the advent of organized labor. It then cuts to the Great Pyramids at Giza and depicts the creation of writing.

Soon an army begins to move across the screen chanting “BRONZE,” but they are overrun by an army chanting “IRON”. The screen then depicts early cities and civilizations.

This is followed by a black screen with one man in traditional Greek clothing who states, “All was in chaos ‘til Euclid arose and made order.” Next, various Greek achievements in mathematics are depicted as they build Greek columns around which Greeks discuss items, including, “What is the good life and how do you lead it?” “Who shall rule the state?” “The Philosopher King.” “The Aristocrat.” “The People.” “You mean ALL the people?” “What is the nature of the Good? What is the nature of Justice?” “What is Happiness?”

The culture of ancient Greece fades into the armies of Rome. The organized armies surround the great Roman architecture as they chant “Hail Caesar!” A man at a podium states, “Roman Law is now in session!”, and when he bangs his gavel, the architecture collapses. Dark soldiers begin to pop up from the rubble and eventually cover the whole screen with darkness symbolizing the Dark Ages.

The Dark Ages consist of inaudible whisperings and mumblings. At one point, a light clicks on and an Arab mathematician says, “Allah be praised! I’ve discovered the zero.” at which point his colleague responds, “What?” and he says “Nothing, nothing.” Next come cloistered monks who sing, “What is the shape of the Earth? Flat. What happens when you get to the edge? You fall off. Does the earth move? Never.”

Finally the scene brightens and shows a stained glass window. Various scientists open stained glass doors and say things such as, “The Earth moves!” “The Earth is round!” “The blood circulates!” “There are worlds smaller than ours!” “There are worlds larger than ours!” Each time one opens a door, a large, hairy arm slams the door shut. Finally, the stained glass breaks in the wake of the new Enlightenment.

Next, Michelangelo and da Vinci are depicted. The steam engine is invented, and gears and belts begin to cover everything. The light bulb and steam locomotive are created. Darwin is alluded to as two men hit each other with their canes, arguing over whether man is an animal. The telegraph is invented and psychology is born. Next, a small creature hops across the screen saying, “I’m a bug, I’m a germ, I’m a bug, I’m a germ… [indrawn breath] Louis Pasteur! I’m not a bug, I’m not a germ…” Great musicians such as Beethoven are depicted. Alfred Nobel invents dynamite.

Next, the cartooning shows the great speeches and documents on government and society from the American Revolution onward with quotes such as “All men are created equal…”, “Life, liberty and the pursuit of happiness”, “And the Government, by the people,…”, etc. and ends with “One World.”

Finally, the building stops and the Wright Brothers' plane lands on top of it. It is quickly covered in more advanced planes, in cars, in televisions, and finally in early computers. At the top is a radioactive atom which envelops a man in smoke. The Edifice ends with that man yelling, “HELP!”

Fooling Around displays a random series of perspectives and the creative ideas which come from them.

The Process displays a man who is making artwork from a series of geometrical figures. Each time he attempts to keep them in place, they move and rearrange themselves. He tries many different approaches to the problem. Finally he accepts a working configuration and calls his wife to look at it. She says, “All it needs is an American flag.”

Judgment is a series of reactions, presumably to the creation from The Process. It displays their criticisms of it, such as “It represents the decline of Western culture…”, and only a very few support it.

A Parable begins at a ping-pong ball factory. Each ball is made in exactly the same way, and machines test them to get rid of anomalies. As the balls are being tested for their bounce levels, one bounces much higher than the rest. It is placed in a chute which leads to a garbage can outside the factory. It proceeds to bounce across town to a park, where it goes on bouncing. Quickly, a cluster of ping-pong balls gathers around it. It keeps bouncing higher and higher, until it doesn’t come back. The section concludes with the comment:
“There are some who say he’s coming back and we have only to wait …
There are some who say he burst up there because ball was not meant to fly …
And there are some who maintain he landed safely in a place where balls bounce high …”

Digression is a very short section in which one snail says to another, “Have you ever thought that radical ideas threaten institutions, then become institutions, and in turn reject radical ideas which threaten institutions?” to which the other snail replies “No.” and the first says dejectedly, “Gee, for a minute I thought I had something.”

The Search shows scientists who have been working for years on projects such as solving world hunger, developing a cure for cancer, or questioning the origin of the universe. It then shows a scientist who had worked on a project for 20 years, only for it not to work out; asked what he would do with himself, he replied that he didn’t know. (Note: each of the scientists shown was working on something which still has not been solved to date, even though each one expected solid results in only a few years. This bears out the point of this section far better than the creators could have known in 1968.)

The Mark asks the question, Why does man create? and determines that man creates to simply state, “I Am.” The film ends by displaying “I Am” written in paint on the side of a building.” — (Wiki)