Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization



Archive

Nov 1st, Fri

Bill Gates: ‘If you think connectivity is the key thing, that’s great. I don’t. The world is not flat and PCs are not, in the hierarchy of human needs, in the first five rungs’


The internet is not going to save the world, whatever Mark Zuckerberg and Silicon Valley’s tech billionaires believe. (…) But eradicating disease just might.

Bill Gates describes himself as a technocrat. But he does not believe that technology will save the world. Or, to be more precise, he does not believe it can solve a tangle of entrenched and interrelated problems that afflict humanity’s most vulnerable: the spread of diseases in the developing world and the poverty, lack of opportunity and despair they engender. “I certainly love the IT thing,” he says. “But when we want to improve lives, you’ve got to deal with more basic things like child survival, child nutrition.”

These days, it seems that every West Coast billionaire has a vision for how technology can make the world a better place. A central part of this new consensus is that the internet is an inevitable force for social and economic improvement; that connectivity is a social good in itself. It was a view that recently led Mark Zuckerberg to outline a plan for getting the world’s unconnected 5 billion people online, an effort the Facebook boss called “one of the greatest challenges of our generation”. But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.” (…)

Gates says now. “The world is not flat and PCs are not, in the hierarchy of human needs, in the first five rungs.” (…)

To Diamandis’s argument that there is more good to be done in the world by building new industries than by giving away money, meanwhile, he has a brisk retort: “Industries are only valuable to the degree they meet human needs. There’s not some – at least in my psyche – this notion of, oh, we need new industries. We need children not to die, we need people to have an opportunity to get a good education.” (…)

Gates describes himself as a natural optimist. But he admits that the fight with the US government seriously challenged his belief that the best outcome would always prevail. With a typically generalising sweep across history, he declares that governments have “worked pretty well on balance in playing their role to improve the human condition” and that in the US since 1776, “the government’s played an absolutely central role and something wonderful has happened”. But that doesn’t settle his unease.

“The closer you get to it and see how the sausage is made, the more you go, oh my God! These guys don’t even actually know the budget. It makes you think: can complex, technocratically deep things – like running a healthcare system properly in the US in terms of impact and cost – can that get done? It hangs in the balance.”

It isn’t just governments that may be unequal to the task. On this analysis, the democratic process in most countries is also straining to cope with the problems thrown up by the modern world, placing responsibilities on voters that they can hardly be expected to fulfil. “The idea that all these people are going to vote and have an opinion about subjects that are increasingly complex – where what seems, you might think … the easy answer [is] not the real answer. It’s a very interesting problem. Do democracies faced with these current problems do these things well?”

An exclusive interview with Bill Gates, The Financial Times, Nov 1, 2013

Oct 29th, Tue

Beauty of Mathematics

"Mathematics, rightly viewed, possesses not only truth, but supreme beauty — a beauty cold and austere, without the gorgeous trappings of painting or music."

Bertrand Russell, British philosopher, logician, mathematician, historian, and social critic (1872-1970)

By Yann Pineill & Nicolas Lefaucheux, parachutes.tv

Oct 28th, Mon

Douglas Hofstadter: The Man Who Would Teach Machines to Think

"All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.”

Douglas Hofstadter, 1979, cited in Vinod K. Wadhawan, Complexity Explained: 17. Epilogue, Nirmukta, 04 April 2010.

M. C. Escher, Print Gallery. Hofstadter calls this Escher work a “pictorial parable for Gödel’s Incompleteness Theorem.” Why? Look to the center of the painting: is there any logical way to complete it? (source)

"On [Douglas] Hofstadter's office wall is a somewhat tattered reproduction of one of his favorite mind-twisting M. C. Escher prints, The Print Gallery.” In it, a boy stands looking at a print depicting a town where a woman looks down from her window at the roof of a gallery in which - yes - the boy stands looking at the print. We appreciate the paradox without being thrown by it, because we are outside looking in. Something like that creates our own unfathomable feelings of self. The self, subject and object, perceiver and perceived, is everywhere in the paradox.

It is a ”circling back,” the Tortoise tells Achilles, ”of a complex representation of the system together with its representations of all the rest of the world.”

”It is just so hard, emotionally,” Achilles tells the Tortoise, ”to acknowledge that a ‘soul’ emerges from so physical a system.” (…)

But philosophers like  [Daniel] Dennett believe, with Hofstadter, that scientific answers can be found without cheating by reducing the question to a manageable scale. (…) [T]he danger of looking only at the lowest biological level is in losing sight of the essential humanity that, in Hofstadter’s view, exists in the pattern and in the paradox. ”There seems to be no alternative to accepting some sort of incomprehensible quality to existence,” as Hofstadter puts it. ”Take your pick.” 

James Gleick, on Douglas R. Hofstadter in Exploring the Labyrinth of the Mind, The New York Times, August 21, 1983.

"In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” (…)

“Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes” and a young man’s style as “hipsterish” and on and on ceaselessly throughout your day. That’s what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build “computer models of the fundamental mechanisms of thought.”

“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.

“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.” (…)

[Hofstadter] spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e‑mails there. (Hofstadter spends four hours a day writing e‑mail. “To me,” he has said, “an e‑mail is identical to a letter, every bit as formal, as refined, as carefully written … I rewrite, rewrite, rewrite, rewrite all of my e‑mails, always.”) He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It’s like a museum for his binges, a scene out of a brainy episode of Hoarders.

“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.” (…)

He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study.

For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.” Correct speech isn’t very interesting; it’s like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is “a tip of the rabbit’s ear … a hint of a trap door.”

As the wind tunnel was to the Wright brothers, so the computer is to FARG. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter’s view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. “I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds,” Hofstadter has written, “is by modeling mental processes on computers and learning from the models’ inevitable failures.” (…)

But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.

The modern era of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.

It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”

Machine learning

The “expert systems” that had once been the field’s meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there’s a rule broken.

If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.

The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain’s complexity. So what you do instead is start with a machine so simple, it almost doesn’t work: a machine, say, that randomly spits out French words for the English words it’s given.

Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that shot comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French?

It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you’ve entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn’t quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you’ll get a feel for the knobs, and you’ll be able to produce the correct French equivalent of your English sentence.

By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that. (…)
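[A rough illustration, not from Somers’s article: the “feed, try, jiggle” loop described above is the heart of statistical translation models of the Candide era. The sketch below is a toy, word-for-word estimator in the spirit of IBM Model 1; the three sentence pairs and every name in it are invented, and real systems handle reordering, phrases, and millions of sentence pairs.]

    from collections import defaultdict

    # Toy parallel corpus: English/French sentence pairs whose translations we already trust.
    pairs = [
        ("the cat sleeps".split(), "le chat dort".split()),
        ("the dog sleeps".split(), "le chien dort".split()),
        ("the cat eats".split(),   "le chat mange".split()),
    ]

    french_vocab = {f for _, fr in pairs for f in fr}

    # The "knobs": t[(f, e)] is the probability that English word e produces French word f.
    # Start every knob at the same uniform setting.
    t = defaultdict(lambda: 1.0 / len(french_vocab))

    # Feed, try, jiggle, repeat: each pass shares the credit for every French word among
    # the English words in its sentence, then renormalizes the knobs.
    for _ in range(20):
        count = defaultdict(float)   # fractional co-occurrence credit
        total = defaultdict(float)
        for en, fr in pairs:
            for f in fr:
                norm = sum(t[(f, e)] for e in en)
                for e in en:
                    credit = t[(f, e)] / norm
                    count[(f, e)] += credit
                    total[e] += credit
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]

    def best_french(e):
        """Most probable French word for one English word under the learned knobs."""
        return max(french_vocab, key=lambda f: t[(f, e)])

    # A sentence the corpus never contained, translated word for word.
    print([best_french(e) for e in "the dog eats".split()])
    # expected: ['le', 'chien', 'mange']

Even this toy shows the point of the passage: nothing in the code knows any French grammar; the knobs settle into sensible values only because they were repeatedly jiggled against known sentence pairs.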

Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a “knowledge graph” that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work. (…)

Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called “Why Is J. D. Salinger’s The Catcher in the Rye a Great Novel?” He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn’t like Holden—they thought he was a whiner—Hofstadter explained that “they may not recognize his vulnerability.” You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. “I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.” (…)

“Ars longa, vita brevis,” Hofstadter likes to say. “I just figure that life is short. I work, I don’t try to publicize. I don’t try to fight.”

There’s an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. “Not a soul,” Hofstadter says. “Einstein was completely alone in his belief in the existence of light as particles—for 18 years.

“That must have been very lonely.”

— James Somers, The Man Who Would Teach Machines to Think, The Atlantic, Oct 23, 2013

Douglas Hofstadter is an American professor of cognitive science whose research focuses on the sense of “I”, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979, which won both the Pulitzer Prize for general non-fiction and the National Book Award for Science.

See also:

The Mathematical Art of M.C. Escher, Lapidarium notes

Oct 23rd, Wed

Van Gogh’s Shadow by Luca Agnani (Paintings In Motion)

"Luca Agnani, an Italian designer and animator, has taken the classic works of Vincent Van Gogh, and brought them to life. He’s created a short film called Van Gogh’s Shadow which shows over a dozen of Van Gogh’s paintings suddenly filled with life and movement, perhaps giving us an insight into how the artist may have seen the world he lived in.” source

"To calculate the exact shadows, I tried to understand the position of the sun relative to Arles at different times of the day and, according to my calculations, even the river [in The Langlois Bridge at Arles] should flow in that direction," Agnani told The Creators Project over email. "If the video was projected over his paintings, my interpretations would superimpose perfectly, like a mapping of a framework. (…)

Agnani has become somewhat of a phenomenon over the past few years. Since 2011, the Italian artist’s visual mapping and design projections have transformed the faces of some of Europe’s most celebrated religious structures, including the Sanctuary of San Michele (a piece commissioned by UNESCO) and the Catania Cathedral in Sicily. When French musician Yann Tiersen played Ancona, Italy, he asked Agnani to design a projection for his opening night concert at the Mole Vanvitelliana: an artificial port-island that houses a 19th century Leprosorium-turned-art gallery.” The Creators Project, Aug 8, 2013

Painting

1. Fishing Boats on the Beach
2. The Langlois Bridge at Arles
3. Farmhouse in Provence
4. The White House at Night
5. Still Life
6. Evening: The Watch (after Millet)
7. View of Saintes-Maries
8. Bedroom
9. Factories at Asnières Seen from the Quai de Clichy
10. The White House at Night
11. Restaurant
12. First Steps (after Millet)
13. Self-Portrait

Music: Experience - Ludovico Einaudi

Luca Agnani, Van Gogh Shadow, 2013

Oct 20th, Sun

Alphabet Evolution

(animated gif; source)

Sep 29th, Sun

Kevin Kelly: The Improbable is the New Normal

"The improbable consists of more than just accidents. The internets are also brimming with improbable feats of performance — someone who can run up a side of a building, or slide down suburban roof tops, or stack up cups faster than you can blink. Not just humans, but pets open doors, ride scooters, and paint pictures. The improbable also includes extraordinary levels of super human achievements: people doing astonishing memory tasks, or imitating all the accents of the world. In these extreme feats we see the super in humans.

Every minute a new impossible thing is uploaded to the internet and that improbable event becomes just one of hundreds of extraordinary events that we’ll see or hear about today. The internet is like a lens which focuses the extraordinary into a beam, and that beam has become our illumination. It compresses the unlikely into a small viewable band of everyday-ness. As long as we are online - which is almost all day many days — we are illuminated by this compressed extraordinariness. It is the new normal.

That light of super-ness changes us. We no longer want mere presentations, we want the best, greatest, the most extraordinary presenters alive, as in TED. We don’t want to watch people playing games, we want to watch the highlights of the highlights, the most amazing moves, catches, runs, shots, and kicks, each one more remarkable and improbable than the other.

We are also exposed to the greatest range of human experience, the heaviest person, shortest midgets, longest mustache — the entire universe of superlatives! Superlatives were once rare — by definition — but now we see multiple videos of superlatives all day long, and they seem normal. Humans have always treasured drawings and photos of the weird extremes of humanity (early National Geographics), but there is an intimacy about watching these extremities on video on our phones while we wait at the dentist. They are now much realer, and they fill our heads.

I see no end to this dynamic. Cameras are becoming ubiquitous, so as our collective recorded life expands, we’ll accumulate thousands of videos showing people being struck by lightning. When we all wear tiny cameras all the time, then the most improbable accident, the most superlative achievement, the most extreme actions of anyone alive will be recorded and shared around the world in real time. Soon only the most extraordinary moments of our 6 billion citizens will fill our streams. So henceforth rather than be surrounded by ordinariness we’ll float in extraordinariness. (…)

When the improbable dominates the archive to the point that it seems as if the library contains ONLY the impossible, then these improbabilities don’t feel as improbable. (…)

To the uninformed, the increased prevalence of improbable events will make it easier to believe in impossible things. A steady diet of coincidences makes it easy to believe they are more than just coincidences, right? But to the informed, a slew of improbable events make it clear that the unlikely sequence, the outlier, the black swan event, must be part of the story. After all, in 100 flips of the penny you are just as likely to get 100 heads in a row as any other sequence. But in both cases, when improbable events dominate our view — when we see an internet river streaming nothing but 100 heads in a row — it makes the improbable more intimate, nearer.
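[An editorial aside on the penny remark, not Kelly’s: any one specific sequence of 100 fair flips is equally improbable; what differs is how many sequences look alike.]

    P(\text{one particular sequence}) = \left(\tfrac{1}{2}\right)^{100} \approx 7.9 \times 10^{-31},
    \qquad
    P(\text{exactly 50 heads, any order}) = \binom{100}{50} \Big/ 2^{100} \approx 0.08

There is exactly one all-heads sequence but about 10^29 sequences with 50 heads, which is why a run of 100 heads feels impossible even though, as a specific string, it is no less likely than the one you actually get.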

I am unsure of what this intimacy with the improbable does to us. What happens if we spend all day exposed to the extremes of life, to a steady stream of the most improbable events, and try to run ordinary lives in a background hum of superlatives? What happens when the extraordinary becomes ordinary?

The good news may be that it cultivates in us an expanded sense of what is possible for humans, and for human life, and so expand us. The bad news may be that this insatiable appetite for super-superlatives leads to dissatisfaction with anything ordinary.”

Kevin Kelly is the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, The Improbable is the New Normal, The Technium, 7 Jan, 2013. (Photo source)

Jul 1st, Mon

Why It’s Good To Be Wrong. David Deutsch on Fallibilism


"That human beings can be mistaken in anything they think or do is a proposition known as fallibilism. (…)

The trouble is that error is a subject where issues such as logical paradox, self-reference, and the inherent limits of reason rear their ugly heads in practical situations, and bite.

Paradoxes seem to appear when one considers the implications of one’s own fallibility: A fallibilist cannot claim to be infallible even about fallibilism itself. And so, one is forced to doubt that fallibilism is universally true. Which is the same as wondering whether one might be somehow infallible—at least about some things. For instance, can it be true that absolutely anything that you think is true, no matter how certain you are, might be false?

What? How might we be mistaken that two plus two is four? Or about other matters of pure logic? That stubbing one’s toe hurts? That there is a force of gravity pulling us to earth? Or that, as the philosopher René Descartes argued, “I think, therefore I am”?

When fallibilism starts to seem paradoxical, the mistakes begin. We are inclined to seek foundations—solid ground in the vast quicksand of human opinion—on which one can try to base everything else. Throughout the ages, the false authority of experience and the false reassurance of probability have been mistaken for such foundations: “No, we’re not always right,” your parents tell you, “just usually.” They have been on earth longer and think they have seen this situation before. But since that is an argument for “therefore you should always do as we say,” it is functionally a claim of infallibility after all. Moreover, look more closely: It claims literal infallibility too. Can anyone be infallibly right about the probability that they are right? (…)

The fact is, there’s nothing infallible about “direct experience” (…). Indeed, experience is never direct. It is a sort of virtual reality, created by our brains using sketchy and flawed sensory clues, given substance only by fallible expectations, explanations, and interpretations. Those can easily be more mistaken than the testimony of the passing hobo. If you doubt this, look at the work of psychologists Christopher Chabris and Daniel Simons, and verify by direct experience the fallibility of your own direct experience. Furthermore, the idea that your reminiscences are infallible is also heresy by the very doctrine that you are faithful to.

I’ll tell you what really happened. You witnessed a dress rehearsal. The real ex cathedra ceremony was on the following day. In order not to make the declaration a day early, they substituted for the real text (which was about some arcane theological issue, not gravity) a lorem-ipsum-type placeholder that they deemed so absurd that any serious listener would immediately realize that that’s what it was. 

And indeed, you did realize this; and as a result, you reinterpreted your “direct experience,” which was identical to that of witnessing an ex cathedra declaration, as not being one. Precisely by reasoning that the content of the declaration was absurd, you concluded that you didn’t have to believe it. Which is also what you would have done if you hadn’t believed the infallibility doctrine.

You remain a believer, serious about giving your faith absolute priority over your own “unaided” reason (as reason is called in these contexts). But that very seriousness has forced you to decide first on the substance of the issue, using reason, and only then whether to defer to the infallible authority. This is neither fluke nor paradox. It is simply that if you take ideas seriously, there is no escape, even in dogma and faith, from the obligation to use reason and to give it priority over dogma, faith, and obedience. (…)

It is hard to contain reason within bounds. If you take your faith sufficiently seriously you may realize that it is not only the printers who are fallible in stating the rules for ex cathedra, but also the committee that wrote down those rules. And then that nothing can infallibly tell you what is infallible, nor what is probable. It is precisely because you, being fallible and having no infallible access to the infallible authority, no infallible way of interpreting what the authority means, and no infallible means of identifying an infallible authority in the first place, that infallibility cannot help you before reason has had its say. 

A related useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith, in fact believe falsehoods. Hence, faith is insufficient for true belief. As the Nobel-Prize-winning biologist Peter Medawar said: “the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

You know that Medawar’s advice holds for all ideas, not just scientific ones, and, by the same argument, to all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love. (…)

This logic of fallibility, discovered and rediscovered from time to time, has had profound salutary effects in the history of ideas. Whenever anything demands blind obedience, its ideology contains a claim of infallibility somewhere; but wherever someone believes seriously enough in that infallibility, they rediscover the need for reason to identify and correctly interpret the infallible source. Thus the sages of ancient Judaism were led, by the assumption of the Bible’s infallibility, to develop their tradition of critical discussion. And in an apparently remote application of the same logic, the British constitutional doctrine of “parliamentary sovereignty” was used by 20th-century judges such as Lord Denning to develop an institution of judicial review similar to that which, in the United States, had grown out of the opposite doctrine of “separation of powers.”

Fallibilism has practical consequences for the methodology and administration of science, and in government, law, education, and every aspect of public life. The philosopher Karl Popper elaborated on many of these. He wrote:

The question about the sources of our knowledge … has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place. (…)

Popper’s answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is another apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas, by eliminating errors from existing ones.

“Our whole problem,” said the physicist John Wheeler, “is to make the mistakes as fast as possible.” This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should “let our theories die in our place,” as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike “botched;” courts should not call every medical tragedy malpractice, even if it’s true that they “shouldn’t have happened” in the sense that lessons can be learned to prevent them from happening again. “We are all alike,” as Popper remarked, “in our infinite ignorance.” And this is a good and hopeful thing, for it allows for a future of unbounded improvement.

Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond “oh dear, I’ll just have to die, then.” (…)

The theory of knowledge is a tightrope that is the only path from A to B, with a long, hard drop for anyone who steps off on one side into “knowledge is impossible, progress is an illusion” or on the other side into “I must be right, or at least probably right.” Indeed, infallibilism and nihilism are twins. Both fail to understand that mistakes are not only inevitable, they are correctable (fallibly). Which is why they both abhor institutions of substantive criticism and error correction, and denigrate rational thought as useless or fraudulent. They both justify the same tyrannies. They both justify each other.

I must now apologize for trying to trick you earlier: All the ideas that I suggested we might know infallibly are in fact falsehoods. “Two plus two” of course isn’t “four” as you’d discover if you wrote “2+2” in an arithmetic test when asked to add two and two. If we were infallible about matters of pure logic, no one would ever fail a logic test either. Stubbing your toe does not always hurt if you are focused on some overriding priority like rescuing a comrade in battle. And as for knowing that “I” exist because I think—note that your knowledge that you think is only a memory of what you did think, a second or so ago, and that can easily be a false memory. (For discussions of some fascinating experiments demonstrating this, see Daniel Dennett’s book Brainstorms.) Moreover, if you think you are Napoleon, the person you think must exist because you think, doesn’t exist.

And the general theory of relativity denies that gravity exerts a force on falling objects. The pope would actually be on firm ground if he were to concur with that ex cathedra. Now, are you going to defer to my authority as a physicist about that? Or decide that modern physics is a sham? Or are you going to decide according to whether that claim really has survived all rational attempts to refute it?”

David Deutsch, a British physicist at the University of Oxford. He is a non-stipendiary Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford, Why It’s Good To Be Wrong, Nautilus, 2013. (Illustration by Gérard DuBois)

See also:

David Deutsch: A new way to explain explanation, TED, 2009
David Deutsch on knowledge as crafted self-similarity
David Deutsch on Artificial Intelligence

Jun
18th
Tue
permalink

Do Geography and Altitude Shape the Sounds of a Language?

Languages that evolve at high elevations are more likely to include a sound that’s easier to make when the air is thinner, new research shows. (Photo: A. Skomorowska)

"[R]ecently, Caleb Everett, a linguist at the University of Miami, made a surprising discovery that suggests the assortment of sounds in human languages is not so random after all.

When Everett analyzed hundreds of different languages from around the world, as part of a study published today in PLOS ONE, he found that those that originally developed at higher elevations are significantly more likely to include ejective consonants. Moreover, he suggests an explanation that, at least intuitively, makes a lot of sense: The lower air pressure present at higher elevations enables speakers to make these ejective sounds with much less effort. (…)

The origin points of each of the languages studied, with black circles representing those with ejective sounds and empty circles those without. The inset plots by latitude and longitude the high-altitude inhabitable regions, where elevations exceed 1500 meters. (1) North American cordillera, (2) Andes, (3) Southern African plateau, (4) East African rift, (5) Caucasus and Javakheti plateau, (6) Tibetan plateau and adjacent regions. Image via PLOS ONE/Caleb Everett

Everett started out by pulling a geographically diverse sampling of 567 languages from the pool of an estimated 6,909 that are currently spoken worldwide. For each language, he used one location that most accurately represented its point of origin, according to the World Atlas of Linguistic Structures. English, for example, was plotted as originating in England, even though it’s spread widely in the years since. But for most of the languages, making this determination is much less difficult than for English, since they’re typically pretty restricted in terms of geographic scope (the average number of speakers of each language analyzed is just 7,000).

He then compared the traits of the 475 languages that do not contain ejective consonants with the 92 that do. The ejective languages were clustered in eight geographic groups that roughly corresponded with five regions of high elevation—the North American Cordillera (which include the Cascades and the Sierra Nevadas), the Andes and the Andean altiplano, the southern African plateau, the plateau of the east African rift and the Caucasus range.

When Everett broke things down statistically, he found that 87 percent of the languages with ejectives were located in or near high altitude regions (defined as places with elevations 1500 meters or greater), compared to just 43 percent of the languages without the sound. Of all languages located far from regions with high elevation, just 4 percent contained ejectives. And when he sliced the elevation criteria more finely—rather than just high altitude versus low altitude—he found that the odds of a given language containing ejectives kept increasing as the elevation of its origin point also increased:

(chart: likelihood of ejectives by elevation of origin)

Everett’s explanation for this phenomenon is fairly simple: Making ejective sounds requires effort, but slightly less effort when the air is thinner, as is the case at high altitudes. This is because the sound depends upon the speaker compressing a breath of air and releasing it in a sudden burst that accompanies the sound, and compressing air is easier when it’s less dense to begin with. As a result, over the thousands of years and countless random events that shape the evolution of a language, those that developed at high altitudes became gradually more and more likely to incorporate and retain ejectives. Noticeably absent, however, are ejectives in languages that originate close to the Tibetan and Iranian plateaus, a region known colloquially as the roof of the world.

The finding could prompt linguists to look for other geographically-driven trends in the languages spoken around the world. For instance, there might be sounds that are easier to make at lower elevations, or perhaps drier air could make certain sounds trip off the tongue more readily.”

Do Geography and Altitude Shape the Sounds of a Language?, Smithsonian, June 12, 2013.
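[A minimal sketch, not Everett’s actual analysis: the 87 percent vs. 43 percent comparison above could be reproduced from a table of languages, each with an origin elevation and an ejective flag. The rows and field names below are invented placeholders, and the simple at-or-above-1500 m test stands in for the study’s measure of proximity to high-elevation regions across 567 languages.]

    # Invented placeholder rows; the real study used 567 languages located via WALS.
    languages = [
        {"name": "Lang A", "elevation_m": 2100, "has_ejectives": True},
        {"name": "Lang B", "elevation_m": 300,  "has_ejectives": False},
        {"name": "Lang C", "elevation_m": 1800, "has_ejectives": True},
        {"name": "Lang D", "elevation_m": 50,   "has_ejectives": False},
        {"name": "Lang E", "elevation_m": 1600, "has_ejectives": False},
    ]

    HIGH_ALTITUDE_M = 1500  # elevation threshold used in the study

    def high_altitude_share(group):
        """Fraction of a group of languages whose origin lies at or above the threshold."""
        if not group:
            return 0.0
        high = sum(1 for lang in group if lang["elevation_m"] >= HIGH_ALTITUDE_M)
        return high / len(group)

    with_ejectives = [l for l in languages if l["has_ejectives"]]
    without_ejectives = [l for l in languages if not l["has_ejectives"]]

    print(f"high-altitude share, ejective languages:     {high_altitude_share(with_ejectives):.0%}")
    print(f"high-altitude share, non-ejective languages: {high_altitude_share(without_ejectives):.0%}")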

See also:

Ejectives, High Altitudes, and Grandiose Linguistic Hypotheses, June 17, 2013
☞ Caleb Everett, Evidence for Direct Geographic Influences on Linguistic Sounds: The Case of Ejectives, PLOS ONE 2013

Abstract:

"We present evidence that the geographic context in which a language is spoken may directly impact its phonological form. We examined the geographic coordinates and elevations of 567 language locations represented in a worldwide phonetic database. Languages with phonemic ejective consonants were found to occur closer to inhabitable regions of high elevation, when contrasted to languages without this class of sounds. In addition, the mean and median elevations of the locations of languages with ejectives were found to be comparatively high.

The patterns uncovered surface on all major world landmasses, and are not the result of the influence of particular language families. They reflect a significant and positive worldwide correlation between elevation and the likelihood that a language employs ejective phonemes. In addition to documenting this correlation in detail, we offer two plausible motivations for its existence.

We suggest that ejective sounds might be facilitated at higher elevations due to the associated decrease in ambient air pressure, which reduces the physiological effort required for the compression of air in the pharyngeal cavity–a unique articulatory component of ejective sounds. In addition, we hypothesize that ejective sounds may help to mitigate rates of water vapor loss through exhaled air. These explications demonstrate how a reduction of ambient air density could promote the usage of ejective phonemes in a given language. Our results reveal the direct influence of a geographic factor on the basic sound inventories of human languages.”

Evolution of Language tested with genetic analysis
Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man
Mark Changizi on How To Put Art And Brain Together

Apr 28th, Sun

Science and a New Kind of Prediction: An Interview with Stephen Wolfram


"I think Computation is destined to be the defining idea of our future."

—  Stephen Wolfram in Computing a Theory of Everything

"Better living through data? When a pioneer of data collection and organization turned his analytical tools on himself, he revealed the complexity of automating human judgment and the difficulty of predicting just what is predictable.

So, what can one of the world’s foremost mathematical minds learn about life by examining his own computational data? (…)

Q: In your blog, you write that “in time I’m looking forward to being able to ask Wolfram|Alpha all sorts of things about my life and times—and have it immediately generate reports about them. Not only being able to act as an adjunct to my personal memory, but also to be able to do automatic computational history—explaining how and why things happened—and then making projections and predictions.” What sort of things have you been able to predict based on this data set you released?

Stephen Wolfram: One thing I found out is that I’m much more habitual than I ever imagined. It’s amazing to see oneself turned into full distribution. It got me thinking about lots of different ways that I could improve my life and times with data. What I realized is that one of the more important things is to have quick feedback about what’s going on, so you don’t have to wait for a year to go back and look at what happened. You can just see it quickly.

I was actually embarrassed that I hadn’t had a real-time display of the history of my unread e-mail, as a function of time. We built that after this blog post, and I have found it’s quite amazing. By having this feedback, I’m able to work more efficiently. It’s also telling me things like, Gosh, if I ignore my e-mail for four days, or five days, or ten days, or something, it will get totally out of hand, and it would take me weeks and weeks to recover from that. Those sound like very mundane [insights], but in terms of how one actually spends one’s time, they can be quite significant effects.

Q: Nobel Prize–winning economist Daniel Kahneman tells us that people, even particularly smart people in extremely high-performing situations, will consistently underestimate how much time it takes them to complete a certain task. So now that you’ve been able to rid yourself of subjective bias in terms of how long it takes to complete tasks, it sounds like you’ve actually been able to see efficiency improvements, just based on taking a look at what you can get done, how long it actually takes, versus how long you think and that sort of thing.

Wolfram: I have pretty good metrics now. If I’m going to write out something for some talk I’m going to give, or something like this, I know how long it takes me now to give the talk, or to write it out. I know how long to set aside. I have learned that there’s no point in starting early, because I won’t finish it until just in time anyway. I have to know how long it’s actually going to take to finish so that I can get it done in an efficient way. If I start it too early, it takes me longer. The task expands into the space available, so to speak. (…)

Q: Your seminal book, A New Kind of Science, is ten years old. You recently wrote a blog post on the anniversary. Can you talk a little bit about the future of science?

Wolfram: The main idea of A New Kind of Science was to introduce a new way to model things in the world. Three hundred years ago, there was this big transformation in science when it was realized that one could use math, and the formal structure of math, to talk about the natural world. Using math, one could actually compute what should happen in the world—how planets should move, how comets should move, and all those kinds of things.

That has been the dominant paradigm for the last 300 years for the exact sciences. Essentially it says, Let’s find a math equation that represents what we’re talking about, and let’s use that math equation to predict what a system will do. That paradigm has also been the basis for most of our engineering: Let’s figure out how this bridge should work using calculus equations, and so on. Or, Let’s work out this electric circuit using some other kind of differential equation, or algebraic equation or whatever.

That approach has been pretty successful for lots of things. It’s led to a certain choice of subject matter for science, because the science has tended to choose subject matter where it can be successful.

The same is true with engineering. We’ve pursued the particular directions of engineering because we know how to make them work. My goal was to look at the things that science has not traditionally had so much to say about—typically, systems that are more complex in their behavior, and so on—and to ask what we can do with these.

It’s a great approach, but it’s limited. The question is, what’s the space with all possible models that you can imagine using?

A good way to describe that space is to think about computer programs. Any program is [a set of] defined rules for how a system works. For example, when we look at nature, we would ask what kinds of programs nature is using to do what it does, to grow the biological organisms it grows, how fluids flow the way they do—all those kinds of things.

I’ve discovered that very simple programs can serve as remarkably accurate models for lots of things that happen in nature. In natural science, that gives us a vastly better pool of possible models to use than we had from just math. We then see that these may be good models for how nature works. They tell us something about how nature is so easily able to make all this complicated stuff that would be very hard for us to make if we just imagined that nature worked according to math.

Now we realize that there’s a whole different kind of engineering that we can do, and we can look at all of these possible simple programs and use those to create our engineering systems.

This is different from the traditional approach, where I would say, I know these things that work. I know about levers. I know about pulleys. I know about this. I know about that. Let me incrementally build the system where I, as an engineer, know every step of how the thing is going to work as I construct it.

Q: One of the key themes of A New Kind of Science, and also a key theme in your TED talk, is this notion of irreducibility. There are certain things that can’t really be predicted, no matter what. You can’t model them in advance. They have to be experienced. And I wonder, given the future of digitized knowledge, the exponential growth in structured and unstructured data that we can look forward to over the coming decades, is it possible that the space of irreducible knowledge, of unpredictable knowledge—while it will still always exist—is shrinking? Would this mean that the space of predictable knowledge is in fact growing?

Wolfram: Interesting question. Once we know enough, will we just be able to predict everything? In Wolfram|Alpha, for example, we know how to compute lots of things that you might have imagined weren’t predictable. You have a tree in your backyard. It’s such and such a size right now. How big will it be in 10 years? It’s now more or less predictable.

As we accumulate more data, there will certainly be patterns that can be seen, and things that one can readily see that are predictable. You can expect to have a dashboard—with certain constraints—showing how things are likely to evolve for you. You then get to make decisions: Should I do this? Should I do that?

But some part of the world is never going to be predictable. It just has this kind of computational irreducibility. We just have to watch it unfold, so to speak. There’s no way we can outrun it. I suspect that, in lots of practical situations, things will become a lot more predictable. That’s a big part of what we’re trying to address with Wolfram|Alpha. Take the corpus of knowledge that our civilization has accumulated and set it up so that you can automatically make use of it.

There are three reasons why one can’t predict the things that can’t be predicted. The first reason is not enough underlying data. The second is computational irreducibility—it’s just hard to predict. The third is simply not knowing enough to be able to predict something. You, as an individual, don’t happen to know enough about that particular area to be able to do it. I’m trying to solve that problem.

We’re seeing a transition happening right now, and more and more things can be figured out in an automatic way. We’re seeing computation that is finally impinging on our lives in a very direct way. There are lots of things that used to be up to us to estimate, but now they’re just being computed for us: a camera that auto focuses, for example, or that picks out faces and figures what to do, or automatically clicks the shutter when it sees a smile—those kinds of things. Those are all very human judgment activities, and now they’re automated.

I think this is the trend of technology. It’s the one thing, I suppose, in human history that has actually had a progression: There’s more technology; there are more layers of automation about what we do.”

Stephen Wolfram, renowned British scientist and the chief designer of the Mathematica software application and the Wolfram Alpha answer engine, interviewed by Patrick Tucker in Science and a New Kind of Prediction: An Interview with Stephen Wolfram, IEET, Apr 26, 2013  (Photo source)


Stephen Wolfram: Computing a theory of everything, TED, Feb 2010.

See also:

The Rise of Big Data. How It’s Changing the Way We Think About the World
Dirk Helbing on A New Kind Of Socio-inspired Technology
Information tag on Lapidarium notes

Apr 27th, Sat

The Rise of Big Data. How It’s Changing the Way We Think About the World


"In the third century BC, the Library of Alexandria was believed to house the sum of human knowledge. Today, there is enough information in the world to give every person alive 320 times as much of it as historians think was stored in Alexandria’s entire collection — an estimated 1,200 exabytes’ worth. If all this information were placed on CDs and they were stacked up, the CDs would form five separate piles that would all reach to the moon. (…)

Using big data will sometimes mean forgoing the quest for why in return for knowing what. (…)

There will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity. (…)

Datafication is not the same as digitization, which takes analog content — books, films, photographs — and converts it into digital information, a sequence of ones and zeros that computers can read. Datafication is a far broader activity: taking all aspects of life and turning them into data. Google’s augmented-reality glasses datafy the gaze. Twitter datafies stray thoughts. LinkedIn datafies professional networks.

Once we datafy things, we can transform their purpose and turn the information into new forms of value. For example, IBM was granted a U.S. patent in 2012 for “securing premises using surface-based computing technology” — a technical way of describing a touch-sensitive floor covering, somewhat like a giant smartphone screen. Datafying the floor can open up all kinds of possibilities. The floor could be able to identify the objects on it, so that it might know to turn on lights in a room or open doors when a person entered. Moreover, it might identify individuals by their weight or by the way they stand and walk. (…)

This misplaced trust in data can come back to bite. Organizations can be beguiled by data’s false charms and endow the numbers with more meaning than they deserve. That is one of the lessons of the Vietnam War. U.S. Secretary of Defense Robert McNamara became obsessed with using statistics as a way to measure the war’s progress. He and his colleagues fixated on the number of enemy fighters killed. Relied on by commanders and published daily in newspapers, the body count became the data point that defined an era. To the war’s supporters, it was proof of progress; to critics, it was evidence of the war’s immorality. Yet the statistics revealed very little about the complex reality of the conflict. The figures were frequently inaccurate and were of little value as a way to measure success. Although it is important to learn from data to improve lives, common sense must be permitted to override the spreadsheets. (…)

Ultimately, big data marks the moment when the “information society” finally fulfills the promise implied by its name. The data take center stage. All those digital bits that have been gathered can now be harnessed in novel ways to serve new purposes and unlock new forms of value. But this requires a new way of thinking and will challenge institutions and identities. In a world where data shape decisions more and more, what purpose will remain for people, or for intuition, or for going against the facts? If everyone appeals to the data and harnesses big-data tools, perhaps what will become the central point of differentiation is unpredictability: the human element of instinct, risk taking, accidents, and even error. If so, then there will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity to ensure that they are not crowded out by data and machine-made answers.

This has important implications for the notion of progress in society. Big data enables us to experiment faster and explore more leads. These advantages should produce more innovation. But at times, the spark of invention becomes what the data do not say. That is something that no amount of data can ever confirm or corroborate, since it has yet to exist. If Henry Ford had queried big-data algorithms to discover what his customers wanted, they would have come back with “a faster horse,” to recast his famous line. In a world of big data, it is the most human traits that will need to be fostered — creativity, intuition, and intellectual ambition — since human ingenuity is the source of progress.

Big data is a resource and a tool. It is meant to inform, rather than explain; it points toward understanding, but it can still lead to misunderstanding, depending on how well it is wielded. And however dazzling the power of big data appears, its seductive glimmer must never blind us to its inherent imperfections. Rather, we must adopt this technology with an appreciation not just of its power but also of its limitations.”

Kenneth Neil Cukier and Viktor Mayer-Schoenberger, The Rise of Big Data, Foreign Affairs, May/June 2013. (Photo: John Elk)

See also:

Dirk Helbing on A New Kind Of Socio-inspired Technology
Information tag on Lapidarium notes

Apr
4th
Thu
permalink

Philosophers and the age of their influential contributions

image (source)

Mar
27th
Wed
permalink

Hilary Putnam - ‘A philosopher in the age of science’


"Imagine two scientists are proposing competing theories about the motion of the moon. One scientist argues that the moon orbits the earth at such and such a speed due to the effects of gravity and other Newtonian forces. The other, agreeing to the exact same observations, argues that behind Newtonian forces there are actually undetectable space-aliens who are using sophisticated tractor beams to move every object in the universe. No amount of observation will resolve this conflict. They agree on every observation and measurement. One just has a more baroque theory than the other. Reasonably, most of us think the simpler theory is better.

But when we ask why this theory is better, we find ourselves resorting to things that are patently non-factual. We may argue that theories which postulate useless entities are worse than simpler ones—citing the value of simplicity. We may argue that the space-alien theory contradicts too many other judgements—citing the value of coherence. We can give a whole slew of reasons why one theory is better than another, but there is no rulebook out there for scientists to point to which resolves the matter objectively. Even appeals to the great pragmatic value of the first theory, or arguments that point out the lack of explanatory and predictive power of the space-alien theory, are still appeals to a value. No amount of observation will tell you why being pragmatic makes one theory better—it is something for which you have to argue. No matter what kind of fact we are trying to establish, it is going to be inextricably tied to the values we hold. (…)

In [Hilary Putnam’s] view, there is no reason to suppose that a complete account of reality can be given using a single set of concepts. That is, it is not possible to reduce all types of explanation to one set of objective concepts. Suppose I say, “Keith drove like a maniac” and you ask me why. We would usually explain the event in terms of value-laden concepts like intention, emotion, and so on—“Keith was really stressed out”—and this seems to work perfectly fine. Now we can also take the exact same event and describe it using an entirely different set of scientific concepts— say “there was a chain of electrochemical reactions from this brain to this foot” or “there was x pressure on the accelerator which caused y torque on the wheels.” These might be true descriptions, but they simply don’t give us the whole or even a marginally complete picture of Keith driving like a maniac. We could describe every single relevant physical detail of that event and still have no explanation. Nor, according to Putnam, should we expect there to be. The full scope of reality is simply too complex to be fully described by one method of explanation.

The problem with all of this, and one that Putnam has struggled with, is what sort of picture of reality we are left with once we accept these three central arguments: the collapse of the fact-value dichotomy, the truth of semantic externalism and conceptual relativity. (…)

We could—like Putnam before the 1970s—become robust realists and simply accept that values and norms are no less a part of the world than elementary particles and mathematical objects. We could—like Putnam until the 1990s—become “internal realists” and, in a vaguely Kantian move, define reality in terms of mind-dependent concepts and idealised rational categories. Or we could adopt Putnam’s current position—a more modest realism which argues that there is a mind-independent world out there and that it is compatible with our ordinary human values. Of course Putnam has his reasons for believing what he does now, and they largely derive from his faith in our ability to represent reality correctly. But the strength of his arguments convincing us to be wary of the scientific stance leaves us with little trust left in it.”

A philosopher in the age of science, Prospect, March 14, 2013. [Hilary Putnam — American philosopher, mathematician and computer scientist who has been a central figure in analytic philosophy since the 1960s, currently Cogan University Professor Emeritus at Harvard University.]

Mar
3rd
Sun
permalink

Rolf Dobelli: News is to the mind what sugar is to the body

   image

"We humans seem to be natural-born signal hunters, we’re terrible at regulating our intake of information. We’ll consume a ton of noise if we sense we may discover an added ounce of signal. So our instinct is at war with our capacity for making sense.”

Nicholas Carr, A little more signal, a lot more noise, Rough Type, May 30, 2012.

"When people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix. The world becomes Keats’s “waking dream,” as the writer Kevin Kelly puts it.”

Adam Gopnik on The Information and How the Internet gets inside us, 2011

"Our brains are wired to pay attention to visible, large, scandalous, sensational, shocking, peoplerelated, story-formatted, fast changing, loud, graphic onslaughts of stimuli. Our brains have limited attention to spend on more subtle pieces of intelligence that are small, abstract, ambivalent, complex, slow to develop and quiet, much less silent. News organizations systematically exploit this bias. News media outlets, by and large, focus on the highly visible. They display whatever information they can convey with gripping stories and lurid pictures, and they systematically ignore the subtle and insidious, even if that material is more important. News grabs our attention; that’s how its business model works. Even if the advertising model didn’t exist, we would still soak up news pieces because they are easy to digest and superficially quite tasty. The highly visible misleads us. (…)

  • Terrorism is overrated. Chronic stress is underrated.
  • The collapse of Lehman Brothers is overrated. Fiscal irresponsibility is underrated.
  • Astronauts are overrated. Nurses are underrated.
  • Britney Spears is overrated. IPCC reports are underrated.
  • Airplane crashes are overrated. Resistance to antibiotics is underrated.

(…)

Afraid you will miss “something important”? From my experience, if something really important happens, you will hear about it, even if you live in a cocoon that protects you from the news. Friends and colleagues will tell you about relevant events far more reliably than any news organization. They will fill you in with the added benefit of meta-information, since they know your priorities and you know how they think. You will learn far more about really important events and societal shifts by reading about them in specialized journals, in-depth magazines or good books and by talking to the people who know. (…)

The more “news factoids” you digest, the less of the big picture you will understand. (…)

Thinking requires concentration. Concentration requires uninterrupted time. News items are like free-floating radicals that interfere with clear thinking. News pieces are specifically engineered to interrupt you. They are like viruses that steal attention for their own purposes. (…)

This is about the inability to think clearly because you have opened yourself up to the disruptive factoid stream. News makes us shallow thinkers. But it’s worse than that. News severely affects memory. (…)

News is an interruption system. It seizes your attention only to scramble it. Besides a lack of glucose in your blood stream, news distraction is the biggest barricade to clear thinking. (…)

In the words of Professor Michael Merzenich (University of California, San Francisco), a pioneer in the field of neuroplasticity: “We are training our brains to pay attention to the crap.” (…)

Good professional journalists take time with their stories, authenticate their facts and try to think things through. But like any profession, journalism has some incompetent, unfair practitioners who don’t have the time – or the capacity – for deep analysis. You might not be able to tell the difference between a polished professional report and a rushed, glib, paid-by-the-piece article by a writer with an ax to grind. It all looks like news.

My estimate: fewer than 10% of the news stories are original. Less than 1% are truly investigative. And only once every 50 years do journalists uncover a Watergate.

Many reporters cobble together the rest of the news from other people’s reports, common knowledge, shallow thinking and whatever the journalist can find on the internet. Some reporters copy from each other or refer to old pieces, without necessarily catching up with any interim corrections. The copying and the copying of the copies multiply the flaws in the stories and their irrelevance. (…)

Overwhelming evidence indicates that forecasts by journalists and by experts in finance, social development, global conflicts and technology are almost always completely wrong. So, why consume that junk?

Did the newspapers predict World War I, the Great Depression, the sexual revolution, the fall of the Soviet empire, the rise of the Internet, resistance to antibiotics, the fall of Europe’s birth rate or the explosion in depression cases? Maybe you’d find one or two correct predictions in a sea of millions of mistaken ones. Incorrect forecasts are not only useless, they are harmful.

To increase the accuracy of your predictions, cut out the news and roll the dice or, if you are ready for depth, read books and knowledgeable journals to understand the invisible generators that affect our world. (…)

I have now gone without news for a year, so I can see, feel and report the effects of this freedom first hand: less disruption, more time, less anxiety, deeper thinking, more insights. It’s not easy, but it’s worth it.”

Table of Contents:

No 1 – News misleads us systematically
No 2 – News is irrelevant
No 3 – News limits understanding
No 4 – News is toxic to your body
No 5 – News massively increases cognitive errors
No 6 – News inhibits thinking
No 7 – News changes the structure of your brain
No 8 – News is costly
No 9 – News sunders the relationship between reputation and achievement
No 10 – News is produced by journalists
No 11 – Reported facts are sometimes wrong, forecasts always
No 12 – News is manipulative
No 13 – News makes us passive
No 14 – News gives us the illusion of caring
No 15 – News kills creativity

Rolf Dobelli, Swiss novelist, writer, entrepreneur and curator of zurich.minds; to read the full essay, see Avoid News. Towards a Healthy News Diet (pdf), 2010. (Illustration: Information Overload by taylorboren)

See also:

The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Nicholas Carr on the evolution of communication technology and our compulsive consumption of information
Does Google Make Us Stupid?
Nicholas Carr on what the internet is doing to our brains?
How the Internet Affects Our Memories: Cognitive Consequences of Having Information at Our Fingertips
☞ Dr Paul Howard-Jones, The impact of digital technologies on human wellbeing (pdf), University of Bristol
William Deresiewicz on multitasking and the value of solitude
Information tag on Lapidarium

Feb
20th
Wed
permalink

Albert Bandura on social learning, the origins of morality, and the impact of technological change on human nature

image

"Technology has changed the speed and the scope of social influence and has really transformed our realities. Social cognitive theory is very compatible with that. Other learning theories were linked to learning by direct experience, but when I look around today, I see that most of our learning is by social modeling and through indirect experiences. Errors can be very costly and you can’t afford to develop our values, our competences, our political systems, our religious systems through trial and error. Modeling shortcuts this process. (…)

With new technologies, we’re essentially transcending our physical environment and more and more of our values and attitudes and behavior are now shaped in the symbolic environment – the symbolic environment is the big one rather than the actual one. The changes are so rapid that there are more and more areas of life now in which the cyber world is really essential. One model can affect millions of people worldwide, it can shape their experiences and behaviors. We don’t have to rely on trial and error.

There’s a new challenge now: When I was growing up, we didn’t have all this technology, so we were heavily involved in personal relationships. Now the cyber world is available, and it’s hard to maintain a balance in the priorities of life. (…)

The internet can provide you with fantastic globalized information – but the problem is this: It undermines our ability for self-regulation or self-management. The first way to undermine productivity is temporizing, namely, we’re going to put off what we need to do until tomorrow, when we have the illusion that we’ll have more time. So we’re dragging the stuff with us. But the really big way is detouring, and wireless devices are now giving us an infinite detour. They create the illusion of busyness. I talked to the author of a bestseller and asked him about his writing style. He said: ‘Well, I have to check my e-mails and then I get down to serious writing, but then I get back to the e-mails.’ The challenge of the cyber world is establishing a balance between our digital life and life in the real world. (…)

The origins of morality

Originally our behavior was pretty much shaped by control, by the external consequences of our lives. So the question is: How did we acquire some standards? There are about three or four ways. One: We evaluate reactions to our behavior. We behave in certain ways, in good ways, in bad ways, and then we receive feedback. We begin to adopt standards from how the social environment reacts to our behavior. Two: We see others behaving in certain ways and we are either self-critical or self-approving. Three: We have precepts that tell us what is good and bad. And once we have certain self-sanctions, we have two other potent factors that can influence our behavior: People will behave in certain ways because they want to avoid legal sanctions to their behavior or the social sanctions in their environment. (…)

Many of our theories of morality are abstract. But the primary concern about the acquisition of morality and about the modes of moral reasoning is only one half of the story, the less interesting half. We adopt standards, but we have about eight mechanisms by which we selectively disengage from those standards. So the challenge to explain is not why do people behave in accordance with these standards, but how is it that people can behave cruelly and still feel good about themselves. Our problem is good people doing bad things – and not evil people doing bad things. (…)

Everyday people can behave very badly. In the book I’m writing on that topic I have a long chapter on moral disengagement in the media, in the gun industry, in the tobacco industry, in the corporate world, in the finance industry – there’s fantastic data from the last few years – in terrorism and as an impediment to environmental sustainability. That’s probably the most important area of moral disengagement. We have about forty or fifty years, and if we don’t get our act together, we’ll have a very hard time. It’s going to be awfully crowded on earth and a good part of our cities will be under water. And what are we doing? We don’t have the luxury of time anymore. (…)

Human nature is capable of vindicating behavior. It isn’t that people are bad by nature. But they have a very playful and rewarding lifestyle, filled with gadgets and air conditioning, and they don’t want to give it up. (…)

Q: ‘The story of men is a story about violence, love, power, victory and defeat’ – that’s how poets talk about the course of history. But from an analytic point of view…

A. Bandura: That’s not true for all societies. We assume that aggression is inbred, but some societies are remarkably pacifistic. And we can also see large variations within a society. But the most striking example might be the transformation from warrior societies into peaceful societies. Switzerland is one example. Sweden is another: Those Vikings were out mugging everyone, and people would pray for protection: “Save our souls from the fury of the Norsemen!” And now, if you look at that society, it’s hard to find child abuse or domestic violence. Sweden has become a mediator of peace.

Q: In German, there’s the term “Schicksalsgemeinschaft,” which translates as “community of fate”: It posits that a nation is bound together by history. Do you think that’s what defines a society: A common history? Or is it religion, or the language we speak?

A. Bandura: All of the above. We put a lot of emphasis on biological evolution, but what we don’t emphasize is that cultures evolve, too. These changes are transmitted from one generation to another. A few decades ago, the role of women was to be housewives and it was considered sinful to co-habit without being married. If you look at the role of women today, there’s a fantastic transformation in a short period of time; change is accelerated. Homogenization is important, picking things from different cultures, cuisines, music traditions, forms of behavior, and so on. But we have also polarization: Bin Laden’s hate of the West, for example. And there’s hybridization as well. (…)

And society is changing, too. Now it’s considered completely normal to live with your partner without being married. In California, it was only about 40 years ago that homosexuality was treated as a disease. Then people protested, and eventually they got the state to change the diagnostic category to sexual orientation rather than a disease. Psychiatry, under public pressure, changed the diagnostic system. (…)

Q: It’s quite interesting to compare Russia and China. Russia has a free internet, so the reaction to protests is very different than in China. If social networks become increasingly global, do you foresee something like a global set of values as well?

A. Bandura: Yes, but there is another factor here, namely the tremendous power of multinational corporations. They now shape global culture. A lot of these global forces are undermining the collective African society, for example. The society no longer has much control over the economy. In order to restore some power and leverage, societies are going to organize into unions. We will see more partnerships around the world. (…)

The revolutionary tendency of technology has increased our sense of agency. If I have access to all global knowledge, I have fantastic capacities to educate myself. (…) The important thing in psychology is that we need a theory of human agency, rather than arguing that we’re controlled by neural networks. In every aspect of our lives we now have a greater capacity for exercising agency. (…)

Q: But at the same time globalization removes us from the forces that shape our environment.

A. Bandura: The problems are powerful transnational forces. They can undermine the capacity to run our own society: Because of what happens in Iran, gas prices might soon hit five dollars per gallon in the US. That’s where the pressure comes from for systems and societies to form blocks or build up leverage to protect the quality of life of their citizens. But we can see that a global culture is emerging. One example is the transformation of the status of women. Oppressive regimes see that women are able to drive cars, and they cannot continue to deny that right to them. We’re really changing norms. Thanks to the ubiquity of television, we’re motivating them and showing them that they have the capability to initiate change. It’s about agency: Change is deeply rooted in the belief that my actions can have an effect in the world.”

Albert Bandura, a psychologist who is the David Starr Jordan Professor Emeritus of Social Science in Psychology at Stanford University. For almost six decades, he has been responsible for contributions to many fields of psychology, including social cognitive theory, therapy and personality psychology, and was also influential in the transition between behaviorism and cognitive psychology. “We have transcended our biology”, The European, 18.02.2013. (Photo: Linda A. Cicero / Stanford News Service)

See also:

‘Human beings are learning machines,’ says philosopher (nature vs. nurture), Lapidarium notes
What Neuroscience Tells Us About Morality: ‘Morality is a form of decision-making, and is based on emotions, not logic’

Feb
10th
Sun
permalink

Universality: In Mysterious Pattern, Math and Nature Converge

image

"In 1999, while sitting at a bus stop in Cuernavaca, Mexico, a Czech physicist named Petr Šeba noticed young men handing slips of paper to the bus drivers in exchange for cash. It wasn’t organized crime, he learned, but another shadow trade: Each driver paid a “spy” to record when the bus ahead of his had departed the stop. If it had left recently, he would slow down, letting passengers accumulate at the next stop. If it had departed long ago, he sped up to keep other buses from passing him. This system maximized profits for the drivers. And it gave Šeba an idea. (…)

The interaction between drivers caused the spacing between departures to exhibit a distinctive pattern previously observed in quantum physics experiments. (…) “We felt here some kind of similarity with quantum chaotic systems.” (…) A “spy” network makes the decentralized bus system more efficient. As a consequence, the departure times of buses exhibit a ubiquitous pattern known as “universality.” (…)

Subatomic particles have little to do with decentralized bus systems. But in the years since the odd coupling was discovered, the same pattern has turned up in other unrelated settings. Scientists now believe the widespread phenomenon, known as “universality,” stems from an underlying connection to mathematics, and it is helping them to model complex systems from the internet to Earth’s climate. (…)

                image

The red pattern exhibits a precise balance of randomness and regularity known as “universality,” which has been observed in the spectra of many complex, correlated systems. In this spectrum, a mathematical formula called the “correlation function” gives the exact probability of finding two lines spaced a given distance apart. (…)
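
[For reference, the best-known version of that correlation function, which the article does not spell out, is the sine-kernel pair correlation of random matrix theory. For the Gaussian unitary ensemble, the density of pairs of lines a normalized distance s apart, relative to independent random points, is

R_2(s) = 1 - \left( \frac{\sin(\pi s)}{\pi s} \right)^2 ,

which vanishes as s approaches 0, so nearby lines repel.]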

The pattern was first discovered in nature in the 1950s in the energy spectrum of the uranium nucleus, a behemoth with hundreds of moving parts that quivers and stretches in infinitely many ways, producing an endless sequence of energy levels. In 1972, the number theorist Hugh Montgomery observed it in the zeros of the Riemann zeta function, a mathematical object closely related to the distribution of prime numbers. In 2000, Krbálek and Šeba reported it in the Cuernavaca bus system. And in recent years it has shown up in spectral measurements of composite materials, such as sea ice and human bones, and in signal dynamics of the Erdös–Rényi model, a simplified version of the internet named for Paul Erdös and Alfréd Rényi. (…)

Each of these systems has a spectrum — a sequence like a bar code representing data such as energy levels, zeta zeros, bus departure times or signal speeds. In all the spectra, the same distinctive pattern appears: The data seem haphazardly distributed, and yet neighboring lines repel one another, lending a degree of regularity to their spacing. This fine balance between chaos and order, which is defined by a precise formula, also appears in a purely mathematical setting: It defines the spacing between the eigenvalues, or solutions, of a vast matrix filled with random numbers. (…)
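
[The eigenvalue version of this balance is easy to see numerically. The sketch below is mine, not from the article: it diagonalizes one large random symmetric matrix with NumPy and compares the nearest-neighbour gaps with the Wigner surmise.]

# Level repulsion in the spectrum of a random symmetric (GOE-type) matrix,
# compared with the Wigner surmise p(s) = (pi*s/2) * exp(-pi*s^2/4).
import numpy as np

rng = np.random.default_rng(0)
n = 2000

a = rng.normal(size=(n, n))
h = (a + a.T) / 2                      # random real symmetric matrix

eigs = np.sort(np.linalg.eigvalsh(h))  # its spectrum

# Use eigenvalues near the centre of the spectrum, where the density is
# roughly constant, and rescale the gaps to have mean 1.
centre = eigs[int(0.4 * n):int(0.6 * n)]
gaps = np.diff(centre)
gaps /= gaps.mean()

# Small gaps are rare: "neighboring lines repel one another."
hist, edges = np.histogram(gaps, bins=15, range=(0, 3), density=True)
mids = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi * mids / 2) * np.exp(-np.pi * mids**2 / 4)
for m, observed, predicted in zip(mids, hist, wigner):
    print(f"s = {m:4.2f}   observed {observed:5.3f}   Wigner {predicted:5.3f}")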

“It seems to be a law of nature,” said Van Vu, a mathematician at Yale University who, with Terence Tao of the University of California, Los Angeles, has proven universality for a broad class of random matrices.

Universality is thought to arise when a system is very complex, consisting of many parts that strongly interact with each other to generate a spectrum. The pattern emerges in the spectrum of a random matrix, for example, because the matrix elements all enter into the calculation of that spectrum. But random matrices are merely “toy systems” that are of interest because they can be rigorously studied, while also being rich enough to model real-world systems, Vu said. Universality is much more widespread. Wigner’s hypothesis (named after Eugene Wigner, the physicist who discovered universality in atomic spectra) asserts that all complex, correlated systems exhibit universality, from a crystal lattice to the internet.


Mathematicians are using random matrix models to study and predict some of the internet’s properties, such as the size of typical computer clusters. (Illustration: Matt Britt)

The more complex a system is, the more robust its universality should be, said László Erdös of the University of Munich, one of Yau’s collaborators. “This is because we believe that universality is the typical behavior.”

— Natalie Wolchover, In Mysterious Pattern, Math and Nature Converge, Wired, Feb 6, 2013. (Photo: Marco de Leija)

See also:

Mathematics of Disordered Quantum Systems and Matrices, IST Austria.