Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Physics
Poetry
Politics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Archive

Oct 28th, Mon

Douglas Hofstadter: The Man Who Would Teach Machines to Think

"All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.”

Douglas Hofstadter, 1979, cited in Vinod K. Wadhawan, Complexity Explained: 17. Epilogue, Nirmukta, 04 April 2010.

Image: M. C. Escher, Print Gallery. Hofstadter calls this Escher work a “pictorial parable for Gödel’s Incompleteness Theorem.” Why? Look to the center of the painting: is there any logical way to complete it?

"On [Douglas] Hofstadter's office wall is a somewhat tattered reproduction of one of his favorite mind-twisting M. C. Escher prints, The Print Gallery.” In it, a boy stands looking at a print depicting a town where a woman looks down from her window at the roof of a gallery in which - yes - the boy stands looking at the print. We appreciate the paradox without being thrown by it, because we are outside looking in. Something like that creates our own unfathomable feelings of self. The self, subject and object, perceiver and perceived, is everywhere in the paradox.

It is a ”circling back,” the Tortoise tells Achilles, ”of a complex representation of the system together with its representations of all the rest of the world.”

”It is just so hard, emotionally,” Achilles tells the Tortoise, ”to acknowledge that a ‘soul’ emerges from so physical a system.” (…)

But philosophers like  [Daniel] Dennett believe, with Hofstadter, that scientific answers can be found without cheating by reducing the question to a manageable scale. (…) [T]he danger of looking only at the lowest biological level is in losing sight of the essential humanity that, in Hofstadter’s view, exists in the pattern and in the paradox. ”There seems to be no alternative to accepting some sort of incomprehensible quality to existence,” as Hofstadter puts it. ”Take your pick.” 

James Gleick, on Douglas R. Hofstadter in Exploring the Labyrinth of the Mind, The New York Times, August 21, 1983.

"In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” (…)

“Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes” and a young man’s style as “hipsterish” and on and on ceaselessly throughout your day. That’s what it means to understand. But how does understanding work? For three decades, Hofstadter and his students have been trying to find out, trying to build “computer models of the fundamental mechanisms of thought.”

“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.

“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.

“Beware,” he writes, “of innocent phrases like ‘Oh, yeah, that’s exactly what happened to me!’ … behind whose nonchalance is hidden the entire mystery of the human mind.” (…)

[Hofstadter] spends most of his time in his study, two rooms on the top floor of his house, carpeted, a bit stuffy, and messier than he would like. His study is the center of his world. He reads there, listens to music there, studies there, draws there, writes his books there, writes his e‑mails there. (Hofstadter spends four hours a day writing e‑mail. “To me,” he has said, “an e‑mail is identical to a letter, every bit as formal, as refined, as carefully written … I rewrite, rewrite, rewrite, rewrite all of my e‑mails, always.”) He lives his mental life there, and it shows. Wall-to-wall there are books and drawings and notebooks and files, thoughts fossilized and splayed all over the room. It’s like a museum for his binges, a scene out of a brainy episode of Hoarders.

“Anything that I think about becomes part of my professional life,” he says. Daniel Dennett, who co-edited The Mind’s I with him, has explained that “what Douglas Hofstadter is, quite simply, is a phenomenologist, a practicing phenomenologist, and he does it better than anybody else. Ever.” He studies the phenomena—the feelings, the inside actions—of his own mind. “And the reason he’s good at it,” Dennett told me, “the reason he’s better than anybody else, is that he is very actively trying to have a theory of what’s going on backstage, of how thinking actually happens in the brain.” (…)

He makes photocopies of his notebook pages, cuts them up with scissors, and stores the errors in filing cabinets and labeled boxes around his study.

For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.” Correct speech isn’t very interesting; it’s like a well-executed magic trick—effective because it obscures how it works. What Hofstadter is looking for is “a tip of the rabbit’s ear … a hint of a trap door.”

As the wind tunnel was to the Wright brothers, so the computer is to FARG [Hofstadter’s Fluid Analogies Research Group]. The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited. In Hofstadter’s view, this is the great opportunity of artificial intelligence. Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why. “I have always felt that the only hope of humans ever coming to fully understand the complexity of their minds,” Hofstadter has written, “is by modeling mental processes on computers and learning from the models’ inevitable failures.” (…)

But very few people, even admirers of GEB, know about the book [Fluid Concepts and Creative Analogies] or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.

The modern era of mainstream AI—an era of steady progress and commercial success that began, roughly, in the early 1990s and continues to this day—is the long unlikely springtime after a period, known as the AI Winter, that nearly killed off the field.

It came down to a basic dilemma. On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable—and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”

Machine learning

The “expert systems” that had once been the field’s meal ticket were foundering because of their brittleness. Their approach was fundamentally broken. Take machine translation from one language to another, long a holy grail of AI. The standard attack involved corralling linguists and translators into a room and trying to convert their expertise into rules for a program to follow. The standard attack failed for reasons you might expect: no set of rules can ever wrangle a human language; language is too big and too protean; for every rule obeyed, there’s a rule broken.

If machine translation was to survive as a commercial enterprise—if AI was to survive—it would have to find another way. Or better yet, a shortcut.

The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence. One such device, of course, is the human brain—but the whole point is to avoid grappling with the brain’s complexity. So what you do instead is start with a machine so simple, it almost doesn’t work: a machine, say, that randomly spits out French words for the English words it’s given.

Imagine a box with thousands of knobs on it. Some of these knobs control general settings: given one English word, how many French words, on average, should come out? And some control specific settings: given jump, what is the probability that saut comes next? The question is, just by tuning these knobs, can you get your machine to convert sensible English into sensible French?

It turns out that you can. What you do is feed the machine English sentences whose French translations you already know. (Candide, IBM’s pioneering statistical translation system, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.) You proceed one pair at a time. After you’ve entered a pair, take the English half and feed it into your machine to see what comes out in French. If that sentence is different from what you were expecting—different from the known correct translation—your machine isn’t quite right. So jiggle the knobs and try again. After enough feeding and trying and jiggling, feeding and trying and jiggling again, you’ll get a feel for the knobs, and you’ll be able to produce the correct French equivalent of your English sentence.

By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. And the beauty is that you never needed to program the machine explicitly; you never needed to know why the knobs should be twisted this way or that. (…)
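The knob-tuning loop described above is easy to sketch in code. What follows is not Candide’s actual algorithm (IBM used statistical alignment models estimated from the data, not this update rule); it is a minimal, invented word-for-word illustration of the feed-try-jiggle cycle: translate a sentence whose French is known, compare, and nudge the responsible knobs.

```python
from collections import defaultdict

# Toy "box of knobs": one knob (a score) per (English, French) word pair.
# Training is the feed-try-jiggle loop from the text: translate a sentence
# whose French is already known, and wherever the output differs, turn the
# knob for the correct word up and the knob for the wrong word down.

knobs = defaultdict(float)   # (en_word, fr_word) -> knob setting
fr_vocab = set()

def translate(en_sentence):
    # For each English word, emit the French word whose knob is highest.
    return [max(fr_vocab, key=lambda f: knobs[(e, f)]) for e in en_sentence]

def train(pairs, epochs=20):
    for _, fr in pairs:
        fr_vocab.update(fr)
    for _ in range(epochs):
        for en, fr in pairs:
            guess = translate(en)
            for e, wrong, right in zip(en, guess, fr):
                if wrong != right:            # output differs: jiggle
                    knobs[(e, right)] += 1.0
                    knobs[(e, wrong)] -= 1.0

# Three known sentence pairs (invented for illustration).
pairs = [(["the", "house"], ["la", "maison"]),
         (["the", "cat"],   ["le", "chat"]),
         (["the", "dog"],   ["le", "chien"])]
train(pairs)
print(translate(["the", "dog"]))   # usually ['le', 'chien']; the knob for
                                   # "the" can keep wobbling between 'le'
                                   # and 'la', which is itself instructive
```

Real systems replace these crude nudges with probabilities estimated over millions of pairs, but the shape of the loop is the same: feed, try, jiggle, repeat.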

Google has projects that gesture toward deeper understanding: extensions of machine learning inspired by brain biology; a “knowledge graph” that tries to map words, like Obama, to people or places or things. But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work. (…)

Ever since he was about 15, Hofstadter has read The Catcher in the Rye once every 10 years. In the fall of 2011, he taught an undergraduate seminar called “Why Is J. D. Salinger’s The Catcher in the Rye a Great Novel?” He feels a deep kinship with Holden Caulfield. When I mentioned that a lot of the kids in my high-school class didn’t like Holden—they thought he was a whiner—Hofstadter explained that “they may not recognize his vulnerability.” You imagine him standing like Holden stood at the beginning of the novel, alone on the top of a hill, watching his classmates romp around at the football game below. “I have too many ideas already,” Hofstadter tells me. “I don’t need the stimulation of the outside world.” (…)

“Ars longa, vita brevis,” Hofstadter likes to say. “I just figure that life is short. I work, I don’t try to publicize. I don’t try to fight.”

There’s an analogy he made for me once. Einstein, he said, had come up with the light-quantum hypothesis in 1905. But nobody accepted it until 1923. “Not a soul,” Hofstadter says. “Einstein was completely alone in his belief in the existence of light as particles—for 18 years.

“That must have been very lonely.”

— James Somers, The Man Who Would Teach Machines to Think, The Atlantic, Oct 23, 2013

Douglas Hofstadter is an American professor of cognitive science whose research focuses on the sense of “I”, consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979, which won both the Pulitzer Prize for general non-fiction and the National Book Award.

See also:

The Mathematical Art of M.C. Escher, Lapidarium notes

Jan 27th, Sun

Daniel C. Dennett on an attempt to understand the mind; autonomic neurons, culture and computational architecture

"What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension."

— Daniel C. Dennett, What Darwin’s theory of evolution teaches us about Alan Turing and artificial intelligence, Lapidarium

"I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They’re homunculi, and this looks like a regress, but it’s only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that’s a great way of thinking about cognitive science. It’s what good old-fashioned AI tried to do and still trying to do.

The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute. So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
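The McCulloch-Pitts unit Dennett is describing is simple enough to state in a few lines. The sketch below (our illustration, not Dennett’s) shows the whole idea: excitatory and inhibitory binary inputs summed against a firing threshold, from which the basic logic gates, and so in principle any Boolean computation, fall out.

```python
# A minimal McCulloch-Pitts logical neuron: binary inputs are either
# excitatory (+1) or inhibitory (-1), and the unit fires if and only if
# the net excitation reaches its threshold.

def mp_neuron(inputs, signs, threshold):
    """inputs: 0/1 activations; signs: +1 excitatory, -1 inhibitory."""
    total = sum(x * s for x, s in zip(inputs, signs))
    return 1 if total >= threshold else 0

# Logic gates differ only in wiring and threshold:
AND = lambda a, b: mp_neuron([a, b], [+1, +1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [+1, +1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],     threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```

Networks of such units suffice for any Boolean function, which is why the result was so exciting; the over-simplification Dennett goes on to describe lies in treating a living neuron as nothing more than this switch.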

The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it’s fed by a lot of different currents.

Evolutionary biologist David Haig has some lovely papers on intrapersonal conflicts where he’s talking about how even at the level of the genetics, even at the level of the conflict between the genes you get from your mother and the genes you get from your father, the so-called madumnal and padumnal genes, those are in opponent relations and if they get out of whack, serious imbalances can happen that show up as particular psychological anomalies.

We’re beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it’s much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it’s always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.

You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I’m just trying to get my head around how to think about that. (…)

The vision of the brain as a computer, which I still champion, is changing so fast. The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon. What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until the late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mind-boggling.

You couldn’t do it, but computer science gives us the ideas, the concepts of levels, virtual machines implemented in virtual machines implemented in virtual machines and so forth. We have these nice ideas of recursive reorganization of which your iPhone is just one example and a very structured and very rigid one at that.

We’re getting away from the rigidity of that model, which was worth trying for all it was worth. You go for the low-hanging fruit first. First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn’t work. Now, we know pretty well why it doesn’t work. So you’re going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It’s going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who’s in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.

You really don’t have to worry about one part of your laptop going rogue and trying out something on its own that the rest of the system doesn’t want to do. No, they’re all slaves. If they’re agents, they’re slaves. They are prisoners. They have very clear job descriptions. They get fed every day. They don’t have to worry about where the energy’s coming from, and they’re not ambitious. They just do what they’re asked to do and do it brilliantly with only the slightest tint of comprehension. You get all the power of computers out of these mindless little robotic slave prisoners, but that’s not the way your brain is organized.

Each neuron is imprisoned in your brain. I now think of these as cells within cells, as cells within prison cells. Realize that every neuron in your brain, every human cell in your body (leaving aside all the symbionts), is a direct descendant of eukaryotic cells that lived and fended for themselves for about a billion years as free-swimming, free-living little agents. They fended for themselves, and they survived.

They had to develop an awful lot of know-how, a lot of talent, a lot of self-protective talent to do that. When they joined forces into multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated. They became part of larger, more monolithic organizations. My hunch is that that’s true in general. We don’t have to worry about our muscle cells rebelling against us, or anything like that. When they do, we call it cancer, but in the brain I think that (and this is my wild idea) maybe only in one species, us, and maybe only in the obviously more volatile parts of the brain, the cortical areas, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral, a little bit like what happens when you let sheep or pigs go feral, and they recover their wild talents very fast.

Maybe a lot of the neurons in our brains are not just capable but, if you like, motivated to be more adventurous, more exploratory or risky in the way they comport themselves, in the way they live their lives. They’re struggling amongst themselves with each other for influence, just for staying alive, and there’s competition going on between individual neurons. As soon as that happens, you have room for cooperation to create alliances, and I suspect that a more free-wheeling, anarchic organization is the secret of our greater capacities of creativity, imagination, thinking outside the box and all that, and the price we pay for it is our susceptibility to obsessions, mental illnesses, delusions and smaller problems.

We got risky brains that are much riskier than the brains of other mammals, even more risky than the brains of chimpanzees, and this could be partly a matter of a few simple mutations in control genes that release some of the innate competitive talent that is still there in the genomes of the individual neurons. But I don’t think that genetics is the level to explain this. You need culture to explain it.

'Culture creates a whole new biosphere'

This, I speculate, is a response to our invention of culture; culture creates a whole new biosphere, in effect, a whole new cultural sphere of activity where there’s opportunities that don’t exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.

Everything I just said is very speculative. I’d be thrilled if 20 percent of it was right. It’s an idea, a way of thinking about brains and minds and culture that is, to me, full of promise, but it may not pan out. I don’t worry about that, actually. I’m content to explore this, and if it turns out that I’m just wrong, I’ll say, “Oh, okay. I was wrong. It was fun thinking about it,” but I think I might be right.

I’m not myself equipped to work on a lot of the science; other people could work on it, and they already are in a way. The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at the Society for Neuroscience meeting in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let’s push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.

Mike Merzenich sutured a monkey’s fingers together so that it didn’t need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions. If you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

The way the brain spontaneously reorganizes itself in response to trauma of this sort, or just novel experience, is itself one of the most amazing features of the brain, and if you don’t have an architecture that can explain how that could happen and why that is, your model has a major defect. I think you really have to think in terms of individual neurons as micro-agents, and ask what’s in it for them?

Why should these neurons be so eager to pitch in and do this other work just because they don’t have a job? Well, they’re out of work. They’re unemployed, and if you’re unemployed, you’re not getting your neuromodulators. If you’re not getting your neuromodulators, your neuromodulator receptors are going to start disappearing, and pretty soon you’re going to be really out of work, and then you’re going to die.

In this regard, I think of John Holland’s work on the emergence of order. His example is New York City. You can always find a place where you can get gefilte fish, or sushi, or saddles or just about anything under the sun you want, and you don’t have to worry about a state bureaucracy that is making sure that supplies get through. No. The market takes care of it. The individual web of entrepreneurship and selfish agency provides a host of goods and services, and is an extremely sensitive instrument that responds to needs very quickly.

Until the lights go out. Well, we’re all at the mercy of the power man. I am quite concerned that we’re becoming hyper-fragile as a civilization, and we’re becoming so dependent on technologies that are not as reliable as they should be, that have so many conditions that have to be met for them to work, that we may specialize ourselves into some very serious jams. But in the meantime, thinking about the self-organizational powers of the brain as very much like the self-organizational powers of a city is not a bad idea. It just reeks of over-enthusiastic metaphor, though, and it’s worth reminding ourselves that this idea has been around since Plato.

Plato analogizes the mind of a human being to the state. You’ve got the rulers and the guardians and the workers. This idea that a person is made of lots of little people is comically simpleminded in some ways, but that doesn’t mean it isn’t, in a sense, true. We shouldn’t shrink from it just because it reminds us of simpleminded versions that have been long discredited. Maybe some not so simpleminded version is the truth.

There are a lot of cultural fleas

My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird’s eye view and establish what role, if any, is there for memes or something like memes and what are the other forces that are operating. We are going to have to have a proper scientific perspective on cultural change. The old-fashioned, historical narratives are wonderful, and they’re full of gripping detail, and they’re even sometimes right, but they only cover a small proportion of the phenomena. They only cover the tip of the iceberg.

Basically, the model that we have and have used for several thousand years is the model that culture consists of treasures, cultural treasures. Just like money, or like tools and houses, you bequeath them to your children, and you amass them, and you protect them, and because they’re valuable, you maintain them and repair them, and then you hand them on to the next generation and some societies are rich, and some societies are poor, but it’s all goods. I think that vision is true of only the tip of the iceberg.

Most of the regularities in culture are not treasures. It’s not all opera and science and fortifications and buildings and ships. It includes all kinds of bad habits and ugly patterns and stupid things that don’t really matter but that somehow have got a grip on a society and that are part of the ecology of the human species in the same way that mud, dirt and grime and fleas are part of the world that we live in. They’re not our treasures. We may give our fleas to our children, but we’re not trying to. It’s not a blessing. It’s a curse, and I think there are a lot of cultural fleas. There are lots of things that we pass on without even noticing that we’re doing it and, of course, language is a prime case of this: very little deliberate intentional language instruction goes on or has to go on.

Kids that are raised with parents pointing out individual objects and saying, “See, it’s a ball. It’s red. Look, Johnny, it’s a red ball, and this is a cow, and look at the horsy” learn to speak, but so do kids who don’t have that patient instruction. You don’t have to do that. Your kids are going to learn ball and red and horsy and cow just fine without that, even if they’re quite severely neglected. That’s not a nice observation to make, but it’s true. It’s almost impossible not to learn language if you don’t have some sort of serious pathology in your brain.

Compare that with chimpanzees. There are hundreds of chimpanzees who have spent their whole lives in human captivity. They’ve been institutionalized. They’ve been like prisoners, and in the course of the day they hear probably about as many words as a child does. They never show any interest. They never apparently get curious about what those sounds are for. They can hear all the speech, but it’s like the rustling of the leaves. It just doesn’t register on them as worth attention.

But kids are tuned for that, and it might be a very subtle tuning. I can imagine a few small genetic switches, which, if they were just in a slightly different position, would make chimpanzees just as pantingly eager to listen to language as human babies are, but they’re not, and what a difference it makes in their world! They never get to share discoveries the way we do and to share our learning. That, I think, is the single feature about human beings that distinguishes us most clearly from all others: we don’t have to reinvent the wheel. Our kids get the benefit of not just what grandpa and grandma and great grandpa and great grandma knew. They get the benefit of basically what everybody in the world knew in the years when they go to school. They don’t have to invent calculus or long division or maps or the wheel or fire. They get all that for free. It just comes as part of the environment. They get incredible treasures, cognitive treasures, just by growing up. (…)

A lot of naïve thinking by scientists about free will

“Moving Naturalism Forward” was a nice workshop that Sean Carroll put together out in Stockbridge a couple of weeks ago, and it was really interesting. I learned a lot. I learned more about how hard it is to do some of these things and that’s always useful knowledge, especially for a philosopher.

If we take seriously, as I think we should, the role that Socrates proposed for us as midwives of thinking, then we want to know what the blockades are, what the imagination blockades are, what people have a hard time thinking about, and among the things that struck me about the Stockbridge conference were the signs of people really having a struggle to take seriously some ideas which I think they should take seriously. (…)

I realized I really have my work cut out for me in a way that I had hoped not to discover. There’s still a lot of naïve thinking by scientists about free will. I’ve been talking about it quite a lot, and I do my best to undo some bad thinking by various scientists. I’ve had some modest success, but there’s a lot more that has to be done on that front. I think it’s very attractive to scientists to think that here’s this several-millennia-old philosophical idea, free will, and they can just hit it out of the ballpark, which I’m sure would be nice if it was true.

It’s just not true. I think they’re well intentioned. They’re trying to clarify, but they’re really missing a lot of important points. I want a naturalistic theory of human beings and free will and moral responsibility as much as anybody there, but I think you’ve got to think through the issues a lot better than they’ve done, and this, happily, shows that there’s some real work for philosophers.

Philosophers have done some real work that the scientists jolly well should know. Here’s an area where it was one of the few times in my career when I wanted to say to a bunch of scientists, “Look. You have some reading to do in philosophy before you hold forth on this. There really is some good reading to do on these topics, and you need to educate yourselves.”

A combination of arrogance and cravenness

The figures about American resistance to evolution are still depressing, and you finally have to realize that there’s something structural. It’s not that people are stupid, and I think it’s clear that people, everybody, me, you, we all have our authorities, our go-to people whose word we trust. If you have a question about the economic situation in Greece, for instance, you check it out with somebody whose opinion we think is worth taking seriously. We don’t try to work it out for ourselves. We find some expert that we trust, and right around the horn, whatever the issues are, we have our experts, and so, on matters of science, a lot of people have their pastors as their experts. This is their local expert.

I don’t blame them. I wish they were more careful about vetting their experts and making sure that they found good experts. They wouldn’t choose an investment advisor, I think, as thoughtlessly as they go along with their pastor. I blame the pastors, but where do they get their ideas? Well, they get them from the hierarchies of their churches. Where do they get their ideas? Up at the top, I figure there’s some people that really should be ashamed of themselves. They know better.

They’re lying, and when I get a chance, I try to ask them that. I say, “Doesn’t it bother you that your grandchildren are going to want to know why you thought you had to lie to everybody about evolution?” I mean, really. They’re lies. They’ve got to know that these are lies. They’re not that stupid, and I just would love them to worry about what their grandchildren and great grandchildren would say about how their ancestors were so craven and so arrogant. It’s a combination of arrogance and cravenness.

We now have to start working on that structure of experts and thinking, why does that persist? How can it be that so many influential, powerful, wealthy, in-the-public people can be so confidently wrong about evolutionary biology? How did that happen? Why does it happen? Why does it persist? It really is a bit of a puzzle if you think about how they’d be embarrassed not to know that the world is round. I think that would be deeply embarrassing to be that benighted, and they’d realize it. They’d be embarrassed not to know that HIV is the vector of AIDS. They’d be embarrassed to not understand the way the tides are produced by the gravitational forces of the moon and the sun. They may not know the details, but they know that the details are out there. They could learn them in 20 minutes if they wanted to. How did they get themselves in the position where they could so blithely trust people who they’d never buy stocks and bonds from? They’d never trust a child’s operation to a doctor that was as ignorant and as ideological as these people. It is really strange. I haven’t got to the bottom of that. (…)

This pernicious sort of lazy relativism

[T]here’s a sort of enforced hypocrisy where the pastors speak from the pulpit quite literally, and if you weren’t listening very carefully, you’d think: oh my gosh, this person really believes all this stuff. But they’re putting in just enough hints for the sophisticates in the congregation so that the sophisticates are supposed to understand: Oh, no. This is all just symbolic. This is all just metaphorical. And that’s the way they want it, but of course, they could never admit it. You couldn’t put a little neon sign up over the pulpit that says, “Just metaphor, folks, just metaphor.” It would destroy the whole thing.

You can’t admit that it’s just metaphor even when you insist when anybody asks that it’s just metaphor, and so this professional doubletalk persists, and if you study it for a while the way Linda and I have been doing, you come to realize that’s what it is, and that means they’ve lost track of what it means to tell the truth. Oh, there are so many different kinds of truth. Here’s where postmodernism comes back to haunt us. What a pernicious bit of intellectual vandalism that movement was! It gives license to this pernicious sort of lazy relativism.

One of the most chilling passages in that great book by William James, The Varieties of Religious Experience, is where he talks about soldiers in the military: “Far better is it for an army to be too savage, too cruel, too barbarous, than to possess too much sentimentality and human reasonableness.” This is, to me, a very sobering reflection. Let’s talk about when we went into Iraq. There was Rumsfeld saying, “Oh, we don’t need a big force. We don’t need a big force. We can do this on the cheap,” and there were other people, retrospectively we can say they were wiser, who said, “Look, if you’re going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force.”

The principle is actually one that’s pretty well understood. If you don’t want to have a riot, have four times more police there than you think you need. That’s the way not to have a riot and nobody gets hurt because people are not foolish enough to face those kinds of odds. But they don’t think about that with regard to religion, and it’s very sobering. I put it this way.

Suppose that we face some horrific, terrible enemy, another Hitler or something really, really bad, and here’s two different armies that we could use to defend ourselves. I’ll call them the Gold Army and the Silver Army; same numbers, same training, same weaponry. They’re all armored and armed as well as we can do. The difference is that the Gold Army has been convinced that God is on their side and this is the cause of righteousness, and it’s as simple as that. The Silver Army is entirely composed of economists. They’re all making side insurance bets and calculating the odds of everything.

Which army do you want on the front lines? It’s very hard to say you want the economists, but think of what that means. What you’re saying is we’ll just have to hoodwink all these young people into some false beliefs for their own protection and for ours. It’s extremely hypocritical. It is a message that I recoil from, the idea that we should indoctrinate our soldiers. In the same way that we inoculate them against diseases, we should inoculate them against the economists’—or philosophers’—sort of thinking, since it might lead them to think: am I so sure this cause is just? Am I really prepared to risk my life to protect it? Do I have enough faith in my commanders that they’re doing the right thing? What if I’m clever enough and thoughtful enough to figure out a better battle plan, and I realize that this is futile? Am I still going to throw myself into the trenches? It’s a dilemma that I don’t know what to do about, although I think we should confront it at least.”

— Daniel C. Dennett, University Professor, Professor of Philosophy, and Co-Director of the Center for Cognitive Studies at Tufts University, in The normal well-tempered mind, Edge, Jan 8, 2013.

'The Intentional Stance'

"Dennett favours the theory (first suggested by Richard Dawkins) that our social learning has given us a second information highway (in addition to the genetic highway) where the transmission of variant cultural information (memes) takes place via differential replication. Software viruses, for example, can be understood as memes, and as memes evolve in complexity, so does human cognition: “The mind is the effect, not the cause.” (…)

Daniel Dennett: "Natural selection is not gene centrist and nor is biology all about genes, our comprehending minds are a result of our fast evolving culture. Words are memes that can be spoken and words are the best example of memes. Words have a genealogy and it’s easier to trace the evolution of a single word than the evolution of a language." (…)

“I don’t like theory of mind. I coined the phrase The Intentional Stance. [Dennett’s Intentional Stance encompasses attributing feelings, memories and beliefs to others as well as mindreading and predicting what someone will do next.] Do you need a theory to ride a bike? (…)

Riding a bike is a craft – you don’t need a theory. Autistic people might need a theory with which to understand other minds, but the rest of us don’t. If a human is raised without social interaction and without language they would be hugely disabled and probably lacking in empathy.”

Daniel C. Dennett, Daniel Dennett: ‘I don’t like theory of mind’ – interview, The Guardian, 22 March 2013.

See also:

Steven Pinker on the mind as a system of ‘organs of computation’, Lapidarium notes
Quantum minds: Why we think like quarks - ‘To be human is to be quantum’, Lapidarium notes
Human Connectome Project: understanding how different parts of the brain communicate with each other
How Free Is Your Will?, Lapidarium notes
Susan Blackmore on memes and “temes”
Mind & Brain tag on Lapidarium notes

Jul 24th, Tue

Dirk Helbing on A New Kind Of Socio-inspired Technology

The big unexplored continent in science is actually social science, so we really need a much better understanding of the principles that make our society and socially interactive systems work well. Our future information society will be characterized by computers that behave like humans in many respects. Ten years from now, we will have computers as powerful as our brain, and that will really fundamentally change society. Many professional jobs will be done much better by computers. How will that change society? How will that change business? What impacts does that have for science, actually?

There are two big global trends. One is big data. That means in the next ten years we’ll produce as much data, or even more data, than in the past 1,000 years. The other trend is hyperconnectivity. That means the networking of our world is going on at a rapid pace; we’re creating an Internet of things. So everyone is talking to everyone else, and everything becomes interdependent. What are the implications of that? (…)

But on the other hand, it turns out that we are, at the same time, creating highways for disaster spreading. We see many extreme events, we see problems such as the flash crash, or also the financial crisis. That is related to the fact that we have interconnected everything. In some sense, we have created unstable systems. We can show that many of the global trends that we are seeing at the moment, like increasing connectivity, increase in the speed, increase in complexity, are very good in the beginning, but (and this is kind of surprising) there is a turning point and that turning point can turn into a tipping point that makes the systems shift in an unknown way.

It requires two things to understand our systems, which is social science and complexity science; social science because computers of tomorrow are basically creating artificial social systems. Just take financial trading today, it’s done by the most powerful computers. These computers are creating a view of the environment; in this case the financial world. They’re making projections into the future. They’re communicating with each other. They have really many features of humans. And that basically establishes an artificial society, which means also we may have all the problems that we are facing in society if we don’t design these systems well. The flash crash is just one of those examples that shows that, if many of those components — the computers in this case — interact with each other, then some surprising effects can happen. And in that case, $600 billion actually evaporated within 20 minutes.

Of course, the markets recovered, but in some sense, as many solid stocks turned into penny stocks within minutes, it also changed the ownership structure of companies within just a few minutes. That is really a completely new dimension happening when we are building on these fully automated systems, and those social systems can show a breakdown of coordination, tragedies of the commons, crime or cyber war, all these kinds of things will happen if we don’t design them right.

We really need to understand those systems, not just their components. It’s not good enough to have wonderful gadgets like smartphones and computers; each of them working fine in separation. Their interaction is creating a completely new world, and it is very important to recognize that it’s not just a gradual change of our world; there is a sudden transition in the behavior of those systems, as the coupling strength exceeds a certain threshold.

Traffic flow in a circle

I’d like to demonstrate that for a system that you can easily imagine: traffic flow in a circle. Now, if the density is high enough, then the following will happen: after some time, although every driver is trying hard to go at a reasonable speed, cars will be stopped by a so-called ‘phantom traffic jam.’ That means smooth traffic flow will break down, no matter how hard the drivers will try to maintain speed. The question is, why is this happening? If you would ask drivers, they would say, “hey, there was a stupid driver in front of me who didn’t know how to drive!” Everybody would say that. But it turns out it’s a systemic instability that is creating this problem.

That means a small variation in the speed is amplified over time, and the next driver has to brake a little bit harder in order to compensate for a delayed reaction. That creates a chain reaction among drivers, which finally stops traffic flow. These kinds of cascading effects are all over the place in the network systems that we have created, like power grids, for example, or our financial markets. It’s not always as harmless as in traffic jams. We’re just losing time in traffic jams, so people could say, okay, it’s not a very serious problem. But if you think about crowds, for example, we have this transition towards a large density of the crowd, then what will happen is a crowd disaster. That means people will die, although nobody wants to harm anybody else. Things will just go out of control. Even though there might be hundreds or thousands of policemen or security forces trying to prevent these things from happening.
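The instability described here is easy to reproduce in a few lines of simulation. The sketch below uses Bando’s optimal-velocity car-following model (our choice of model for illustration; Helbing names none here): forty identical cars on a ring road, all starting at the equilibrium speed, with a single car displaced by half a metre. That tiny perturbation is amplified, exactly as described, into a stop-and-go wave.

```python
import math

# Phantom traffic jam on a ring road: identical cars, one tiny perturbation.
# Each driver relaxes toward an "optimal velocity" that depends on the gap
# to the car ahead (Bando's model; the parameters are illustrative).

N, L = 40, 1000.0              # 40 cars on a 1000 m ring -> 25 m headway
dt, steps = 0.1, 6000          # 600 simulated seconds
a = 1.0                        # drivers' adaptation rate (1/s)

def v_opt(gap):                # desired speed (m/s) for a given headway (m)
    return 16.8 * (math.tanh(0.086 * (gap - 25.0)) + 0.913)

x = [i * L / N for i in range(N)]   # equally spaced positions
v = [v_opt(L / N)] * N              # everyone at the equilibrium speed
x[0] += 0.5                         # one driver half a metre off

for _ in range(steps):
    gap = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
    acc = [a * (v_opt(g) - vi) for g, vi in zip(gap, v)]
    v = [max(0.0, vi + ai * dt) for vi, ai in zip(v, acc)]
    x = [(xi + vi * dt) % L for xi, vi in zip(x, v)]

# With these parameters the uniform flow is linearly unstable, so the
# speeds spread out: some cars crawl while others cruise.
print(f"speed range after 10 min: {min(v):.1f} to {max(v):.1f} m/s")
```

No driver in this loop behaves badly; the jam emerges from delayed compensation alone, which is exactly the systemic instability the text describes.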

This is really a surprising behavior of these kinds of strongly-networked systems. The question is, what implication does that have for other network systems that we have created, such as the financial system? There is evidence that the fact that now every bank is interconnected with every other bank has destabilized the system. That means that there is a systemic instability in place that makes it so hard to control, or even impossible to control. We see that the big players, and also regulators, have great difficulty getting control of these systems.

That tells us that we need to change our perspective regarding these systems. Those complex systems are no longer characterized by the properties of their components, but by the outcome of the interactions between those components. As a result of those interactions, self-organization is going on in these systems. New emergent properties come up. They can be very surprising, actually, and that means we cannot understand those systems anymore based on what we see, which is the components.

We need to have new instruments and tools to understand these kinds of systems. Our intuition will not work here. And that is what we want to create: we want to come up with a new information platform for everybody that is bringing together big data with exa-scale computing, with people, and with crowd sourcing, basically connecting the intelligence of the brains of the world.

One component that is going to measure the state of the world is called the Planetary Nervous System. That will measure not just the physical state of the world and the environmental situation, but it is also very important actually that we learn how to measure social capital, such as trust and solidarity and punctuality and these kinds of things, because this is actually very important for economic value generation, but also for social well-being.

Properties such as social capital and trust result from social network interactions. We’ve seen that one of the biggest problems of the financial crisis was this evaporation of trust. It has burned tens of trillions of dollars. If we learn how to stabilize trust, or build trust, that would be worth a lot of money, really. Today, however, we’re not considering the value of social capital. It can happen that we destroy it or exploit it, just as we’ve exploited and destroyed our environment. If we learn how much social capital is worth, we will start to protect it. We will also take it into account in our insurance policies. Because today, no insurance is taking into account the value of social capital. It’s the material damage that we take into account, but not the social capital. That means, in some sense, we’re underinsured. We’re taking bigger risks than we should.

This is something that we want to learn, how to quantify the fundaments of society, to quantify the social footprint. It means to quantify the implications of our decisions and actions.

The second component, the Living Earth Simulator will be very important here, because that will look at what-if scenarios. It will take those big data generated by the Planetary Nervous System and allow us to look at different scenarios, to explore the various options that we have, and the potential side effects or cascading effects, and unexpected behaviors, because those interdependencies make our global systems really hard to understand. In many cases, we just overlook what would happen if we fix a problem over here: It might have unwanted side effects; in many cases, that is happening in other parts of our world.

We are using supercomputers today in all areas of our development. If we are developing a car, a plane, medical tracks and so on, supercomputers are being used, also in the financial world. But we don’t have a kind of political or business flight simulator that helps us to explore different opportunities. I think this is what we can create as our understanding of society progresses. We now have much better ideas of how social coordination comes about, what are the preconditions for cooperation, and what are the conditions that create conflict, or crime, or war, or epidemic spreading, in the good and the bad sense.

We’re using, of course, viral marketing today in order to increase the success of our products. But at the same time, also we are suffering from a quick spreading of emerging diseases, or of computer viruses, and Trojan horses, and so on. We need to understand these kinds of phenomena, and with the data and the computer power that is coming up, it becomes within reach to actually get a much better picture of these things.

The third component will be the Global Participatory Platform. That basically makes those other tools available for everybody: for business leaders, for political decision-makers, and for citizens. We want to create an open data and modeling platform that creates a new information ecosystem that allows you to create new businesses, to come up with large-scale cooperation much more easily, and to lower the barriers for social, political and economic participation.

So these are the three big elements. We’ll furthermore  build exploratories of society, of the economy and environment and technology, in order to be able to anticipate possible crises, but also to see opportunities that are coming up. Those exploratories will bring these three elements together. That means the measurement component, the computer simulation component, and the participation, the interactiveness.

In some sense, we’re going to create virtual worlds that may look like our real world, copies of our world that allow us to explore policies in advance or certain kinds of planning in advance. Just to make it a little bit more concrete, we could, for example, check out a new airport or a new city quarter before it’s being built. Today we have these architectural plans, and competitions, and then the most beautiful design will win. But then, in practice, it can happen that it doesn’t work so well. People have to stand in line in queues, or are obstructing each other. Many things may not work out as the architect imagined.

What if we populated basically these architectural plans with real people? They could check it out, live there for some months and see how much they like it. Maybe even change the design. That means, the people that would use these facilities and would live in these new quarters of the city could actually participate in the design of the city. In the same sense, you can scale that up. Just imagine Google Earth or Google Street View filled with people, and have something like a serious kind of Second Life. Then we could have not just one history; we can check out many possible futures by actually trying out different financial architectures, or different decision rules, or different intellectual property rights and see what happens.                 

We could have even different virtual planets, with different laws and different cultures and different kinds of societies. And you could choose the planet that you like most. So in some sense, now a new age is opening up with almost unlimited resources. We’re, of course, still living in a material world, in which we have a lot of restrictions, because resources are limited. They’re scarce and there’s a lot of competition for these scarce resources. But information can be multiplied as much as you like. Of course, there is some cost, and also some energy needed for that, but it’s relatively low cost, actually. So we can create really almost infinite new possibilities for creativity, for productivity, for interaction. And it is extremely interesting that we have a completely new world coming up here, absolutely new opportunities that need to be checked out.

But now the question is: how will it all work? Or how would you make it work? Because the information systems that we have created are even more complex than our financial system, and we know the financial system is extremely difficult to regulate and to control. How would you want to control an information system of this complexity? I think that cannot be done top-down. We are now seeing a trend that complex systems are run in a more and more decentralized way. We’re learning somehow to use self-organization principles in order to run these kinds of systems. We have seen that in the Internet, and we are seeing it for smart grids, but also for traffic control.

I have been working myself on these new ways of self-control. It’s very interesting. Classically one has tried to optimize traffic flow. That is so demanding that even our fastest supercomputers can’t do it in a strict sense, in real time, which means one needs to make simplifications. But in principle, what one is trying to do is to impose an optimal traffic light control top-down on the city. The supercomputer is supposed to know what is best for all the cars, and that is imposed on the system.

We have developed a different approach where we said: given that there is a large degree of variability in the system, the most important aspect is to have a flexible adaptation to the actual traffic conditions. We came up with a system where traffic flows control the traffic lights. It turns out this makes much better use of scarce resources, such as space and time. It works better for cars, it works better for public transport and for pedestrians and bikers, and it’s good for the environment as well.                 
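
[A minimal sketch of such flow-responsive control, assuming a simple “longest queue gets the green” rule with a minimum green time — an illustration of the general principle, not the controller Helbing’s group actually developed:]

    # Sketch: traffic flows control the traffic light, not a central plan.
    # Two approaches to one intersection; arrival rates and the switching
    # rule are assumptions made for the demonstration.
    import random
    random.seed(42)

    ARRIVAL = {"north-south": 0.40, "east-west": 0.25}  # cars/second (assumed)
    MIN_GREEN = 10            # seconds a phase must hold before it may switch

    queues = {d: 0 for d in ARRIVAL}
    green, time_in_phase, total_wait = "north-south", 0, 0

    for t in range(3600):                    # one simulated hour, 1 s steps
        for d, rate in ARRIVAL.items():      # stochastic arrivals
            if random.random() < rate:
                queues[d] += 1
        if queues[green]:                    # green approach discharges 1 car/s
            queues[green] -= 1
        time_in_phase += 1
        total_wait += sum(queues.values())   # each queued car accrues 1 s delay
        # Local rule: after the minimum green, hand the green to whichever
        # approach currently exerts more "pressure" (here: a longer queue).
        if time_in_phase >= MIN_GREEN:
            other = next(d for d in queues if d != green)
            if queues[other] > queues[green]:
                green, time_in_phase = other, 0

    print(f"mean number of waiting cars over the hour: {total_wait / 3600:.1f}")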

The age of social innovation

There’s a new kind of socio-inspired technology coming up now. Society has many wonderful self-organization mechanisms that we can learn from, such as trust, reputation and culture. If we can learn how to implement them in our technological systems, that is worth a lot of money; billions of dollars, actually. We think this is the next step after bio-inspired technology.

The next big step is to focus on society. We’ve had an age of physics; we’re now in an age of biology. I think we are entering the age of social innovation as we learn to make sense of this even bigger complexity of society. It’s like a new continent to discover. It’s really fascinating what now becomes understandable with the availability of Big Data about human activity patterns, and it will open a door to a new future.

What will be very important in order to make sense of the complexity of our information society is to overcome the disciplinary silos of science; to think out of the box. Classically we had social sciences, we had economics, we had physics and biology and ecology, and computer science and so on. Now, our project is trying to bring those different fields together, because we’re deeply convinced that without this integration of different scientific perspectives, we can no longer make sense of these hyper-connected systems that we have created.

For example, computer science requires complexity science and social science to understand the systems that have been created and will be created. Why is this? Because the dense networking and the complex interaction between the components create self-organization and emergent phenomena in those systems. The flash crash is just one example showing that unexpected things can happen. We know that from many systems.

Complexity theory is very important here, but also social science. And why is that? Because the components of these information and communication systems are becoming more and more human-like. They’re communicating with each other, forming a picture of the outside world, projecting expectations into the future, and taking autonomous decisions. That means that when those computers interact with each other, they create an artificial social system in some sense.

In the same way, social science will need complexity science and computer science. Social science needs the data that computer science and information and communication technology can provide. Now, and even more in the future, those data traces of human activities will eventually allow us to detect patterns and something like laws of human behavior. Only through collaboration with computer science will it be possible to get those data and to make sense of what is actually happening in society. I don’t need to mention that there are obviously complex dynamics going on in society; that means complexity science is needed for social science as well.

In the same sense, we could say complexity science needs social science and computer science to become practical; to go a step beyond talking about butterfly effects, chaos and turbulence, and to make sure that the thinking of complexity science will pervade the natural, engineering and social sciences and allow us to understand the real problems of our world. That is the essence: we need to bring these different scientific fields together. We have actually succeeded in building up these integrated communities in many countries all over the world, ready to go as soon as money becomes available for that.

Big Data is not a solution per se. Even the most powerful machine-learning algorithm will not be sufficient to make sense of our world, to understand the principles according to which our world is working. This is important to recognize. The great challenge is to marry data with theories, with models. Only then will we be able to make sense of the useful bits of data. It’s like finding a needle in a haystack: the more data you have, the more difficult it may be, to a certain extent, to find this needle. And there is the danger of over-fitting, of being distracted from the important details. We are certainly already in an age where we’re flooded with information, and our attention cannot actually process all of it. That means there is a danger that this undermines our wisdom, if our attention is attracted by the wrong details of information. So we are confronted with the problem of finding the right institutions, tools and instruments for decision-making.
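
[A standard toy demonstration of the over-fitting danger mentioned here, on synthetic data invented for the illustration: a very flexible model matches every noisy training point yet generalizes worse than a modest one.]

    # Over-fitting in miniature: fit polynomials of degree 3 and 9 to ten
    # noisy samples of a sine wave and compare training vs. test error.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)  # signal+noise
    x_test = np.linspace(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (3, 9):                    # modest vs. over-flexible model
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

[The degree-9 fit drives the training error to nearly zero by chasing the noise — the “wrong details” — while the simpler model captures the underlying regularity better.]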

The Living Earth Simulator will basically take the data that is gathered on the Internet, by search requests, and created by sensor networks, and feed it into big computer simulations that are based on models of social, economic and technological behavior. In this way, we’ll be able to look at what-if scenarios. We hope to get a better understanding, for example, of financial systems, and some answers to controversial questions such as: how much leverage effect is good? Under what conditions is ‘naked short-selling’ beneficial? When does it destabilize markets? To what extent is high-frequency trading good, or can it also have side effects? All these kinds of questions are difficult to answer. Or how to deal best with the situation in Europe, where we obviously have trouble in Greece, but also contagion effects on other countries and on the rest of the financial system. It would be very good to have the models and the data that actually allow us to simulate these kinds of scenarios and to make better-informed decisions. (…)
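
[To make the what-if idea concrete, a deliberately crude sketch — invented dynamics and parameters, nothing like the simulator’s actual models — replays the same market under two leverage caps and counts trader failures:]

    # Toy what-if scenario: how does a cap on leverage change outcomes?
    # 50 traders take positions scaled by their equity times the allowed
    # leverage; a wiped-out trader triggers a small fire-sale price drop.
    import random

    def run(max_leverage, seed=1):
        random.seed(seed)
        price, failures = 100.0, 0
        equity = [10.0] * 50
        for day in range(250):                        # one trading year
            price *= 1 + random.gauss(0, 0.02)        # exogenous daily shock
            for i, e in enumerate(equity):
                if e <= 0:
                    continue                          # already out of the game
                position = e * max_leverage           # exposure scales with cap
                equity[i] = e + position * random.gauss(0, 0.01)
                if equity[i] <= 0:                    # wiped out: forced sale
                    failures += 1
                    price *= 0.995                    # fire sale dents the price
        return failures, price

    for lev in (2, 10):
        failures, price = run(lev)
        print(f"leverage cap {lev:2d}: {failures} traders wiped out, "
              f"final price {price:.1f}")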

The idea is to have an open platform to create a data and model commons that everybody can contribute to, so people could upload data and models, and others could use them. People would also judge the quality of the data and models and rate them according to their criteria, and point out the criteria according to which they are doing the rating. But in principle, everybody can contribute and everybody can use it. (…)
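
[A bare-bones sketch of such a commons, purely illustrative — the structure and names are invented: contributions are stored openly, and every rating must name the criterion it was judged by.]

    # Minimal data-and-model commons: open contribution, criterion-tagged rating.
    from collections import defaultdict

    commons = {}                                  # name -> contribution
    ratings = defaultdict(list)                   # name -> [(criterion, score)]

    def contribute(name, kind, payload, author):
        commons[name] = {"kind": kind, "payload": payload, "author": author}

    def rate(name, criterion, score):
        assert name in commons and 0 <= score <= 5
        ratings[name].append((criterion, score))  # the criterion is explicit

    def summary(name, criterion):
        scores = [s for c, s in ratings[name] if c == criterion]
        return sum(scores) / len(scores) if scores else None

    contribute("traffic-model-v1", "model", "…", "alice")
    rate("traffic-model-v1", "predictive accuracy", 4)
    rate("traffic-model-v1", "documentation", 2)
    print(summary("traffic-model-v1", "predictive accuracy"))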

We also have much better theories that allow us to make sense of those data. We’re entering an age where we can understand society and the economy much better, namely as complex self-organizing systems.

It will be important to guide us into the future, because we are creating very powerful systems. Information society will transform our society fundamentally, and we shouldn’t just let it happen. We want to understand how that will change our society and what the different paths are that our society may take, and decide on the one that we want it to take. For that, we need to have a much better understanding.

Now a lot of social activity data are becoming available through Facebook and Twitter and Google search requests and so on. This is, of course, a huge opportunity for business. Businesses are talking about data as the new oil, about personal data as a new asset class; there’s something like a gold rush going on. This also, of course, holds huge opportunities for science: eventually we may be able to make sense of complex systems such as our society. There are different perspectives on this. They range from some people who think that information and communication technologies will eventually create a God’s-eye view — systems that make sense of all human activities and the interactions of people — while others are afraid of a Big Brother emerging.

The question is how to handle that situation. Some people say we don’t need privacy in society any more; society is undergoing a transformation, and privacy is no longer needed. As a social scientist, I don’t actually share this point of view, because public and private are two sides of the same coin: one cannot exist without the other. It is very important for a society to work to have social diversity. Today we have learned to appreciate biodiversity, and in the same way we need to think about social diversity, because it’s a motor of innovation and an important factor for societal resilience. The question now is how all those data that we are creating, and also recommender systems and personalized services, are going to impact people’s decision-making behavior, and society overall.

This is what we need to look at now. How is people’s behavior changing through these kinds of data? How do people change their behavior when they feel they’re being observed? Europe is quite sensitive about privacy. The project we are working on is actually trying to find a balance between the interests in Big Data of companies, governments and individuals. Basically we want to develop technologies that allow us to find this balance, to make sure that all three perspectives are actually taken into account: that you can do big business, but at the same time the individual’s privacy is respected; that individuals have more control over their own data, know what is happening with them, and have influence on what is happening with them. (…)

In some sense, we want to create a new data and model commons, a new kind of language, a new public good that allows people to do new things. (…)

My feeling is that actually business will be made on top of this sea of data that’s being created. At the moment data is kind of the valuable resource, right? But in the future, it will probably be a cheap resource, or even a free resource to a certain extent, if we learn how to deal with openness of data. The expensive thing will be what we do with the data. That means the algorithms, the models, and theories that allow us to make sense of the data.”

Dirk Helbing, physicist and professor of sociology, in particular of modelling and simulation, at ETH Zurich – Swiss Federal Institute of Technology, A New Kind Of Socio-inspired Technology, Edge Conversation, June 19, 2012. (Illustration: WSF)

See also:

☞ Dirk Helbing, New science and technology to understand and manage our complex world in a more sustainable and resilient way (pdf) (presentation), ETH Zurich
Why does nature so consistently organize itself into hierarchies?
Living Cells Show How to Fix the Financial System
Geoffrey West on Why Cities Keep Growing, Corporations and People Always Die, and Life Gets Faster
The Difference Between Online Knowledge and Truly Open Knowledge. In the era of the Internet facts are not bricks but networks
Networks tag on Lapidarium notes

Apr
25th
Wed
permalink

Waking Life animated film focuses on the nature of dreams, consciousness, and existentialism



Waking Life is an American animated film (rotoscoped from live-action footage), directed by Richard Linklater and released in 2001. The entire film was shot on digital video, after which a team of artists drew stylized lines and colors over each frame using computers.

The film focuses on the nature of dreams, consciousness, and existentialism. The title is a reference to philosopher George Santayana's maxim: “Sanity is a madness put to good uses; waking life is a dream controlled.”

Waking Life is about an unnamed young man in a persistent dream-like state that eventually progresses to lucidity. He initially observes and later participates in philosophical discussions of issues such as reality, free will, the relationship of the subject with others, and the meaning of life. Along the way the film touches on other topics including existentialism, situationist politics, posthumanity, the film theory of André Bazin, and lucid dreaming itself. By the end, the protagonist feels trapped by his perpetual dream, broken up only by unending false awakenings. His final conversation with a dream character reveals that reality may be only a single instant which the individual consciousness interprets falsely as time (and, thus, life) until a level of understanding is achieved that may allow the individual to break free from the illusion.

Ethan Hawke and Julie Delpy reprise their characters from Before Sunrise in one scene. (Wiki)

Eamonn Healy speaks about telescopic evolution and the future of humanity

We won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). (…) The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).

So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.

Ray Kurzweil, American author, scientist, inventor and futurist, The Law of Accelerating Returns, KurzweilAI, March 7, 2001.
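
One way to reproduce the 20,000-year figure from the stated assumption — the rate of progress doubling every decade, measured at today’s rate:

    # Arithmetic check of the quote above, assuming the rate of progress
    # doubles every decade. Decade k of the 21st century (k = 1..10) then
    # delivers 10 * 2**k years of progress at today's rate.
    progress = sum(10 * 2**k for k in range(1, 11))
    print(progress)  # 20460 -> "more like 20,000 years of progress"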

"If we’re looking at the highlights of human development, you have to look at the evolution of the organism and then at the development of its interaction with the environment. Evolution of the organism will begin with the evolution of life perceived through the hominid coming to the evolution of mankind. Neanderthal and Cro-Magnon man. Now, interestingly, what you’re looking at here are three strings: biological, anthropological — development of the cities — and cultural, which is human expression.

Now, what you’ve seen here is the evolution of populations, not so much the evolution of individuals. And in addition, if you look at the time scales that are involved here — two billion years for life, six million years for the hominid, 100,000 years for mankind as we know it — you’re beginning to see the telescoping nature of the evolutionary paradigm. And then when you get to the agricultural, the scientific and the industrial revolutions, you’re looking at 10,000 years, 400 years, 150 years. You’re seeing a further telescoping of this evolutionary time. What that means is that as we go through the new evolution, it’s gonna telescope to the point we should be able to see it manifest itself within our lifetime, within this generation.

The new evolution stems from information, and it stems from two types of information: digital and analog. The digital is artificial intelligence. The analog results from molecular biology, the cloning of the organism. And you knit the two together with neurobiology. Before on the old evolutionary paradigm, one would die and the other would grow and dominate. But under the new paradigm, they would exist as a mutually supportive, noncompetitive grouping. Okay, independent from the external.

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo in a way that could be imagined as an enormous instantaneous fulfillment of human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient, okay? And its manifestations are those social adaptations. We’re talking about parasitism, dominance, morality, okay? Uh, war, predation, these would be subject to de-emphasis. These will be subject to de-evolution. The new evolutionary paradigm will give us the human traits of truth, of loyalty, of justice, of freedom. These will be the manifestations of the new evolution. And that is what we would hope to see from this. That would be nice.”

Eamonn Healy, professor of chemistry at St. Edward’s University in Austin, Texas, where his research focuses on the design of structure-activity probes to elucidate enzymatic activity. He appears in Richard Linklater's 2001 film Waking Life discussing concepts similar to a technological singularity and explaining “telescopic evolution.” Eamonn Healy speaks about telescopic evolution and the future of humanity, transcript by Brandon Sergent.

See also:

Jason Silva on singularity, synthetic biology and a desire to transcend human boundaries

Oct
24th
Mon
permalink

Kevin Kelly on information, evolution and technology: ‘The essence of life is not energy but ideas’


"Technology’s dominance ultimately stems not from its birth in human minds but from its origin in the same self-organization that brought galaxies, planets, life, and minds into existence. It is part of a great asymmetrical arc that begins at the big bang and extends into ever more abstract and immaterial forms over time. The arc is the slow yet irreversible liberation from the ancient imperative of matter and energy.”

Kevin Kelly, What Technology Wants, New York: Viking, The Penguin Group, 2010

"The best way to understand the manufactured world is not to see it as a work of human imagination only, but to see it as an extension of the biological world. Most of us walk around with a strict mental dichotomy between the natural world of genes and the artificial world of concrete and code. When we actually look at how evolution works, the distinction begins to break down. The defining force behind life is not energy but information. Evolution is a process of information transmission, and so is technology, which is why it too reflects a biological transcendence.

Q: You have described technology as the “seventh kingdom of life” – which is a very ontological description – and as “the accumulation of ideas” – which is an epistemological description. Are the two converging?

Kelly: I take a very computational view of life and evolution. If you look at the origins of life and the forces of evolution, they are very intangible. Life is built on bits, on ideas, on information, on immaterial things. The technology sphere we have made – which is what I call the Technium – consists of information as well. We can take a number of atoms and arrange them in such a way as to maximize their usefulness – for example by creating a cell phone. When we think about who we are, we are always talking about information, about knowledge, about processes that increase the complexity of things. (…)

I am a critic of those who say that the internet has become a sentient and living being. But while the internet is not conscious like an organism, it exhibits some lifelike qualities. Life is not a binary thing that is either there or not there. It is a continuum between semi-living things like viruses and very living things like us. What we are seeing right now is an increased “lifeness” in technology as we move across the continuum. As things become more complex, they become more lifelike. (…)

One of the problems for biologists right now is to distinguish between random and organized processes. If we want to think coherently about the relationship between biology and technology, we need good working definitions to outline the edges of the spectrum of life that we are investigating. One of the ways to do that is to create artificial life and then debate whether we have crossed a threshold. I think we are beginning to see actual evolution in technology because the similarities to natural evolution are so large that it has become hard to ignore them. (…)

I think that the essence of life is natural and subject to investigation by reason. Quantum physics is science, but it is so far removed from our normal experience that the investigation becomes increasingly difficult. Not everyone might understand it, but collectively we can. One of the reasons we want to build artificial intelligence is to supplement our human intelligence, because we may require other kinds of thinking to understand these mysteries. Technology is a way to manufacture types of thinking that don’t yet exist. (…)

Innovation always has unintended consequences. Every new invention creates new solutions, but it also creates almost as many new problems. I tend to think that technology is not really powerful unless it can be powerfully abused. The internet is a great example of that: It will be abused, there will be very significant negative consequences. Even the expansion of choices itself has unintended consequences. Barry Schwartz calls it the “paradox of choice”: Humans have evolved with a limited capacity for making decisions. We can be paralyzed by choice! (…)

Most of the problems today have been generated by technology, and most future problems will be generated by technology as well. I am so technocentric that I say: The solution to technological problems is more technology. Here’s a tangible example: If I throw around some really bad ideas in this interview, you won’t counsel me to stop thinking. You will encourage me to think more and come up with better ideas. Technology is a way of thinking. The proper response to bad technology is not less, but more and better technology. (…)

I always think of technology as a child: You have to work with it, you have to find the right role and keep it away from bad influences. If you tell your child, “I will disown you if you become a lawyer”, that will almost guarantee that they become a lawyer. Every technology can be weaponized. But the way to stop that is not prohibition but an embrace of that technology to steer its future development. (…)

I am not a utopian who believes that technology will solve our problems. I am a protopian, I believe in gradual progress. And I am convinced that much of that progress is happening outside of our control. In nature, new species fill niches that can be occupied and inhabited. And sometimes, these niches are created by previous developments. We are not really in control of those processes. The same is true for innovation: There is an innate bias in the Technium that makes certain processes inevitable. (…)

I use the term the same way you would describe adolescence as the inevitable step between childhood and adulthood. We are destined by the physics and chemistry of matter. If we looked at a hundred planets in the universe that were inhabited by intelligent life, I bet that we would eventually see something like the internet on almost all of them. But can we find exceptions? Probably. (…)

Q: Is innovation a process that can continue indefinitely? Or does the infinite possibility space eventually run against the constraints of a world with finite resources and finite energy?

Kelly: I don’t believe in omega points. One of the remarkable things about life is that evolution does not stop. It always finds new paths forwards and new niches to occupy. As I said before, the essence of life is not energy but ideas. If there are limits to how many ideas can exist within a brain or within a system, we are still very far away from those limits. (…)

Long before we reach a saturation point, we will evolve into something else. We invented our humanity, and we can reinvent ourselves with genetic engineering or other innovations. We might even fork into a species that embraces speedy development and a species that wants no genetic engineering.

Q: You are advocating a very proactive approach to issues like genetic enhancements and human-technological forms of symbiosis, yet you also stress the great potential for abuse, for ethical problems and for unintended consequences.

Kelly: Yes, we are steamrolling ahead. The net gain will slightly outweigh the negative aspects. That is all we need: A slightly greater range of choices and opportunities every year equals progress. (…)

For the past ten thousand years, technological progress has on average enabled our opportunities to expand. The easiest way to demonstrate the positive arc of progress is to look at the number of people today who would want to live in an earlier time. Any of us could sell all material possessions within days and live like a caveman. I have written on the Amish people, and I have lived with native tribes, so I understand the attractions of that lifestyle. It’s a very supportive and grounded reality. But the cost of that experience is the surrender of all the other choices and opportunities we now enjoy. (…)

My point about technology is that every person has a different set of talents and abilities. The purpose of technology is to provide us with tools to maximize our talents and explore our opportunities. The challenge is to make use of the tools that fit us. Your technology can be different from my technology because our talents and interests are different. If you look at the collective, you might think that we are all becoming more alike. But when you go down to the individual level, technology has the potential to really bring out the differences that make us special. Innovation enables individualization. (…)

Q: Is the internet increasing our imaginative or innovative potential?

Kelly: That is a good point. A lot of these impossibilities happen within collective or globalist structures. We can do things that were completely impossible during the industrial age because we can now transcend our individual experience. (…)

Q: The industrial age made large-scale production possible, now we see large-scale collaboration. What is the next step?

Kelly: I love that question. What is the next stage? I think we are decades or centuries away from a global intelligence, but that would be another phase of human development. If you could generate thoughts on a planetary scale, if we moved towards singularity, that would be huge.

Q: The speed of change leaves room for optimism.

Kelly: My optimism is off the chart. I got it from Asia, where I saw how quickly civilizations could move from abject poverty to incredible wealth. If they can do it, almost anything is possible. Let me go back to the original quote about seeing God in a cell phone: The reason we should be optimistic is life itself. It keeps bouncing back even when we do horrible things to it. Life is brimming with possibilities, details, intelligence, marvels, ingenuity. And the Technium is very much an extension of that possibility space.”

Kevin Kelly, writer, photographer, conservationist, the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Catalog, "My Optimism Is Off The Chart", The European Magazine, 20.09.2011 (Illustration: Seashells from Okinawa by Thomas Schmall)

See also:

Kevin Kelly on Technology, or the Evolution of Evolution
Kevin Kelly on Why the Impossible Happens More Often
Kevin Kelly on the Satisfaction Paradox
Technology tag on Lapidarium
Technology tag on Lapidarium notes

Sep
7th
Wed
permalink

Universal Semantic Communication. Is it possible for two intelligent beings to communicate meaningfully, without any common language or background?


"This question has interest on its own, but is especially relevant in the context of modern computational infrastructures where an increase in the diversity of computers is making the task of inter-computer interaction increasingly burdensome. Computers spend a substantial amount of time updating their software to increase their knowledge of other computing devices. In turn, for any pair of communicating devices, one has to design software that enables the two to talk to each other. Is it possible instead to let the two computing entities use their intelligence (universality as computers) to learn each others’ behavior and attain a common understanding? What is “common understanding?” We explore this question in this paper.

To formalize this problem, we suggest that one should study the “goal of communication:” why are the two entities interacting with each other, and what do they hope to gain by it? We propose that by considering this question explicitly, one can make progress on the question of universal communication.

We start by considering a computational setting for the problem where the goal of one of the interacting players is to gain some computational wisdom from the other player. We show that if the second player is “sufficiently” helpful and powerful, then the first player can gain significant computational power (deciding PSPACE-complete languages).

Our work highlights some of the definitional issues underlying the task of formalizing universal communication, but also suggests some interesting phenomena and highlights potential tools that may be used for such communication. (…)

Consider the following scenario: Alice, an extraterrestrial, decides to initiate contact with a terrestrial named Bob by means of a radio wave transmission. How should he respond to her? Will he ever be able to understand her message? In this paper we explore such scenarios by framing the underlying questions computationally.

We believe that the above questions have intrinsic interest, as they raise some further fundamental questions. How does one formalize the concept of understanding? Does communication between intelligent beings require a “hardwired” common sense of meaning or language? Or, can intelligence substitute for such requirements? What role, if any, does computational complexity play in all this? (…)

Marvin Minsky suggested that communication should be possible from a philosophical standpoint, but did not provide any formal definitions or constructions.

LINCOS [an abbreviation of the Latin phrase lingua cosmica]: The most notable and extensive prior approach to this problem is due to Hans Freudenthal, who claims that it is possible to code messages describing mathematics, physics, or even simple stories in such a radio transmission which can be understood by any sufficiently humanlike recipient. Ideally, we would like to have such a rich language at our disposal; it should be clear that the “catch” lies in Freudenthal’s assumption of a “humanlike” recipient, which serves as a catch-all for the various assumptions that serve as the foundations for Freudenthal’s scheme.

It is possible to state more precise assumptions which form the basis of Freudenthal’s scheme, but among these will be some fairly strong assumptions about how the recipient interprets the message. In particular, one of these is the assumption that all semantic concepts of interest can be characterized by lists of syntactic examples. (…)

Information Theory

The classical theory of communication does not investigate the meaning associated with information and simply studies the process of communicating the information, in its exact syntactic form. It is the success of this theory that motivates our work: computers are so successful in communicating a sequence of bits, that the most likely source of “miscommunication” is a misinterpretation of what these bits mean. (…)

Interactive Proofs and Knowledge

Finally, the theory of interactive proofs and knowledge [pdf] (and also the related M. Blum and S. Kannan. Designing programs that check their work) gets further into the gap between Alice and Bob, by ascribing to them different, conflicting intents, though they still share common semantics. It turns out this gap already starts to get to the heart of the issues that we consider, and this theory is very useful to us at a technical level. In particular, in this work we consider a setting where Bob wishes to gain knowledge from Alice. Of course, in our setting Bob is not mistrustful of Alice, he simply does not understand her. (…)

Modeling issues

Our goal is to cast the problem of “meaningful” communication between Alice and Bob in a purely mathematical setting. We start by considering how to formulate the problem where the presence of a “trusted third party” would easily solve the problem.

Consider the informal setting in which Alice and Bob speak different natural languages and wish to have a discussion via some binary channel. We would expect that a third party who knows both languages could give finite encoding rules to Alice and Bob to facilitate this discussion, and we might be tempted to require that Alice’s statements translate into the same statements in Bob’s language that the third party would have selected and vice-versa.

In the absence of the third party, this is unreasonable to expect, though: suppose that Alice and Bob were given encoding rules that were identical to those that a third party would have given them, except that some symmetric sets of words have been exchanged—say, Alice thinks “left” means “right,” “clockwise” means “counter-clockwise,” etc. Unless they have some way to tell that these basic concepts have been switched, observe that they would still have a conversation that is entirely sensible to each of them. [See also] Thus, if we are to have any hope at all, we must be prepared to accept interactions that are indistinguishable from successes as “successes” as well. We do not wish to take this to an extreme, though: Bob cannot distinguish among Alices who say nothing, and yet we would not classify their interactions as “successes.”

At the heart of the issues raised by the discussion above is the question: what does Bob hope to get out of this conversation with Alice? In general, why do computers, or humans communicate? Only by pinning down this issue can we ask the question, “can they do it without a common language?”

We believe that there are actually many possible motivations for communication. Some communication is motivated by physical needs, and others are motivated purely by intellectual needs or even curiosity. However these diverse settings still share some common themes: communication is being used by the players to achieve some effects that would be hard to achieve without communication. In this paper, we focus on one natural motivation for communication: Bob wishes to communicate with Alice to solve some computational problems. (…)

In order to establish communication between Alice and Bob, Bob runs in time exponential in a parameter that could be described informally as the length of the dictionary that translates Bob’s language into Alice’s language. (Formally, the parameter is the description length of the protocol for interpreting Alice in his encoding of Turing machines.) (…) [p.3]
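
[A minimal sketch of the brute-force idea behind that bound. Purely for illustration, an “interpreter” here is a letter-substitution dictionary and Bob has a goal-dependent check of the decoded answer; the paper’s actual construction enumerates protocols over Turing machine descriptions.]

    # Toy "search over all dictionaries": Bob tries every mapping of Alice's
    # symbols onto a target alphabet and keeps the first one under which her
    # reply passes his goal-dependent check. Message and check are invented.
    from itertools import product

    def bob_verifies(decoded):
        # Stand-in for Bob checking that a decoded answer serves his goal.
        return decoded == "right"

    alice_reply = "ajxcb"           # Alice's word, in her own language (toy)
    src = sorted(set(alice_reply))  # the symbols Bob must interpret
    TARGETS = "ghirt"               # tiny target alphabet, to keep the toy fast;
                                    # in general the search cost is exponential
                                    # in the dictionary length, as quoted above
    for perm in product(TARGETS, repeat=len(src)):
        d = dict(zip(src, perm))
        if bob_verifies("".join(d[c] for c in alice_reply)):
            print("interpreter found:", d)
            break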

[To see proofs of theorems and more, click pdf]

Conclusions

In the previous sections we studied the question, “how can two intelligent interacting players attempt to achieve some meaningful communication in a universal setting, i.e., one in which the two players do not start with a common background?” We return now to the motivation for studying this question, and the challenges that need to be dealt with to address the motivations. (…)

We believe that this work has raised and addressed some fundamental questions of intrinsic interest. However this is not the sole motivation for studying this problem. We believe that these questions also go to the heart of “protocol issues” in modern computer networks. Modern computational infrastructures are built around the concept of communication and indeed a vast amount of effort is poured into the task of ensuring that the computers work properly as communication devices. Yet as computers and networks continue to evolve at this rapid pace, one problem is becoming increasingly burdensome: that of ensuring that every pair of computers is able to “understand” each other, so as to communicate meaningfully. (…)

Current infrastructures ensure this ability for pairs to talk to each other by explicitly going through a “setup” phase, where a third party who knows the specifications of both elements of a pair sets up a common language/protocol for the two to talk to each other, and then either or both players learn (download) this common language to establish communication. An everyday example of such an occurrence is when we attempt to get our computer to print on a new printer: we download a device driver for our computer, which is a common language written by someone who knows both our computer and the printer.

We remark that this issue is a fundamental one, and not merely an issue of improper design. Current protocols are designed with a fixed pair of types of devices in mind. However, we expect for our computers to be capable of communicating with all other communication devices, even ones that did not exist when our computer was built. While it would be convenient if all computers interacted with each other using a single fixed protocol that is static over time, this is no more reasonable to expect than asking humans to agree on a single language to converse in, and then to expect this language to stay fixed over time. Thus, to satisfy our expectations in the current setting, it is essential that computers are constantly updated so as to have universal connectivity over time. (…)

This work was motivated by a somewhat radical alternative scenario for communication. Perhaps we should not set computers up with common languages, but rather exploit the universality in our favor, by letting them evolve to a common language. But then this raises issues such as: how can the computers know when they have converged to a common understanding? Or, how does one of the computers realize that the computer it is communicating with is no longer in the same mode as they were previously, and so the protocol for communication needs to be adjusted? The problem described in the opening paragraph of the introduction is simply the extremal version of such issues, where the communicating players are modeled as having no common background. (…)

Perhaps the main contribution of this work is to suggest that communication is not an end in itself, but rather a means to achieving some general goal. Such a goal certainly exists in all the practical settings above, though it is no longer that of deciding membership in some set S. Our thesis is that one can broaden the applicability of this work to other settings by (1) precisely articulating the goal of communication in each setting and (2) constructing “universal protocols” that achieve these goals. (…)

One of the implicit suggestions in this work is that communicating players should periodically test to see if the assumption of common understanding still holds. When this assumption fails, presumably this happened due to a “mild” change in the behavior of one of the players. It may be possible to design communication protocols that use such a “mildness” assumption to search and re-synchronize the communicating players where the “exponential search” takes time exponential in the amount of change in the behavior of the players. Again, pinning down a precise measure of the change and designing protocols that function well against this measure are open issues.”

Brendan Juba, Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, and Harvard University, School of Engineering and Applied Sciences, Theory of Computing group, and Madhu Sudan, Indian computer scientist, professor of computer science at the Massachusetts Institute of Technology (MIT), Universal Semantic Communication I (pdf), MIT, 2010 (Illustration source)

See also:

☞ Brendan Juba, Madhu Sudan, Universal Semantic Communication II (pdf), MIT
☞ J. Bao, P. Basu, M. Dean, C. Partridge, A. Swami, W. Leland, J. A. Hendler, Towards a Theory of Semantic Communication (Extended Technical Report) (pdf)

Jul
18th
Mon
permalink

From Technologist to Philosopher. Why you should quit your technology job and get a Ph.D. in the humanities


"It’s fun being a technologist. In our Internet-enabled era, it is easy for technologists to parlay creative power into societal power: We build systems that ease the transactions of everyday life, and earn social validation that we are "making the world a better place." Within a few years I had achieved more worldly success than previous generations could have imagined. I had a high-paying technology job, I was doing cutting-edge AI work, and I was living the technotopian good life.

But there was a problem. Over time, it became increasingly hard to ignore the fact that the artificial intelligence systems I was building were not actually that intelligent. They could perform well on specific tasks; but they were unable to function when anything changed in their environment. I realized that, while I had set out in AI to build a better thinker, all I had really done was to create a bunch of clever toys—toys that were certainly not up to the task of being our intellectual surrogates. (…)

I wanted to better understand what it was about how we were defining intelligence that was leading us astray: What were we failing to understand about the nature of thought in our attempts to build thinking machines? (…)

I realized that the questions I was asking were philosophical questions—about the nature of thought, the structure of language, the grounds of meaning. So if I really hoped to make major progress in AI, the best place to do this wouldn’t be another AI lab. If I really wanted to build a better thinker, I should go study philosophy. (…)

Thus, about a decade ago, I quit my technology job to get a Ph.D. in philosophy. (…) I was not aware that there existed distinct branches of analytic and continental philosophy, which took radically different approaches to exploring thought and language; or that there was a discipline of rhetoric, or hermeneutics, or literary theory, where thinkers explore different aspects of how we create meaning and make sense of our world.

As I learned about those things, I realized just how limited my technologist view of thought and language was. I learned how the quantifiable, individualistic, ahistorical—that is, computational—view I had of cognition failed to account for whole expanses of cognitive experience (including, say, most of Shakespeare). I learned how pragmatist and contextualist perspectives better reflect the diversity and flexibility of our linguistic practices than do formal language models. I learned how to recognize social influences on inquiry itself—to see the inherited methodologies of science, the implicit power relations expressed in writing—and how those shape our knowledge.

Most striking, I learned that there were historical precedents for exactly the sort of logical oversimplifications that characterized my AI work. Indeed, there were even precedents for my motivation in embarking on such work in the first place. I found those precedents in episodes ranging from ancient times—Plato’s fascination with math-like forms as a source of timeless truth—to the 20th century—the Logical Positivists and their quest to create unambiguous language to express sure foundations for all knowledge. They, too, had an uncritical notion of progress; and they, too, struggled in their attempts to formally quantify human concepts that I now see as inextricably bound up with human concerns and practices.

In learning the limits of my technologist worldview, I didn’t just get a few handy ideas about how to build better AI systems. My studies opened up a new outlook on the world. I would unapologetically characterize it as a personal intellectual transformation: a renewed appreciation for the elements of life that are not scientifically understood or technologically engineered.

In other words: I became a humanist.

And having a more humanistic sensibility has made me a much better technologist than I was before. I no longer see the world through the eyes of a machine—through the filter of what we are capable of reducing to its logical foundations. I am more aware of how the products we build shape the culture we are in. I am more attuned to the ethical implications of our decisions. And I no longer assume that machines can solve all of our problems for us. The task of thinking is still ours. (…)

The technology issues facing us today—issues of identity, communication, privacy, regulation—require a humanistic perspective if we are to deal with them adequately. (…)

I see a humanities degree as nothing less than a rite of passage to intellectual adulthood. A way of evolving from a sophomoric wonderer and critic into a rounded, open, and engaged intellectual citizen. When you are no longer engaged only in optimizing your products—and you let go of the technotopian view—your world becomes larger, richer, more mysterious, more inviting. More human. (…)

Getting a humanities Ph.D. is the most deterministic path you can find to becoming exceptional in the industry. (…) There is an industrywide shift toward more “product thinking” in leadership—leaders who understand the social and cultural contexts in which our technologies are deployed.

Products must appeal to human beings, and a rigorously cultivated humanistic sensibility is a valued asset for this challenge. That is perhaps why a technology leader of the highest status—Steve Jobs—recently credited an appreciation for the liberal arts as key to his company’s tremendous success with their various i-gadgets.

It is a convenient truth: You go into the humanities to pursue your intellectual passion; and it just so happens, as a by-product, that you emerge as a desired commodity for industry. Such is the halo of human flourishing.”

Damon Horowitz, BA in Computer Science from Columbia, an MS from the MIT Media Lab, and a PhD in philosophy from Stanford, recently joined Google as In-House Philosopher / Director of Engineering, From Technologist to Philosopher, The Chronicle of Higher Education, July 17, 2011 (Illustration source: Brian Taylor for The Chronicle)

Why Machines Need People

— Damon Horowitz, Why Machines Need People, TEDxSoMa, 22 Jan 2010

Feb
23rd
Wed
permalink

Mark Changizi on Humans, Version 3.0.


The next giant leap in human evolution may not come from new fields like genetic engineering or artificial intelligence, but rather from appreciating our ancient brains.

“Genetic engineering could engender marked changes in us, but it requires a scientific bridge between genotypes—an organism’s genetic blueprints—and phenotypes, which are the organisms themselves and their suite of abilities. A sufficiently sophisticated bridge between these extremes is nowhere in sight.

And machine-enhancement is part of our world even today, manifesting in the smartphones and desktop computers most of us rely on each day. Such devices will continue to further empower us in the future, but serious hardware additions to our brains will not be forthcoming until we figure out how to build human-level artificial intelligences (and meld them to our neurons), something that will require cracking the mind’s deepest mysteries. I have argued that we’re centuries or more away from that. (…)

There is, however, another avenue for human evolution, one mostly unappreciated in both science and fiction. It is this unheralded mechanism that will usher in the next stage of human, giving future people exquisite powers we do not currently possess, powers worthy of natural selection itself. And, importantly, it doesn’t require us to transform into cyborgs or bio-engineered lab rats. It merely relies on our natural bodies and brains functioning as they have for millions of years.

This mystery mechanism of human transformation is neuronal recycling, coined by neuroscientist Stanislas Dehaene, wherein the brain’s innate capabilities are harnessed for altogether novel functions.

This view of the future of humankind is grounded in an appreciation of the biologically innate powers bestowed upon us by hundreds of millions of years of evolution. This deep respect for our powers is sometimes lacking in the sciences, where many are taught to believe that our brains and bodies are taped-together, far-from-optimal kluges. In this view, natural selection is so riddled by accidents and saddled with developmental constraints that the resultant biological hardware and software should be described as a “just good enough” solution rather than as a “fine-tuned machine.”

So it is no wonder that, when many envisage the future, they posit that human invention—whether via genetic engineering or cybernetic AI-related enhancement—will be able to out-do what evolution gave us, and so bootstrap our species to a new level. This rampant overoptimism about the power of human invention is also found among many of those expecting salvation through a technological singularity, and among those who fancy that the Web may some day become smart.

The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances. And our peculiar variety of color vision, with two of our sensory cones having sensitivity to nearly the same part of the spectrum, is not an accidental mutation that merely stuck around, but, rather, appears to function with the signature of hemoglobin physiology in mind, so as to detect the color signals primates display on their faces and rumps.

These and other inborn capabilities we take for granted are not kluges, they’re not “good enough,” and they’re more than merely smart. They’re astronomically brilliant in comparison to anything humans are likely to invent for millennia.

Neuronal recycling exploits this wellspring of potent powers. If one wants to get a human brain to do task Y despite it not having evolved to efficiently carry out task Y, then a key point is not to forcefully twist the brain to do Y. Like all animal brains, human brains are not general-purpose universal learning machines, but, instead, are intricately structured suites of instincts optimized for the environments in which they evolved. To harness our brains, we want to let the brain’s brilliant mechanisms run as intended—i.e., not to be twisted. Rather, the strategy is to twist Y into a shape that the brain does know how to process. (…)

There is a very good reason to be optimistic that the next stage of human will come via the form of adaptive harnessing, rather than direct technological enhancement: It has already happened.

We have already been transformed via harnessing beyond what we once were. We’re already Human 2.0, not the Human 1.0, or Homo sapiens, that natural selection made us. We Human 2.0’s have, among many powers, three that are central to who we take ourselves to be today: writing, speech, and music (the latter perhaps being the pinnacle of the arts). Yet these three capabilities, despite having all the hallmarks of design, were not a result of natural selection, nor were they the result of genetic engineering or cybernetic enhancement to our brains. Instead, and as I argue in both The Vision Revolution and my forthcoming Harnessed, these are powers we acquired by virtue of harnessing, or neuronal recycling.

In this transition from Human 1.0 to 2.0, we didn’t directly do the harnessing. Rather, it was an emergent, evolutionary property of our behavior, our nascent culture, that bent and shaped writing to be right for our visual system, speech just so for our auditory system, and music a match for our auditory and evocative mechanisms.

And culture’s trick? It was to shape these artifacts to look and sound like things from our natural environment, just what our sensory systems evolved to expertly accommodate. There are characteristic sorts of contour conglomerations occurring among opaque objects strewn about in three dimensions (like our natural Earthly habitats), and writing systems have come to employ many of these naturally common conglomerations rather than the naturally uncommon ones. Sounds in nature, in particular among the solid objects that are most responsible for meaningful environmental auditory stimuli, follow signature patterns, and speech also follows these patterns, both in its fundamental phoneme building blocks and in how phonemes combine into morphemes and words. And we humans, when we move and behave, make sounds having a characteristic animalistic signature, something we surely have specialized auditory mechanisms for sensing and processing; music is replete with these characteristic sonic signatures of animal movements, harnessing our auditory mechanisms that evolved for recognizing the actions of other large mobile creatures like ourselves.

Culture’s trick, I have argued in my research, was to harness by mimicking nature. This “nature-harnessing” was the route by which these three kernels of Human 2.0 made their way into Human 1.0 brains never designed for them.

The road to Human 3.0 and beyond will, I believe, be largely due to ever more instances of this kind of harnessing. And although we cannot easily anticipate the new powers we will thereby gain, we should not underestimate the potential magnitude of the possible changes. After all, the change from Human 1.0 to 2.0 is nothing short of universe-rattling: It transformed a clever ape into a world-ruling technological philosopher.

Although the step from Human 1.0 to 2.0 was via cultural selection, not via explicit human designers, does the transformation to Human 3.0 need to be entirely due to a process like cultural evolution, or might we have any hope of purposely guiding our transformation? When considering our future, that’s probably the most relevant question we should be asking ourselves.

I am optimistic that we may be able to explicitly design nature-harnessing technologies in the near future, now that we have begun to break open the nature-harnessing technologies cultural selection has built thus far. One of my reasons for optimism is that nature-harnessing technologies (like writing, speech, and music) must mimic fundamental ecological features in nature, and that is a much easier task for scientists to tackle than emulating the exorbitantly complex mechanisms of the brain.

And nature-harnessing may be an apt description of emerging technological practices, such as the film industry’s ongoing struggle to better design the 3D experience to tap into the evolved functions of binocular vision, the gaming industry’s attempts to “gameify” certain tasks (exemplified in the work of Jane McGonigal), or the drive within robotics for more emotionally expressive faces (such as the child robot of Minoru Asada).

Admittedly, none of these sound remotely as revolutionary as writing, speech, or music, but it can be difficult to envision what these developments can become once they more perfectly harness our exquisite biological instincts. (Even writing was, for centuries, used mostly for religious and governmental book-keeping purposes—only relatively recently has the impact of the written word expanded to revolutionize the lives of average humans.)

The point is, most science fiction gets all this wrong. While the future may be radically “futuristic,” with our descendants having breathtaking powers we cannot fathom, it probably won’t be because they evolved into something new, or were genetically modified, or had AI-chip enhancements. Those powerful beings will simply be humans, like you and I. But they’ll have been nature-harnessed in ways we cannot anticipate, the magic latent within each of us used for new, brilliant Human 3.0 capabilities.”
Mark Changizi (cognitive scientist, author), Humans, Version 3.0, SEED.com, Feb 23, 2011 (Picture source: Rzeczpospolita)

See also:

Prof. Stanislas Dehaene, "How do humans acquire novel cultural skills? The neuronal recycling model", LSE Institute | Nicod