Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso


Jun 18th, Sat

Dunbar’s Number: Why We Can’t Have More Than 150 Friends


"Dunbar’s number is suggested to be a theoretical cognitive limit to the number of people with whom one can maintain stable social relationships. These are relationships in which an individual knows who each person is, and how each person relates to every other person. Proponents assert that numbers larger than this generally require more restrictive rules, laws, and enforced norms to maintain a stable, cohesive group.

No precise value has been proposed for Dunbar’s number. It has been proposed to lie between 100 and 230, with a commonly used value of 150. Dunbar’s number states the number of people one knows and keeps social contact with, and it does not include the number of people known personally with a ceased social relationship, nor people just generally known who lack a persistent social relationship, a number which might be much higher and likely depends on long-term memory size.

Dunbar’s number was first proposed by British anthropologist Robin Dunbar, who theorized that "this limit is a direct function of relative neocortex size, and that this in turn limits group size … the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained." On the periphery, the number also includes past colleagues such as high school friends with whom a person would want to reacquaint oneself if they met again.” (Wiki)

"Robin Dunbar used the volume of the neocortex – the ‘thinking’ part of the brain – as his measure of brain size, because this accounts for most of the brain’s expansion within primates. He found that both measures of social complexity correlated with relative neocortex volume in primate species. Subsequently, he predicted the social group size of some other monkey and ape species from their neocortex volumes – with impressive accuracy.

Thanks to his ground-breaking work, and the follow-on studies which it stimulated, numerous features of primate social behaviour can now be predicted from neocortex volume – from the time devoted to social interaction, the level of social skills and the degree of tactical deception practiced to community and coalition size. We can also predict when social groups will split up because their size is unsustainable; Robin Dunbar’s research shows that the volume of the neocortex imposes a limit on the number of relationships that individual primates can sustain in their mental model of their social world.

The human dimension

Humans are primates, too – so do they fit into the pattern established for monkeys and apes? This is the key question which Robin Dunbar sought to answer by using the same equations to predict human social group and clique size from neocortex volume. The results were… ~150 for social group size, and ~12 for the more intimate clique size. He subsequently discovered that modern humans operate on a hierarchy of group sizes. “Interestingly”, he says, “the literature suggests that 150 is roughly the number of people you could ask for a favour and expect to have it granted. Functionally, that’s quite similar to apes’ core social groups.”"

The ultimate brain teaser, University of Liverpool - Research Intelligence, 17 Aug 2003
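
Purely as an illustration of the kind of prediction described above, here is a minimal sketch, assuming a log-linear regression of group size on neocortex ratio; the coefficients and the human neocortex ratio below are approximate values often quoted for Dunbar's 1992 analysis and should be read as assumptions, not his actual fit.

    import math

    # Illustrative log-linear prediction of group size from neocortex ratio,
    # in the spirit of the regression described above. Coefficients and the
    # human neocortex ratio are assumed, approximate values, not the original fit.

    def predicted_group_size(neocortex_ratio, intercept=0.093, slope=3.389):
        """log10(group size) = intercept + slope * log10(neocortex ratio)."""
        return 10 ** (intercept + slope * math.log10(neocortex_ratio))

    if __name__ == "__main__":
        human_ratio = 4.1  # assumed, for illustration only
        print(round(predicted_group_size(human_ratio)))  # roughly 150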

Q: How did you come up with this concept?

I was working on the arcane question of why primates spend so much time grooming one another, and I tested another hypothesis – which says the reason why primates have big brains is because they live in complex social worlds. Because grooming is social, all these things ought to map together, so I started plotting brain size and group size and grooming time against one another. You get a nice set of relationships.

It was about 3am, and I thought, hmm, what happens if you plug humans into this? And you get this number of 150. This looked implausibly small, given that we all live in cities now, but it turned out that this was the size of a typical community in hunter-gatherer societies. And the average village size in the Domesday Book is 150 [people].

It’s the same when we have much better data – in the 18th century, for example, thanks to parish registers. County by county, the average size of a village is again 150. Except in Kent, where it was 100. I’ve no idea why.

Q: Has this number evolved at all?

The Dunbar number probably dates back to the appearance of anatomically modern humans 250,000 years ago. If you go back in time, by estimating brain size, you can see community size declining steadily.

Q: Why did we evolve as a social species?

Simply, it’s the key evolutionary strategy of primates. Group living and explicitly communal solutions to the problem of survival out there on the plains or in the forests… that’s a primate adaptation, and they evolved that very early on.

Most species of birds and animals aren’t as intensely social. Sociality for most species hovers around pair-bonds, that’s as complicated as it gets. The species with big brains are the ones who mate monogamously… The lesson is that there is something computationally very demanding about maintaining close relationships over a very long period of time – as we all know!

Q: How can we grow the Dunbar number?

We’re caught in a bind: community sizes were designed for hunter-gatherer-type societies where people weren’t living on top of one another. Your 150 were scattered over a wide area, but everybody shared the same 150. This made for a very densely interconnected community, and this means the community polices itself. You don’t need lawyers and policemen. If you step out of line, granny will wag her finger at you.

Our problem now is the sheer density of folk – our networks aren’t compact. You have clumps of friends scattered around the world who don’t know one another: now you don’t have an interwoven network. It leads to a less well integrated society. How to re-create that old sense of community in these new circumstances? That’s an engineering problem. How do we work around it?

The alternative solution, of course, is that we could evolve bigger brains. But they’d have to be much bigger, and it takes a long time.

Q: What about the role of the web in this?

(…) Words are slippery, a touch is worth a 1,000 words any day.”

Aleks Krotoski, Robin Dunbar: we can only ever have 150 friends at most…, Guardian, The Observer, 14 March 2010 

How Many Friends Does One Person Need?

We are the product of our evolutionary history, and that history colors our experience of everyday life — from the number of friends we have to how religious we are. Renowned evolutionary anthropologist Professor Robin Dunbar visits the RSA to explain how the very distant past underpins all of our current behaviors, and how we can best utilize that knowledge.

Robin Dunbar is a British anthropologist and evolutionary psychologist specialising in primate behaviour. He is currently Professor of Evolutionary Anthropology and Director of the Institute of Cognitive and Evolutionary Anthropology at the University of Oxford, and Co-director of the British Academy Centenary Research Project. Robin Dunbar: How Many Friends Does One Person Need?, FORA.tv, RSA, London, Feb 2, 2010. (Illustration source: The magic number, RSA Journal)

See also:

Robin I. M. Dunbar, Neocortex size as a constraint on group size in primates, Department of Anthropology, University College London
☞ B. Goncalves, N. Perra, A. Vespignani, Validation of Dunbar’s number in Twitter conversations, Cornell University, May 2011
Is the Social Brain Theory Applicable to Human Individual Differences? Relationship between Sociability Personality Dimension and Brain Size (pdf)

"Our study intends to examine whether the social brain theory is applicable to human individual differences. According to the social brain theory primates have larger brains as it could be expected from their body sizes due to the adaptation to a more complex social life. Regarding humans there were few studies about the relationship between theory of mind and frontal and temporal brain lobes. We hypothesized that these brain lobes, as well as the whole cerebrum and neocortex are in connection with the Sociability personality dimension that is associated with individuals’ social lives. Our findings support this hypothesis as Sociability correlated positively with the examined brain structures if we control the effects of body size differences and age. These results suggest that the social brain theory can be extended to human interindividual differences and they have some implications to personality psychology too.”

Internet users now have more and closer friends than those offline, Ars Technica, June 16, 2011
☞ Douglas Fox, The Physics of Intelligence, Scientific American, July 2011 (Note at Lapidarium)
William Deresiewicz on the meaning of friendship in our time

Jun 16th, Thu

The Physics of Intelligence


The laws of physics may well prevent the human brain from evolving into an ever more powerful thinking machine.

  • Human intelligence may be close to its evolutionary limit. Various lines of research suggest that most of the tweaks that could make us smarter would hit limits set by the laws of physics.
  • Brain size, for instance, helps up to a point but carries diminishing returns: brains become energy-hungry and slow. Better “wiring” across the brain also would consume energy and take up a disproportionate amount of space.
  • Making wires thinner would hit thermodynamic limitations similar to those that affect transistors in computer chips: communication would get noisy.
  • Humans, however, might still achieve higher intelligence collectively. And technology, from writing to the Internet, enables us to expand our mind outside the confines of our body.

"What few people realize is that the laws of physics place tough constraints on our mental faculties as well. Anthropologists have speculated about anatomic roadblocks to brain expansion—for instance, whether a larger brain could fit through the birth canal of a bipedal human. If we assume, though, that evolution can solve the birth canal problem, then we are led to the cusp of some even more profound questions.

One might think, for example, that evolutionary processes could increase the number of neurons in our brain or boost the rate at which those neurons exchange information and that such changes would make us smarter. But several recent trends of investigation, if taken together and followed to their logical conclusion, seem to suggest that such tweaks would soon run into physical limits. Ultimately those limits trace back to the very nature of neurons and the statistically noisy chemical exchanges by which they communicate. “Information, noise and energy are inextricably linked,” says Simon Laughlin, a theoretical neuroscientist at the University of Cambridge. “That connection exists at the thermodynamic level.”

Do the laws of thermodynamics, then, impose a limit on neuron-based intelligence, one that applies universally, whether in birds, primates, porpoises or praying mantises? This question apparently has never been asked in such broad terms, but the scientists interviewed for this article generally agree that it is a question worth contemplating. “It’s a very interesting point,” says Vijay Balasubramanian, a physicist who studies neural coding of information at the University of Pennsylvania. “I’ve never even seen this point discussed in science fiction.”

Intelligence is of course a loaded word: it is hard to measure and even to define. Still, it seems fair to say that by most metrics, humans are the most intelligent animals on earth. But as our brain has evolved, has it approached a hard limit to its ability to process information? Could there be some physical limit to the evolution of neuron-based intelligence—and not just for humans but for all of life as we know it? (…)

Staying in Touch

Much of the energetic burden of brain size comes from the organ’s communication networks: in the human cortex, communications account for 80 percent of energy consumption. But it appears that as size increases, neuronal connectivity also becomes more challenging for subtler, structural reasons. (…)

A typical neuron has an elongated tail called the axon. At its end, the axon branches out, with the tips of the branches forming synapses, or contact points, with other cells. Axons, like telegraph wires, may connect different parts of the brain or may bundle up into nerves that extend from the central nervous system to the various parts of the body.

In their pioneering efforts, biologists measured the diameter of axons under microscopes and counted the size and density of nerve cells and the number of synapses per cell. They surveyed hundreds, sometimes thousands, of cells per brain in dozens of species. Eager to refine their mathematical curves by extending them to ever larger beasts, they even found ways to extract intact brains from whale carcasses. The five-hour process, meticulously described in the 1880s by biologist Gustav Adolf Guldberg, involved the use of a two-man lumberjack saw, an ax, a chisel and plenty of strength to open the top of the skull like a can of beans.

These studies revealed that as brains expand in size from species to species, several subtle but probably unsustainable changes happen. First, the average size of nerve cells increases. This phenomenon allows the neurons to connect to more and more of their compatriots as the overall number of neurons in the brain increases. But larger cells pack into the cerebral cortex less densely, so the distance between cells increases, as does the length of axons required to connect them. And because longer axons mean longer times for signals to travel between cells, these projections need to become thicker to maintain speed (thicker axons carry signals faster).

Researchers have also found that as brains get bigger from species to species, they are divided into a larger and larger number of distinct areas. You can see those areas if you stain brain tissue and view it under a microscope: patches of the cortex turn different colors. These areas often correspond with specialized functions, say, speech comprehension or face recognition. And as brains get larger, the specialization unfolds in another dimension: equivalent areas in the left and right hemispheres take on separate functions—for example, spatial versus verbal reasoning.

For decades this dividing of the brain into more work cubicles was viewed as a hallmark of intelligence. But it may also reflect a more mundane truth, says Mark Changizi, a theoretical neurobiologist at 2AI Labs in Boise, Idaho: specialization compensates for the connectivity problem that arises as brains get bigger. As you go from a mouse brain to a cow brain with 100 times as many neurons, it is impossible for neurons to expand quickly enough to stay just as well connected. Brains solve this problem by segregating like-functioned neurons into highly interconnected modules, with far fewer long-distance connections between modules. The specialization between right and left hemispheres solves a similar problem; it reduces the amount of information that must flow between the hemispheres, which minimizes the number of long, interhemispheric axons that the brain needs to maintain. “All of these seemingly complex things about bigger brains are just the backbends that the brain has to do to satisfy the connectivity problem” as it gets larger, Changizi argues. “It doesn’t tell us that the brain is smarter.”

Jan Karbowski, a computational neuroscientist at the Polish Academy of Sciences in Warsaw, agrees. “Somehow brains have to optimize several parameters simultaneously, and there must be trade-offs,” he says. “If you want to improve one thing, you screw up something else.” What happens, for example, if you expand the corpus callosum (the bundle of axons connecting right and left hemispheres) quickly enough to maintain constant connectivity as brains expand? And what if you thicken those axons, so the transit delay for signals traveling between hemispheres does not increase as brains expand? The results would not be pretty. The corpus callosum would expand—and push the hemispheres apart—so quickly that any performance improvements would be neutralized.

These trade-offs have been laid into stark relief by experiments showing the relation between axon width and conduction speed. At the end of the day, Karbowski says, neurons do get larger as brain size increases, but not quite quickly enough to stay equally well connected. And axons do get thicker as brains expand, but not quickly enough to make up for the longer conduction delays.

Keeping axons from thickening too quickly saves not only space but energy as well, Balasubramanian says. Doubling the width of an axon doubles energy expenditure, while increasing the velocity of pulses by just 40 percent or so. Even with all of this corner cutting, the volume of white matter (the axons) still grows more quickly than the volume of gray matter (the main body of neurons containing the cell nucleus) as brains increase in size. To put it another way, as brains get bigger, more of their volume is devoted to wiring rather than to the parts of individual cells that do the actual computing, which again suggests that scaling size up is ultimately unsustainable.
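
As a rough numerical sketch of that trade-off (an illustration under the stated assumptions, not the researchers' model), suppose energy cost grows roughly linearly with axon diameter while conduction velocity grows only as its square root, the scaling usually quoted for unmyelinated axons:

    # Assumed scalings, consistent with the text above: energy ~ diameter,
    # conduction velocity ~ sqrt(diameter).

    def scale_axon(diameter_factor):
        """Relative changes when an axon's diameter is multiplied by the factor."""
        return {
            "energy_cost": diameter_factor,                 # ~ linear in diameter
            "conduction_velocity": diameter_factor ** 0.5,  # ~ square root
        }

    if __name__ == "__main__":
        print(scale_axon(2.0))
        # {'energy_cost': 2.0, 'conduction_velocity': 1.414...}:
        # doubling width doubles the energy bill but speeds signals by only ~41%.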

The Primacy of Primates

It is easy, with this dire state of affairs, to see why a cow fails to squeeze any more smarts out of its grapefruit-size brain than a mouse does from its blueberry-size brain. But evolution has also achieved impressive workarounds at the level of the brain’s building blocks. When Jon H. Kaas, a neuroscientist at Vanderbilt University, and his colleagues compared the morphology of brain cells across a spectrum of primates in 2007, they stumbled onto a game changer—one that has probably given humans an edge. (…)

Humans pack 100 billion neurons into 1.4 kilograms of brain, but a rodent that had followed its usual neuron-size scaling law to reach that number of neurons would now have to drag around a brain weighing 45 kilograms. And metabolically speaking, all that brain matter would eat the varmint out of house and home. “That may be one of the factors in why the large rodents don’t seem to be [smarter] at all than the small rodents,” Kaas says.

Having smaller, more densely packed neurons does seem to have a real impact on intelligence. In 2005 neurobiologists Gerhard Roth and Ursula Dicke, both at the University of Bremen in Germany, reviewed several traits that predict intelligence across species (as measured, roughly, by behavioral complexity) even more effectively than the encephalization quotient does. “The only tight correlation with intelligence,” Roth says, “is in the number of neurons in the cortex, plus the speed of neuronal activity,” which decreases with the distance between neurons and increases with the degree of myelination of axons. Myelin is fatty insulation that lets axons transmit signals more quickly.

If Roth is right, then primates’ small neurons have a double effect: first, they allow a greater increase in cortical cell number as brains enlarge; and second, they allow faster communication, because the cells pack more closely. Elephants and whales are reasonably smart, but their larger neurons and bigger brains lead to inefficiencies. “The packing density of neurons is much lower,” Roth says, “which means that the distance between neurons is larger and the velocity of nerve impulses is much lower.”

In fact, neuroscientists have recently seen a similar pattern in variations within humans: people with the quickest lines of communication between their brain areas also seem to be the brightest. One study, led in 2009 by Martijn P. van den Heuvel of the University Medical Center Utrecht in the Netherlands, used functional magnetic resonance imaging to measure how directly different brain areas talk to one another—that is, whether they talk via a large or a small number of intermediary areas. Van den Heuvel found that shorter paths between brain areas correlated with higher IQ. Edward Bullmore, an imaging neuroscientist at the University of Cambridge, and his collaborators obtained similar results the same year using a different approach. They compared working memory (the ability to hold several numbers in one’s memory at once) among 29 healthy people. They then used magnetoencephalographic recordings from their subjects’ scalp to estimate how quickly communication flowed between brain areas. People with the most direct communication and the fastest neural chatter had the best working memory.

It is a momentous insight. We know that as brains get larger, they save space and energy by limiting the number of direct connections between regions. The large human brain has relatively few of these long-distance connections. But Bullmore and van den Heuvel showed that these rare, nonstop connections have a disproportionate influence on smarts: brains that scrimp on resources by cutting just a few of them do noticeably worse. “You pay a price for intelligence,” Bullmore concludes, “and the price is that you can’t simply minimize wiring.”

Intelligence Design

If communication between neurons, and between brain areas, is really a major bottleneck that limits intelligence, then evolving neurons that are even smaller (and closer together, with faster communication) should yield smarter brains. Similarly, brains might become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses, called ion channels, are inherently unreliable.

Ion channels are tiny valves that open and close through changes in their molecular folding. When they open, they allow ions of sodium, potassium or calcium to flow across cell membranes, producing the electrical signals by which neurons communicate. But being so minuscule, ion channels can get flipped open or closed by mere thermal vibrations. A simple biology experiment lays the defect bare. Isolate a single ion channel on the surface of a nerve cell using a microscopic glass tube, sort of like slipping a glass cup over a single ant on a sidewalk. When you adjust the voltage on the ion channel—a maneuver that causes it to open or close—the ion channel does not flip on and off reliably like your kitchen light does. Instead it flutters on and off randomly. Sometimes it does not open at all; other times it opens when it should not. By changing the voltage, all you do is change the likelihood that it opens.

It sounds like a horrible evolutionary design flaw—but in fact, it is a compromise. “If you make the spring on the channel too loose, then the noise keeps on switching it,” Laughlin says—as happens in the biology experiment described earlier. “If you make the spring on the channel stronger, then you get less noise,” he says, “but now it’s more work to switch it,” which forces neurons to spend more energy to control the ion channel. In other words, neurons save energy by using hair-trigger ion channels, but as a side effect the channels can flip open or close accidentally. The trade-off means that ion channels are reliable only if you use large numbers of them to “vote” on whether or not a neuron will generate an impulse. But voting becomes problematic as neurons get smaller. “When you reduce the size of neurons, you reduce the number of channels that are available to carry the signal,” Laughlin says. “And that increases the noise.”
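
A toy version of this "voting" argument, not Laughlin's actual calculation: if a neuron relies on N ion channels, each open with some probability, the relative fluctuation in the number of open channels grows as N shrinks, which is one way to see why thinner axons with fewer channels are intrinsically noisier.

    import math

    # Toy binomial "voting" model: with N channels each open with probability p,
    # the coefficient of variation of the open-channel count is
    # sqrt((1 - p) / (N * p)), so noise grows as the channel count falls.

    def relative_channel_noise(n_channels, p_open=0.5):
        return math.sqrt((1 - p_open) / (n_channels * p_open))

    if __name__ == "__main__":
        for n in (10000, 1000, 100, 10):
            print(n, round(relative_channel_noise(n), 3))
        # 10000 -> 0.01, 1000 -> 0.032, 100 -> 0.1, 10 -> 0.316:
        # a hundredfold drop in channel count makes the signal ~10x noisier.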

In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire. The brain’s smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 per second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes.

This fundamental compromise between information, energy and noise is not unique to biology. It applies to everything from optical-fiber communications to ham radios and computer chips. Transistors act as gatekeepers of electrical signals, just like ion channels do. For five decades engineers have shrunk transistors steadily, cramming more and more onto chips to produce ever faster computers. Transistors in the latest chips are 22 nanometers. At those sizes, it becomes very challenging to “dope” silicon uniformly (doping is the addition of small quantities of other elements to adjust a semiconductor’s properties). By the time they reach about 10 nanometers, transistors will be so small that the random presence or absence of a single atom of boron will cause them to behave unpredictably.

Engineers might circumvent the limitations of current transistors by going back to the drawing board and redesigning chips to use entirely new technologies. But evolution cannot start from scratch: it has to work within the scheme and with the parts that have existed for half a billion years, explains Heinrich Reichert, a developmental neurobiologist at the University of Basel in Switzerland—like building a battleship with modified airplane parts.

Moreover, there is another reason to doubt that a major evolutionary leap could lead to smarter brains. Biology may have had a wide range of options when neurons first evolved, but 600 million years later a peculiar thing has happened. The brains of the honeybee, the octopus, the crow and intelligent mammals, Roth points out, look nothing alike at first glance. But if you look at the circuits that underlie tasks such as vision, smell, navigation and episodic memory of event sequences, “very astonishingly they all have absolutely the same basic arrangement.” Such evolutionary convergence usually suggests that a certain anatomical or physiological solution has reached maturity so that there may be little room left for improvement.

Perhaps, then, life has arrived at an optimal neural blueprint. That blueprint is wired up through a step-by-step choreography in which cells in the growing embryo interact through signaling molecules and physical nudging, and it is evolutionarily entrenched.

Bees Do It

So have humans reached the physical limits of how complex our brain can be, given the building blocks that are available to us? Laughlin doubts that there is any hard limit on brain function the way there is one on the speed of light. “It’s more likely you just have a law of diminishing returns,” he says. “It becomes less and less worthwhile the more you invest in it.” Our brain can pack in only so many neurons; our neurons can establish only so many connections among themselves; and those connections can carry only so many electrical impulses per second. Moreover, if our body and brain got much bigger, there would be costs in terms of energy consumption, dissipation of heat and the sheer time it takes for neural impulses to travel from one part of the brain to another.

The human mind, however, may have better ways of expanding without the need for further biological evolution. After all, honeybees and other social insects do it: acting in concert with their hive sisters, they form a collective entity that is smarter than the sum of its parts. Through social interaction we, too, have learned to pool our intelligence with others.

And then there is technology. For millennia written language has enabled us to store information outside our body, beyond the capacity of our brain to memorize. One could argue that the Internet is the ultimate consequence of this trend toward outward expansion of intelligence beyond our body. In a sense, it could be true, as some say, that the Internet makes you stupid: collective human intelligence—culture and computers—may have reduced the impetus for evolving greater individual smarts.

Douglas Fox is a freelance writer living in San Francisco and a frequent contributor to New Scientist. The Limits of Intelligence, Scientific American, July 2011.

See also:
Quantum Approaches to Consciousness, Stanford Encyclopedia of Philosophy
Dunbar’s Number: Why We Can’t Have More Than 150 Friends

May 29th, Sun

Anthropocene: “the recent age of man”. Mapping Human Influence on Planet Earth


"Humans have a tendency to fall prey to the illusion that their economy is at the very center of the universe, forgetting that the biosphere is what ultimately sustains all systems, both man-made and natural. In this sense, ‘environmental issues’ are not about saving the planet—it will always survive and evolve with new combinations of atom—but about the prosperous development of our own species.”

Carl Folke is the science director of the Stockholm Resilience Centre at Stockholm University. Starting Over, SEED, April 22, 2011.

Science is recognising humans as a geological force to be reckoned with.

"The here and now are defined by astronomy and geology. Astronomy takes care of the here: a planet orbiting a yellow star embedded in one of the spiral arms of the Milky Way, a galaxy that is itself part of the Virgo supercluster, one of millions of similarly vast entities dotted through the sky. Geology deals with the now: the 10,000-year-old Holocene epoch, a peculiarly stable and clement part of the Quaternary period, a time distinguished by regular shifts into and out of ice ages. The Quaternary forms part of the 65m-year Cenozoic era, distinguished by the opening of the North Atlantic, the rise of the Himalayas, and the widespread presence of mammals and flowering plants. This era in turn marks the most recent part of the Phanerozoic aeon, the 540m-year chunk of the Earth’s history wherein rocks with fossils of complex organisms can be found. The regularity of celestial clockwork and the solid probity of rock give these co-ordinates a reassuring constancy.


Now there is a movement afoot to change humanity’s co-ordinates. In 2000 Paul Crutzen, an eminent atmospheric chemist, realised he no longer believed he was living in the Holocene. He was living in some other age, one shaped primarily by people. From their trawlers scraping the floors of the seas to their dams impounding sediment by the gigatonne, from their stripping of forests to their irrigation of farms, from their mile-deep mines to their melting of glaciers, humans were bringing about an age of planetary change. With a colleague, Eugene Stoermer, Dr Crutzen suggested this age be called the Anthropocene—“the recent age of man”. (…)

The term “paradigm shift” is bandied around with promiscuous ease. But for the natural sciences to make human activity central to its conception of the world, rather than a distraction, would mark such a shift for real. For centuries, science has progressed by making people peripheral. In the 16th century Nicolaus Copernicus moved the Earth from its privileged position at the centre of the universe. In the 18th James Hutton opened up depths of geological time that dwarf the narrow now. In the 19th Charles Darwin fitted humans onto a single twig of the evolving tree of life. As Simon Lewis, an ecologist at the University of Leeds, points out, embracing the Anthropocene as an idea means reversing this trend. It means treating humans not as insignificant observers of the natural world but as central to its workings, elemental in their force.

Sous la plage, les pavés (under the beach, the paving stones)

The most common way of distinguishing periods of geological time is by means of the fossils they contain. On this basis picking out the Anthropocene in the rocks of days to come will be pretty easy. Cities will make particularly distinctive fossils. A city on a fast-sinking river delta (and fast-sinking deltas, undermined by the pumping of groundwater and starved of sediment by dams upstream, are common Anthropocene environments) could spend millions of years buried and still, when eventually uncovered, reveal through its crushed structures and weird mixtures of materials that it is unlike anything else in the geological record.

The fossils of living creatures will be distinctive, too. Geologists define periods through assemblages of fossil life reliably found together. One of the characteristic markers of the Anthropocene will be the widespread remains of organisms that humans use, or that have adapted to life in a human-dominated world. According to studies by Erle Ellis, an ecologist at the University of Maryland, Baltimore County, the vast majority of ecosystems on the planet now reflect the presence of people. There are, for instance, more trees on farms than in wild forests. And these anthropogenic biomes are spread about the planet in a way that the ecological arrangements of the prehuman world were not. The fossil record of the Anthropocene will thus show a planetary ecosystem homogenised through domestication.

More sinisterly, there are the fossils that will not be found. Although it is not yet inevitable, scientists warn that if current trends of habitat loss continue, exacerbated by the effects of climate change, there could before long be a dramatic wave of extinctions.

All these things would show future geologists that humans had been present. But though they might be diagnostic of the time in which humans lived, they would not necessarily show that those humans shaped their time in the way that people pushing the idea of the Anthropocene want to argue. The strong claim of those announcing the recent dawning of the age of man is that humans are not just spreading over the planet, but are changing the way it works.

Such workings are the province of Earth-system science, which sees the planet not just as a set of places, or as the subject of a history, but also as a system of forces, flows and feedbacks that act upon each other. This system can behave in distinctive and counterintuitive ways, including sometimes flipping suddenly from one state to another. To an Earth-system scientist the difference between the Quaternary period (which includes the Holocene) and the Neogene, which came before it, is not just what was living where, or what the sea level was; it is that in the Neogene the climate stayed stable whereas in the Quaternary it swung in and out of a series of ice ages. The Earth worked differently in the two periods.

The clearest evidence for the system working differently in the Anthropocene comes from the recycling systems on which life depends for various crucial elements. In the past couple of centuries people have released quantities of fossil carbon that the planet took hundreds of millions of years to store away. This has given them a commanding role in the planet’s carbon cycle.

Although the natural fluxes of carbon dioxide into and out of the atmosphere are still more than ten times larger than the amount that humans put in every year by burning fossil fuels, the human addition matters disproportionately because it unbalances those natural flows. As Mr Micawber wisely pointed out, a small change in income can, in the absence of a compensating change in outlays, have a disastrous effect. The result of putting more carbon into the atmosphere than can be taken out of it is a warmer climate, a melting Arctic, higher sea levels, improvements in the photosynthetic efficiency of many plants, an intensification of the hydrologic cycle of evaporation and precipitation, and new ocean chemistry.

All of these have knock-on effects both on people and on the processes of the planet. More rain means more weathering of mountains. More efficient photosynthesis means less evaporation from croplands. And the changes in ocean chemistry are the sort of thing that can be expected to have a direct effect on the geological record if carbon levels rise far enough.

At a recent meeting of the Geological Society of London that was devoted to thinking about the Anthropocene and its geological record, Toby Tyrrell of the University of Southampton pointed out that pale carbonate sediments—limestones, chalks and the like—cannot be laid down below what is called a “carbonate compensation depth”. And changes in chemistry brought about by the fossil-fuel carbon now accumulating in the ocean will raise the carbonate compensation depth, rather as a warmer atmosphere raises the snowline on mountains. Some ocean floors which are shallow enough for carbonates to precipitate out as sediment in current conditions will be out of the game when the compensation depth has risen, like ski resorts too low on a warming alp. New carbonates will no longer be laid down. Old ones will dissolve. This change in patterns of deep-ocean sedimentation will result in a curious, dark band of carbonate-free rock—rather like that which is seen in sediments from the Palaeocene-Eocene thermal maximum, an episode of severe greenhouse warming brought on by the release of pent-up carbon 56m years ago.

The fix is in

No Dickensian insights are necessary to appreciate the scale of human intervention in the nitrogen cycle. One crucial part of this cycle—the fixing of pure nitrogen from the atmosphere into useful nitrogen-containing chemicals—depends more or less entirely on living things (lightning helps a bit). And the living things doing most of that work are now people (see chart). By adding industrial clout to the efforts of the microbes that used to do the job single-handed, humans have increased the annual amount of nitrogen fixed on land by more than 150%. Some of this is accidental. Burning fossil fuels tends to oxidise nitrogen at the same time. The majority is done on purpose, mostly to make fertilisers. This has a variety of unwholesome consequences, most importantly the increasing number of coastal “dead zones” caused by algal blooms feeding on fertiliser-rich run-off waters.



Industrial nitrogen’s greatest environmental impact, though, is to increase the number of people. Although nitrogen fixation is not just a gift of life—it has been estimated that 100m people were killed by explosives made with industrially fixed nitrogen in the 20th century’s wars—its net effect has been to allow a huge growth in population. About 40% of the nitrogen in the protein that humans eat today got into that food by way of artificial fertiliser. There would be nowhere near as many people doing all sorts of other things to the planet if humans had not sped the nitrogen cycle up.

It is also worth noting that unlike many of humanity’s other effects on the planet, the remaking of the nitrogen cycle was deliberate. In the late 19th century scientists diagnosed a shortage of nitrogen as a planet-wide problem. Knowing that natural processes would not improve the supply, they invented an artificial one, the Haber process, that could make up the difference. It was, says Mark Sutton of the Centre for Ecology and Hydrology in Edinburgh, the first serious human attempt at geoengineering the planet to bring about a desired goal. The scale of its success outstripped the imaginings of its instigators. So did the scale of its unintended consequences.

For many of those promoting the idea of the Anthropocene, further geoengineering may now be in order, this time on the carbon front. Left to themselves, carbon-dioxide levels in the atmosphere are expected to remain high for 1,000 years—more, if emissions continue to go up through this century. It is increasingly common to hear climate scientists arguing that this means things should not be left to themselves—that the goal of the 21st century should be not just to stop the amount of carbon in the atmosphere increasing, but to start actively decreasing it. This might be done in part by growing forests (see article) and enriching soils, but it might also need more high-tech interventions, such as burning newly grown plant matter in power stations and pumping the resulting carbon dioxide into aquifers below the surface, or scrubbing the air with newly contrived chemical-engineering plants, or intervening in ocean chemistry in ways that would increase the sea’s appetite for the air’s carbon. (…)

The risk is that the further the Earth system gets from the stable conditions of the Holocene, the more likely it is to slip into a whole new state and change itself yet further.

The Earth’s history shows that the planet can indeed tip from one state to another, amplifying the sometimes modest changes which trigger the transition. The nightmare would be a flip to some permanently altered state much further from the Holocene than things are today: a hotter world with much less productive oceans, for example. Such things cannot be ruled out. On the other hand, the invocation of poorly defined tipping points is a well worn rhetorical trick for stirring the fears of people unperturbed by current, relatively modest, changes.

In general, the goal of staying at or returning close to Holocene conditions seems judicious. It remains to be seen if it is practical. The Holocene never supported a civilisation of 10 billion reasonably rich people, as the Anthropocene must seek to do, and there is no proof that such a population can fit into a planetary pot so circumscribed. So it may be that a “good Anthropocene”, stable and productive for humans and other species they rely on, is one in which some aspects of the Earth system’s behaviour are lastingly changed. For example, the Holocene would, without human intervention, have eventually come to an end in a new ice age. Keeping the Anthropocene free of ice ages will probably strike most people as a good idea.

Dreams of a smart planet

That is an extreme example, though. No new ice age is due for some millennia to come. Nevertheless, to see the Anthropocene as a blip that can be minimised, and from which the planet, and its people, can simply revert to the status quo, may be to underestimate the sheer scale of what is going on.

Take energy. At the moment the amount of energy people use is part of what makes the Anthropocene problematic, because of the carbon dioxide given off. That problem will not be solved soon enough to avert significant climate change unless the Earth system is a lot less prone to climate change than most scientists think. But that does not mean it will not be solved at all. And some of the zero-carbon energy systems that solve it—continent-scale electric grids distributing solar energy collected in deserts, perhaps, or advanced nuclear power of some sort—could, in time, be scaled up to provide much more energy than today’s power systems do. As much as 100 clean terawatts, compared to today’s dirty 15 TW, is not inconceivable for the 22nd century. That would mean humanity was producing roughly as much useful energy as all the world’s photosynthesis combined.

In a fascinating recent book, “Revolutions that Made the Earth”, Timothy Lenton and Andrew Watson, Earth-system scientists at the universities of Exeter and East Anglia respectively, argue that large changes in the amount of energy available to the biosphere have, in the past, always marked large transitions in the way the world works. They have a particular interest in the jumps in the level of atmospheric oxygen seen about 2.4 billion years ago and 600m years ago. Because oxygen is a particularly good way of getting energy out of organic matter (if it weren’t, there would be no point in breathing) these shifts increased sharply the amount of energy available to the Earth’s living things. That may well be why both of those jumps seem to be associated with subsequent evolutionary leaps—the advent of complex cells, in the first place, and of large animals, in the second. Though the details of those links are hazy, there is no doubt that in their aftermath the rules by which the Earth system operated had changed.

The growing availability of solar or nuclear energy over the coming centuries could mark the greatest new energy resource since the second of those planetary oxidations, 600m years ago—a change in the same class as the greatest the Earth system has ever seen. Dr Lenton (who is also one of the creators of the planetary-boundaries concept) and Dr Watson suggest that energy might be used to change the hydrologic cycle with massive desalination equipment, or to speed up the carbon cycle by drawing down atmospheric carbon dioxide, or to drive new recycling systems devoted to tin and copper and the many other metals as vital to industrial life as carbon and nitrogen are to living tissue. Better to embrace the Anthropocene’s potential as a revolution in the way the Earth system works, they argue, than to try to retreat onto a low-impact path that runs the risk of global immiseration.

Such a choice is possible because of the most fundamental change in Earth history that the Anthropocene marks: the emergence of a form of intelligence that allows new ways of being to be imagined and, through co-operation and innovation, to be achieved. The lessons of science, from Copernicus to Darwin, encourage people to dismiss such special pleading. So do all manner of cultural warnings, from the hubris around which Greek tragedies are built to the lamentation of King David’s preacher: “Vanity of vanities, all is vanity…the Earth abideth for ever…and there is no new thing under the sun.” But the lamentation of vanity can be false modesty. On a planetary scale, intelligence is something genuinely new and powerful. Through the domestication of plants and animals intelligence has remade the living environment. Through industry it has disrupted the key biogeochemical cycles. For good or ill, it will do yet more.

It may seem nonsense to think of the (probably sceptical) intelligence with which you interpret these words as something on a par with plate tectonics or photosynthesis. But dam by dam, mine by mine, farm by farm and city by city it is remaking the Earth before your eyes.”

A man-made world, The Economist, May 26th 2011.

Anthropocene Cartography - Mapping Human Influence on Planet Earth 


Western Eurasian Networks | Cities, roads, railways, transmission lines and submarine cables.

"This is the age of humans.

At least, that’s the argument a number of scientists and scholars are making. They say that the impact of humans on the earth since the early 19th century has been so great, and so irreversible, that it has created a new era similar to the Pleistocene or Holocene. Nobel Prize winner Paul J. Crutzen even proposed the name Anthropocene, and it’s begun to catch on.

Communicating this idea to the public is one of the goals of Globaïa, an educational organization that specializes in creating visuals to explain environmental issues. In a recent project, they mapped population centers, transportation routes and energy transmission lines. (…)

We know that humans have over the centuries become a driving force on our planet. We have been, for the last thousand years or so, the main geomorphic agent on Earth. It might be hard to believe but, nowadays, human activities shift about ten times as much material on the continents’ surface as all geological processes combined. Through our technologies and extensive land use, we have become a land-shaping force of nature, similar to rivers, rain, wind and glaciers.

Furthermore, over the last 60 years (since the end of WWII), many major human activities have been sharply accelerating in pace and intensity. Not only population trends and atmospheric CO2 but also water use, damming of rivers, deforestation, fertilizer consumption, to name a few. The period is called the “great acceleration” and today’s environmental problems are somehow linked to this rapid global increase of population and consumption and its impacts on the Earth System. (…)

Mapping the extent of our infrastructures and the energy flows of our activities is, I believe, a good starting point to increase awareness of the peculiarities of the present era. I wish these images, along with other tools created by many scientists and NGOs, could contribute to enhancing mutual understanding and creating collective solutions. For we all share the same tiny, pale blue dot. (…)

Anthropocene Mapping from Globaïa.

Q: Your maps include cities, transportation paths and various transmission lines of both power and information. Why do you feel these are valid ways of examining the impact of humans on the earth?

There are many ways to map our impacts on planet Earth. We can map croplands and pasture lands, as well as anthropogenic biomes (the so-called “anthromes”). My goal was to create something new where we could essentially see the main channels through which human exchanges (transport, energy, resources, information) are occurring. Roads and railways are high-impact human features for obvious reasons. Pipelines and transmission lines are feeding our global civilization, for better or for worse. Submarine cables are physically linking continents together and contributing to this “age of information.” I could have added telephone lines, satellites, small roads, mines, dams and so on — but the point was not to create a map with overly saturated areas either. (…)

Q: Can you discuss the role of the human in the ecosystem, and its physical footprint on the earth?

I was referring to the Anthroposphere as the human layer of the Earth System. The biosphere is made out of living matter. Together with the atmosphere, the lithosphere (including the asthenosphere) and the hydrosphere (including the cryosphere), this set of concentric spheres is creating the ecosphere — our world, the Earth. It is quite an old world where many dramatic events took place and where billions of innovations happened through evolution. It is a world fed by our mighty Sun. It is a world where humans appeared only recently. Now, indeed, our species, with its 7 billion people, is still growing inside it, converting ever more wilderness areas into human-influenced landscapes. This world is however finite, unique and fragile. Now is a good time to start thinking of it this way. I believe we are still, in our heads, living in a pre-Copernican world. It’s time to upgrade our worldview.”

— Felix D. Pharand, Mapping the Age of Humans, The Atlantic Cities, Oct 27, 2011

Welcome to the Anthropocene



A 3-minute journey through the last 250 years of our history, from the start of the Industrial Revolution to the Rio+20 Summit. The film charts the growth of humanity into a global force on an equivalent scale to major geological processes. The film was commissioned by the Planet Under Pressure conference, London 26-29 March, a major international conference focusing on solutions. planetunderpressure2012.net.

HOME documentary


"Internationally renowned photographer Yann Arthus-Bertrand makes his feature directorial debut with this environmentally conscious documentary produced by Luc Besson, and narrated by Glenn Close. Shot in 54 countries and 120 locations over 217 days, Home presents the many wonders of planet Earth from an entirely aerial perspective. As such, we are afforded the unique opportunity to witness our changing environment from an entirely new vantage point.

In our 200,000 years on Earth, humanity has hopelessly upset Mother Nature’s delicate balance. Some experts claim that we have less than ten years to change our patterns of consumption and reverse the trend before the damage is irreversible. Produced to inspire action and encourage thoughtful debate, Home poses the prospect that unless we act quickly, we risk losing the only home we may ever have.”

HOME a film by Yann Arthus-Bertrand, 2009.

See also:

A Cartography of the Anthropocene, Globaïa
The Age of Anthropocene: Should We Worry? - Imagine a world where cognition arises from techno-human networks rather than the Cartesian individual - the Cognocene, The New York Times debate, May 2011
☞ Adelheid Fisher, A Home Before the End of the World
☞ Andrew C. Revkin, Who Made This Mess of Planet Earth?, The New York Times, July 15, 2011
☞ Daniel T. Willingham, Trust Me, I’m a Scientist, Scientific American, May 5, 2011
Living Planet Report, WWF
It Took Earth Ten Million Years to Recover from Greatest Mass Extinction of all time, ScienceDaily, May 27, 2012
Earth tag on Lapidarium notes

Apr 20th, Wed

The Psychology of Violence - a fascinating look at a violent act and a modern rethink of the psychology of shame and honour in preventing it


Thomas Jones, King’s Colledge To Wit, (the famous duel between the Duke of Wellington and the Earl of Winchilsea), 1829

Mercutio: Will you pluck your dagger from his pilcher? Make haste, lest mine be about you ere it be out.
Tybalt: I am for you.
Romeo: Gentle Mercutio, put thy weapon up.
Mercutio: Come, sir, your passado!

— (W. Shakespeare, Romeo and Juliet)

James Gilligan: “Violence itself is a form of communication, it’s a way of sending a message and it does that through symbolic means through damaging the body. But if people can express themselves and communicate verbally they don’t need violence and they are much less likely to use their fists or weapons as their means of communication. They are much more likely to use words. I’m saying this on the basis of clinical experience, working with violent people. (…)

I could point out there’s as much violence in the greatest literature in history, the Greek tragedies, Shakespeare’s tragedies and so on, as there is in popular entertainment. The difference is in the great literature violence is depicted realistically for what it really is, namely a tragedy. It’s tragic, it’s not entertainment, it’s not fun, it’s not exciting, it’s a tragedy. (…)

Mercutio: I am hurt.
Romeo: The hurt cannot be much.
Mercutio: No, no, ’tis not so deep as a well nor so wide as a church door, but ’tis enough: ’twill serve. A plague o’ both your houses.

— (W. Shakespeare, Romeo and Juliet)

Q: When Shakespeare wrote the tragedy Romeo and Juliet people were at each others’ throats, murder rates were at the highest in Europe’s history and murder took on a different meaning then. For men it was a case of kill or be killed in order to save face.

Pieter Spierenburg: It was basically seen so differently because of the idea of personal honour—specifically male honour—which depended on being prepared for violence.

JG: For example, in virtually every language the words for masculinity are the same as the words for courage: in Greek, andreia; in Latin, vir is the word that means man, but it also means soldier. To be a man is to be a soldier, and vir is related to the Latin word for courage, virtus, which is the root of our word virtue; in that warlike culture courage was the prime virtue.

[From Romeo and Juliet]
Benvolio: Oh Romeo, Romeo, brave Mercutio’s dead.

PS: Murder which occurred when the conflict was about people’s reputations, that was easily understandable; the people around it, or the community regarded it as something that could happen. The authorities—and these were basically urban patricians who would be enmeshed in conflicts themselves—they understood especially that revenge; revenge was often officially legal and the first homicides in a conflict, an honourable conflict, would be at least treated leniently or punished with a fine or something.

Q: And killings that were carried out in defence of honour, were those killings seen as murder as we would conceptualise it?

PS: They would usually be regarded as honourable killings, so at least excusable as something that perhaps should not have happened. But they could easily happen when two people had a quarrel and a fight resulting from that quarrel. Murder in order to rob someone for material gain, that was viewed as dishonourable.

Q: What kinds of punishments were there for these two kinds of murder—an honourable one and a dishonourable one—were they very different?

PS: Yes, they were very different. Originally around 1300 the regular punishment for an honourable killing would be a fine or perhaps a banishment, whereas punishment for a treacherous murder would be execution.

Q: Was it a period do you think where the value of the human life was less than now?

PS: I think they still placed value on life, but of course it was also a period when all people believed in an afterlife. In terms of worldly views, though, they would indeed value honour, personal honour; they would value it more than life. Not only in the European Middle Ages but in many societies where you have open violence and ideas of male honour, your honour is worth more than your life. If you would have to choose …

[From Romeo and Juliet]
Lady Capulet: I beg for justice, which thou, Prince, must give. Romeo slew Tybalt; Romeo must not live.
Prince: Romeo slew him, he slew Mercutio. Who now the price of his dear blood doth owe…?

Q: In this period of history that you’re talking about the conceptualisation of honour was firmly associated with the body, you suggest. What does that mean?

PS: Yes, the body or the physical person—it was anthropologists who first discovered the vestiges of that in Mediterranean societies; it was tied up with a kind of physical imagery, especially for men, so that strong men are associated with fierce animals and the like. Or that certain parts of the body symbolically play a big role in honour—your testicles, or your nose, because your nose is the first thing that goes into the world, it goes in front of you as it were.

Q: We’ve all heard the phrases ‘kiss and make up’ or ‘sealed with a kiss’; in fact these come from mediaeval legal ceremonies, elaborate affairs for two warring families to end a murderous feud.

Pieter Spierenburg: There existed rituals of reconciliation, and homicide was not really criminalised, but there was also the possibility of making peace between families who had been in a conflict, or of reconciling after a single homicide, so as to prevent further violence from occurring. The two parties, the killer and the closest relative of the victim, would kiss each other on the mouth, and then other family members would also kiss each other on the mouth.

Q: These are parties to a vendetta?

Pieter Spierenburg: Yes, and that would seal the reconciliation. But it could also be done with a handshake. The reconciliation had clearly religious overtones and it was often done in a church or in a monastery. The family of the perpetrator would not only pay money but also pay for masses to be said for the killed person, which again was a very material thing, because it benefited his soul: they believed that he would be out of purgatory and into heaven more quickly if these masses were said, or if he was made posthumously a monk. (…)

Q: So all of this shifts in the 16th century with an internal process that you call the spiritualisation of honour. Now what do you mean by that?

PS: Basically it means that honour moves away from being based on the body, tied to the body, based on preparedness to defend yourself and your dependants, and that you get other sources of honour: for example economic success, or, even in a later period, what they called sexual purity, being a good husband, a good head of the family—things that people take pride in become a source of honour, and a man can be honourable without being violent.

Q: What was it that triggered that shift in the internal landscape of Europe that changed the way people thought about and conducted violence?

PS: Basically it’s triggered by a broad social change which includes processes of state formation, the development of modern states, the establishment of monopolies of violence or monopolies of force, which means that stronger rulers are able to pacify a larger territory and establish a more or less stable government there. You get these monopolies of force with stronger rulers who establish courts, and then they assemble elite groups, aristocrats, around them at court, and people at court are obliged to behave in a more civilised way, which includes a renunciation of personal violence. So there are prestigious people, elites, who have a peaceful lifestyle, and that becomes a kind of cultural model that is eventually, within a few centuries, also imitated by broader groups in society.

James Gilligan: When people experience their moral universe as going between the polar opposites of shame versus honour, or we could also say shame versus pride, they are more likely to engage in serious violence.

The more people have a capacity for feelings of guilt and feelings of remorse after hurting other people, the less likely they are to kill others. I think in the history of Europe what one can see is a gradual increase in moral development from the shame/honour code to the guilt/innocence code.

PS: People no longer accepted that killings would be a part of life, as it were. In the early modern period it was only certain groups, mostly lower-class people of the time, who still accepted that fights, like knife fights or whatever, could be honourable, and that if someone died in a knife fight, if this was an accident, that of course was regrettable, but it could happen.

Q: In his history of murder Pieter Spierenburg has tracked a big drop in homicides across Europe from the 17th to the 19th centuries as a result. But murder hasn’t left us; people continue to kill each other. And there are other constants across the centuries. Young men have always committed most murders, alcohol has been in the mix too and so has that ever-present ingredient—honour.

JG: Hitler came to power on the campaign promise to undo what he called the shame of Versailles, meaning the Versailles Peace Treaty at the end of World War 1, which he felt had dishonoured Germany, or subjected Germany to national dishonour. And of course his solution for that, the way to restore Germany’s honour and undo the shame, was to do almost limitless violence. Even if one goes back to the first recorded wars in western history, the wars were fought because the side that became the aggressor felt it had been shamed and humiliated by the group that they were attacking.

In the Iliad, Menelaus, the Greek king, felt shamed and humiliated because a Trojan prince by the name of Paris ran off with his wife Helen (who became Helen of Troy), so the Greek army, Menelaus’s friends and family and partners, started a war against Troy and burned the city down and killed all the men and took the women into slavery and so on.

Q: And so from the honour of the collective, or national honour, to the deeply personal.

[From Rumble Fish]
Midget: Hey, Rusty James, Biff Wilcox is looking for you Rusty James.
Rusty James: I’m not hiding.
Midget: He says he’s going to kill you Rusty James.
Rusty James: Saying ain’t doing. Shit! So what’s he doing about this, what’s he doing about killing me?
Midget: The deal is man you’re supposed to meet him tonight under the arches behind the pet store at about 10 o’clock.
Rusty James: He’s comin’ alone?
BJ Jackson: I wouldn’t count on it, man.
Rusty James: Well if he’s bringin’ friends then I’m bringin’ friends, man.

JG: The emotional cause that I have found just universal among people who commit serious violence, lethal violence is the phenomenon of feeling overwhelmed by feelings of shame and humiliation. I’ve worked with the most violent people our society produces who tend to wind up in our prisons. I’ve been astonished by how almost always I get the same answer when I ask the question—why did you assault or even kill that person? And the answer I would get back in one set of words or another but almost always meaning exactly the same thing would be, ‘Because he disrespected me,’ or ‘He disrespected my mother,’ or my wife, my girlfriend, whatever.

They use that word ‘disrespect’ so often that they’ve abbreviated it into the slang term ‘he dissed me’, and it struck me that any time a word is used so often that people start abbreviating it, it tells you something about how central it is in their moral and emotional vocabulary. (…)

Nazi Germany. After Hitler came to power millions of German citizens who had never been violent before became murderers. The widespread potential for homicidal behaviour became very clear after the Nazis came to power. To me what that shows empirically is that the potential for homicide is perhaps not totally universal. Certainly there are some people who would rather go to their own death than to kill somebody else, but certainly the potential for homicide is much, much wider than we customarily think. I say that not to be cynical about human nature, but simply because I think it’s important for us to be aware of that, so that we will not inadvertently engage in behaviours that will bring out the potential for violence.

When people feel ashamed or inferior to other people, they feel first of all a threat to the self. We frequently call shame a feeling, but it’s actually the absence of a feeling, namely the feeling of self-love or pride; and yet that absence of a feeling is one of the most painful experiences that human beings can undergo. In order to wipe out feelings of shame, people can become desperate enough to kill others or to kill themselves. So what is important here is to find ways not only to reduce the intensity and frequency with which people are shamed or feel shamed, but also to increase their capacity to tolerate feelings of shame—you know, without acting out violently as a means of reducing those feelings.

Q: Our traditional response to a violent or murderous crime has been partly to remove the problem by imprisoning a person and removing them from society but this is also a form of punishment, in other words a form of shaming the individual. Is that an effective way of responding to a violent action?

JG: Well let me say first of all I do believe that if anybody is going around killing or committing other forms of violence, going around raping or whatever, that we do have to lock them up at least for as long as they are likely to behave in that way in the future. We do have to protect the public. But I would say that’s a tragic necessity and it is not cost-free and you put your finger on one of the problems, that it can shame a person even further. However, I think that the situation can be understood slightly differently if we understand that it’s vitally important how we treat people after we’ve locked them up. We don’t need to shame them.

Q: The argument would be made that if someone has committed a violent crime, they should serve a term of punishment for that crime.

JG: Well, my experience (and, as I said, there’s a lot of evidence to support this) is that punishment, far from inhibiting violence, is actually the most powerful stimulant of violence that we’ve discovered yet. For example, the prisoners that I have known, the most violent of them had already been punished as seriously as it is possible to punish somebody without actually killing them. I’m talking about the degree of child abuse that these men had suffered. The most violent prisoners were the survivors of their own attempted murder, usually at the hands of their own parents, or the survivors of the actual murders of their closest relatives.

Now if punishment would prevent or inhibit violence these should have been the most non-violent people on earth, instead they were the most violent. And I would say that the more we punish people in prisons, as opposed to simply restraining them, the more we stimulate violence, which is why prisons have always been called schools for crime.

Q: You’ve suggested based on this thinking that the best way to respond to and reduce violence is to see it as a symptom, and in that case that the disease is shame, is humiliation—how do you treat that disease?

JG: One of the most important things I found working with violent criminals in prisons is first of all to treat everybody with respect, regardless of who they are or what they’ve done. The second thing, though, is to provide the resources that people need in order to gain self-respect and self-esteem, or in other words pride and feelings of self-worth. When I was directing the mental health services for the Commonwealth of Massachusetts in the United States, we did a study to find out what program in the prisons had been most effective in preventing re-offending, or recidivism, after prisoners left the prison. And we found one program that had been 100% successful over a 25-year period, with not one of these individuals returning to prison because of a new crime.

And that program was the prisoners getting a college degree while in prison. The professors from Boston University had been donating their time to teach college credit courses and several hundred inmates had gotten a college degree while in prison, then left, went back into the community and did not return because of a new crime. I mean I think there are many reasons why that would have that effect but the most important I think emotionally is that education is one of the most direct ways by which people raise their level of self-esteem and feelings of self worth. When you gain knowledge and skills that you can respect in yourself and that other people respect you’re more likely to be treated with honour by other people and to have the feelings of pride or self worth that protect you against being overwhelmed by shame to the degree that stimulates violence.

Bandy Lee: We at one point felt that human nature was not malleable and that somehow the legacy of our ancestors left us with a nature that is inevitably violence prone and something that we cannot easily correct. The conception has been that there must be something fixed in the brain, that they must have been born with this condition, that it was either a genetic or neurological defect or some kind of faulty wiring that has caused individuals to become violent. What we’re finding is that a lot of the genetics and the neurobiology that even has a remote association to violence is actually shaped largely by environment.

Q: Some people are obviously more prone to violence because they have a personality disorder or neurological affliction that makes them impulsive, and that’s really a subject for another show, but the brain always sits in a social environment, and that was the focus of an innovative project called Resolve to Stop the Violence, led by Yale University psychiatrists Professor Bandy Lee and James Gilligan.

BL: Because the public health approach is to look at things from the very basic level of prevention. In fact we’re going very far upstream, in just the way that cleaning up the sewage system and personal hygiene habits took care of a lot of diseases over the course of a large part of the 19th century. We are finding that preventive measures are far more effective than trying to treat the problems after they occur, which is what a lot of physicians have done; even trying to prevent suicides or homicides immediately before they happen turns out to be very difficult to do. (…)

JG: One of the more interesting ones was a program that was designed to deconstruct and reconstruct what we call the male role belief system. That is the whole set of assumptions and beliefs and definitions, to which almost all men in our society are exposed in the course of growing up, as to how you define masculinity, what men have a right to expect from women, even what they have a right to expect from other men, and what they need to do to prove that they are men. (…)

BL: (…) So at this moment of fatal peril, instead of reacting violently in order to defend and re-affirm this hit man, they would give themselves a moment to take a breather and to engage in social ways. And having the experience of a pro-social way of interacting, of not having to fear one’s peers—and actually they had a mentor system whereby those who had been in the program for longer periods would act as mentors to those who were just coming into the program. They were able to teach newcomers that they didn’t have to act violently in order to be accepted, in order to be safe, and this was quite a surprise for those entering the program.

JG: What came out of this was their gaining an awareness that they had been making the assumption that the human world is divided into people who were superior and people who were inferior, and in that distinction men were supposed to be superior and women were supposed to be inferior. And not only that, a real man would be superior to other men.

Now this is a recipe for violence. But the moment they would fight against it, the individual would feel his masculinity was being challenged, and to defend his masculinity he would have to resort to violence. What was amazing to me was how quickly they realised and got the point, felt they had been brainwashed by society, and immediately wanted to start educating the new inmates who were coming in after them. We trained them to lead these groups themselves, you know, sort of like in Alcoholics Anonymous, where people who have the problem sometimes turn out to be the best therapists for others who have the problem.

Q: And what about recidivism rates, were those reduced?

JG: The level of re-committing a violent crime was 83% lower in this group than in the control group. The rate of in-house violence actually dropped to zero in the experimental group. And what we were especially interested in was that the reduction in violence continued, not at the 100% level but close to it, once they left the gaol. (…)

I think we need to educate our children and our adults that violence is trivialised when it’s treated simply as entertainment. I call that the pornography of violence. If violence is understood for what it really is, which is the deepest human tragedy, then I think people might become more sympathetic to supporting the changes in our culture that actually would succeed in reducing the amount of violence that we experience. I think that we’re all becoming much more sensitised to the importance of social and political and economic and cultural factors as factors that either stimulate violence or inhibit it and prevent it.”

Murder in mind, All In The Mind, ABC Radio National, 9 April 2011.

James Gilligan, Clinical Professor of Psychiatry, Adjunct Professor in the School of Law, and Collegiate Professor in the School of Arts and Science, New York University

Pieter Spierenburg, Professor of Historical Criminology, Erasmus University, The Netherlands. He has published on executions, prisons, violence, and the culture of early modern Europe.

Bandy Lee, Assistant Clinical Professor of Psychiatry, Yale University, USA

See also:

☞ Charles K. Bellinger, Theories on the Psychology of Violence: An Address to the Association of Muslim Social Scientists, University of Texas at Arlington
Scott Atran on Why War Is Never Really Rational, Lapidarium
The Philosophy of War, Internet Encyclopedia of Philosophy
Steven Pinker on the myth of violence, TED video, 2007
☞ Pauline Grosjean, A History of Violence: The Culture of Honor as a Determinant of Homicide in the US South, The University of New South Wales, August 25, 2011
Emiliano Salinas: A civil response to violence, TED.com, Nov 2010 (video)
Colman McCarthy, Teaching Peace, Hobart and William Smith Colleges, August 30, 2011
Steven Pinker on the History and decline of Violence
Violence tag on Lapidarium notes

Apr
15th
Fri
permalink

Evolution of Language tested with genetic analysis


                                   Human Migration, National Geographic

Evolutionary Babel was in southern Africa

"Where did humanity utter its first words? A new linguistic analysis attempts to rewrite the story of Babel by borrowing from the methods of genetic analysis – and finds that modern language originated in sub-Saharan Africa and spread across the world with migrating human populations.

Quentin Atkinson of the University of Auckland in New Zealand designed a computer program to analyse the diversity of 504 languages. Specifically, the program focused on phonemes – the sounds that make up words, like “c”, “a”, and “tch” in the word “catch”.

Earlier research has shown that the more people speak a language, the higher its phonemic diversity. Large populations tend to draw on a more varied jumble of consonants, vowels and tones than smaller ones.

Africa turned out to have the greatest phonemic diversity – it is the only place in the world where languages incorporate clicks of the tongue into their vocabularies, for instance – while South America and Oceania have the smallest. Remarkably, this echoes genetic analyses showing that African populations have higher genetic diversity than European, Asian and American populations.

This is generally attributed to the "serial founder" effect: it’s thought that humans first lived in a large and genetically diverse population in Africa, from which smaller groups broke off and migrated to what is now Europe. Because each break-off group carried only a subset of the genetic diversity of its parent group, diversity fell with each successive migration; the history of these migrations was, in effect, written in the migrants’ genes.

Dr. Mark Pagel sees language as central to human expansion across the globe.

“Language was our secret weapon, and as soon as we got language we became a really dangerous species,” he said.

— Nicholas Wade, Phonetic Clues Hint Language Is Africa-Born, NYT, Apr 14, 2011.

Mother language

Atkinson argues that the process was mirrored in languages: as smaller populations broke off and spread across the world, human language lost some of its phonemic diversity, and sounds that humans first spoke in the African Babel were left behind.

To test this, Atkinson compared the phoneme content of languages around the world and used this analysis to determine the most likely origin of all language. He found that sub-Saharan Africa was a far better fit for the origin of modern language than any other location. (…)

"It’s a compelling idea," says Sohini Ramachandran of Brown University in Providence, Rhode Island, who studies population genetics and human evolution. "Language is such an adaptive thing that it makes sense to have a single origin before the diaspora out of Africa. It’s also a nice confirmation of what we have seen in earlier genetic studies. The processes that shaped genetic variation of humans may also have shaped cultural traits.”

Ferris Jabr, Evolutionary Babel was in southern Africa, New Scientist, 14 April 2011. (Journal reference: Science, DOI: 10.1126/science.1199295)


             Out of Africa (Map source: The Mother of All Languages, WSJ.com, Apr 15, 2011)
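
A quick way to see why the “serial founder” effect described above predicts a cline in phonemic diversity is to simulate it. The sketch below is only an illustration with invented numbers, not Atkinson’s analysis (he regressed the phoneme inventories of real languages against their distance from candidate points of origin); it just shows how repeatedly carrying away a subset of a parent inventory makes diversity fall with every founder event.

```python
# A minimal sketch of the "serial founder" logic (illustrative numbers only,
# not Atkinson's analysis): each break-off group carries a random subset of
# its parent group's phoneme inventory, so diversity decays with every
# founder event, i.e. with distance from the origin.
import random

def founder_chain(initial_inventory=100, steps=10, retained_fraction=0.85, seed=1):
    """Return phoneme-inventory sizes along a chain of founder events."""
    rng = random.Random(seed)
    inventory = list(range(initial_inventory))   # stand-in phonemes 0..99
    sizes = [len(inventory)]
    for _ in range(steps):
        keep = max(1, int(len(inventory) * retained_fraction))
        inventory = rng.sample(inventory, keep)  # the migrating group takes a subset
        sizes.append(len(inventory))
    return sizes

if __name__ == "__main__":
    for step, size in enumerate(founder_chain()):
        print(f"founder events: {step:2d}   phoneme inventory size: {size}")
```

With the assumed retention rate the printed inventory shrinks monotonically with the number of founder events, which is the qualitative pattern the genetic and phonemic data are said to share.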

Language universality idea tested with biology method

(The study challenges the idea that the “language centres” of our brains are the sole driver of language)

A long-standing idea that human languages share universal features that are dictated by human brain structure has been cast into doubt.

A study reported in Nature has borrowed methods from evolutionary biology to trace the development of grammar in several language families.

The results suggest that features shared across language families evolved independently in each lineage.

The authors say cultural evolution, not the brain, drives language development.

At the heart of both studies is a method based on what are known as phylogenetic studies.

Lead author Michael Dunn, an evolutionary linguist at the Max Planck Institute for Psycholinguistics in the Netherlands, said the approach is akin to the study of pea plants by Gregor Mendel, which ultimately led to the idea of heritability of traits. (…)

“He inferred the existence of some kind of information transfer just from knowing family trees and observing variation, and that’s exactly the same thing we’re doing.”

Family trees

Modern phylogenetics studies look at variations in animals that are known to be related, and from those can work out when specific structures evolved.

For their studies, the team examined the characteristics of word order in four language families: Indo-European, Uto-Aztecan, Bantu and Austronesian.

They considered whether what we call prepositions occur before or after a noun (“in the boat” versus “the boat in”) and how the word order of subject and object work out in either case (“I put the dog in the boat” versus “I the dog put the canoe in”).

The method starts by making use of well-established linguistic data on words and grammar within these language families, and building “family trees” of those languages.

"Once we have those trees we look at distribution of these different word order features over the descendant languages, and build evolutionary models for what’s most likely to produce the diversity that we observe in the world," Dr Dunn said.

The models revealed that while different language structures in the family tree could be seen to evolve along the branches, just how and when they evolved depended on which branch they were on.

“We show that each of these language families evolves according to its own set of rules, not according to a universal set of rules,” Dr Dunn explained.

"That is inconsistent with the dominant ‘universality theories’ of grammar; it suggests rather that language is part of not a specialised module distinct from the rest of cognition, but more part of broad human cognitive skills.”

The paper asserts instead that “cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states”.

However, co-author and evolutionary biologist Russell Gray of the University of Auckland stressed that the team was not pitting biology against culture in a mutually exclusive way.

“We’re not saying that biology is irrelevant - of course it’s not,” Professor Gray told BBC News.

"But the clumsy argument about an innate structure of the human mind imposing these kind of ‘universals’ that we’ve seen in cognitive science for such a long time just isn’t tenable."

Steven Pinker, a cognitive scientist at Harvard University, called the work “an important and welcome study”.

However, Professor Pinker told BBC News that the finer details of the method need bearing out in order to more fully support their hypothesis that cultural boundaries drive the development of language more than biological limitations do.

“The [authors] suggest that the human mind has a tendency to generalise orderings across phrases of different types, which would not occur if the mind generated every phrase type with a unique and isolated rule.

"The tendency may be partial, and it may be elaborated in different ways in differently language families, but it needs an explanation in terms of the working of the mind of language speakers."

— Jason Palmer, Science and technology reporter, Language universality idea tested with biology method, BBC News, 14 April 2011.
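
For readers curious how “family trees plus word-order features” can be turned into a concrete comparison, here is a toy sketch. It is not Dunn and Gray’s method (they fitted Bayesian models of correlated evolution to well-established phylogenies); the six-language tree, the feature values and the parsimony shortcut below are all invented for illustration.

```python
# A toy stand-in for the phylogenetic logic described above. This is NOT the
# Bayesian correlated-evolution method used by Dunn et al.; the tree, the six
# invented "languages" and the Fitch-parsimony shortcut are illustrative
# assumptions only.

def fitch(tree, tip_states):
    """Bottom-up Fitch-style pass: return a candidate state set for every node."""
    node_sets = {}
    def visit(node):
        children = tree.get(node, [])
        if not children:                               # a tip: use the observed state
            node_sets[node] = {tip_states[node]}
            return node_sets[node]
        child_sets = [visit(child) for child in children]
        shared = set.intersection(*child_sets)
        node_sets[node] = shared if shared else set.union(*child_sets)
        return node_sets[node]
    visit("root")
    return node_sets

def changed_branches(tree, node_sets):
    """Nodes whose most-parsimonious state differs from their parent's state."""
    changed = set()
    def resolve(node, parent_state):
        candidates = node_sets[node]
        state = parent_state if parent_state in candidates else sorted(candidates)[0]
        if parent_state is not None and state != parent_state:
            changed.add(node)
        for child in tree.get(node, []):
            resolve(child, state)
    resolve("root", None)
    return changed

if __name__ == "__main__":
    # An invented six-language family tree (internal nodes n1..n3 are ancestors).
    tree = {"root": ["n1", "n2"],
            "n1": ["LangA", "LangB"],
            "n2": ["n3", "LangF"],
            "n3": ["LangC", "LangD", "LangE"]}
    adposition = {"LangA": "prep", "LangB": "prep", "LangC": "post",
                  "LangD": "post", "LangE": "prep", "LangF": "post"}
    verb_object = {"LangA": "VO", "LangB": "VO", "LangC": "OV",
                   "LangD": "OV", "LangE": "VO", "LangF": "OV"}
    adp_changes = changed_branches(tree, fitch(tree, adposition))
    vo_changes = changed_branches(tree, fitch(tree, verb_object))
    print("adposition changes on branches leading to:", sorted(adp_changes))
    print("verb-object changes on branches leading to:", sorted(vo_changes))
    # Branches where both features change together: an informal sign of
    # correlated (linked) evolution within this one family.
    print("co-occurring changes:", sorted(adp_changes & vo_changes))
```

The structural point is the same as in the study: ancestral states are inferred from a known tree, and one then asks whether changes in two word-order features tend to fall on the same branches within a family, instead of assuming one universal linkage that holds across all families.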

Evolution of Language Takes Unexpected Turn

"The findings “do not support simple ideas of the mind as a computer, with a language processor plugged in. They support much-more complex ideas of how language arises.” (…)

 One school of thought, pioneered by linguist Noam Chomsky, holds that language is a product of dedicated mechanisms in the human brain. These can be imagined as a series of switches, each corresponding to particular forms of grammar and syntax and structure.

Such a system would account for why, of the nearly infinite number of languages that are possible — imagine, for instance, a language in which verb conjugation changes randomly; it is possible — relatively few actually exist. Our brains have adapted to contain a limited, universal set of switches.

A limited set of linguistic universals is exactly what was described by the late, great comparative linguist Joseph Greenberg, who empirically tabulated features common to language. He made no claims as to neurological origin, but the essential claim overlapped with Chomsky’s: Language has universals.

If you speak a subject-verb-object language, one in which “I kick the ball,” then you likely use prepositions — “over the fence.” If you speak a subject-object-verb language, one in which “I the ball kicked,” then you almost certainly use postpositions — “the fence over.” And so on.

“What both these views predict is that languages should evolve according to the same set of rules,” said Dunn. “No matter what the language, no matter what the family, if there are two features of language that are somehow linked together structurally, they should be linked together the same way in all languages.”

That’s what Dunn, along with University of Auckland (New Zealand) computational linguist Russell Gray, set out to test.

Unlike earlier linguists, however, Dunn and Gray had access to powerful computational tools that, when set to work on sets of data, calculate the most likely relationships between the data. Such tools are well known in evolutionary biology, where they’re used to create trees of descent from genetic readings, but they can be applied to most anything that changes over time, including language.


In the new study, Dunn and Gray’s team created evolutionary trees for eight word-order features in humanity’s best-described language groups — Austronesian, Indo-European, Bantu and Uto-Aztecan. Together they contain more than one-third of humanity’s 7,000 languages, and span thousands of years. If there are universal trends, say Dunn and Gray, they should be visible, with each language family evolving along similar lines.

That’s not what they found.

“Each language family is evolving according to its own set of rules. Some were similar, but none were the same,” said Dunn. “There is much more diversity, in terms of evolutionary processes, than anybody ever expected.”

In one representative example of divergence (diagram above), both Austronesian and Indo-European languages that linked prepositions and object-verb structures (“over the fence, ball kicked”) tended to evolve preposition and verb-object structures (“over the fence, kicked ball”). That’s exactly what universalism would predict.

But when Austronesian and Indo-European languages both started from postposition, verb-object arrangements (“the fence over, kicked ball”), they ended up in different places. Austronesian tended towards preposition, verb-object (“over the fence, kicked ball”) but Indo-European tended towards postposition, object-verb (“the fence over, ball kicked.”)

Such differences might be eye-glazing to people unaccustomed to diagramming sentences, but the upshot is that the two language families took opposite trajectories. Many other comparisons followed suit. “The things specific to language families trumped any kind of universals we could look for,” said Dunn.

“We see that there isn’t any sort of rigid” progression of changes, said University of Reading (England) evolutionary linguist Mark Pagel, who wasn’t involved in the study. “There seems to be quite a lot of fluidity. That leads me to believe this isn’t something where you’re throwing a lot of parameter switches.”

Instead of a simple set of brain switches steering language evolution, cultural circumstance played a role. Changes were the product of chance, or perhaps fulfilled as-yet-unknown needs. For whatever reason, “the fence over, ball kicked” might have been especially useful to Indo-European speakers, but not Austronesians.

There is, however, still room for universals, said Pagel. After all, even if culture and circumstance shapes language evolution, it’s still working with a limited set of possibilities. Of the six possible combinations of subject, verb and object, for example, just two — “I kicked the ball” and “I the ball kicked” — are found in more than 90 percent of all languages, with Yoda-style “Kicked I the ball” exceedingly rare. People do seem to prefer some structures.

“What languages have in common is to be found at a much deeper level. They must emerge from more-general cognitive capacities,” said Dunn.

What those capacities may be is a new frontier for investigation. As for Dunn, his team next plans to conduct similar analyses on other features of language, searching for further evolutionary differences or those deeper levels of universality.”

“This can be applied to every level of language structure,” he said.

Brandon Keim, Evolution of Language Takes Unexpected Turn, Wired.com, April 14, 2011.

See also:

☞ Andis Kaulins, Principles of Historical Language Reconstruction, AABECIS, Feb 24, 2010.
Researchers Synthesize Evolution of Language
Evolution of Language Parallels Evolution of Species
Gut Bacteria, Language Analysis Solve Pacific Migration Mystery
Cultural Evolution Could Be Studied in Google Books Database
Human-Chimp Gene Comparison Hints at Roots of Language
Mark Changizi on how we read
Mark Changizi, The Topography Of Language, Science 2.0, Sep 17, 2009.
A brief history of writing
Evolved structure of language shows lineage-specific trends in word-order universals, Word-Order Research, Basic Vocabulary Database
The Tree of Life: Tangled Roots and Sexy Shoots. Tracing the genetic pathway from the first Eukaryotes to Homo sapiens.
The Genographic Project ☞ A Landmark Study of the Human Journey

Mar
31st
Thu
permalink

From Cave Paintings to the Internet ☞ Chronological and Thematic Studies on the History of Information and Media (Timeline)

"A chronological record of significant events … often including an explanation of their causes." — definition of history from the Merriam Webster Online Dictionary, accessed 12-2010.

"The information overload that we associate with the Internet is not new. While the Internet is undoubtedly compounding an old problem, its instant searchability offers new means of exploring the rapidly expanding universe of information. From Cave Paintings to the Internet cannot save you from information overload and offers no panacea for information insufficiency. Using Internet technology, it is designed to help you follow the development of information and media, and attitudes about them, from the beginning of records to the present. Containing annotated references to discoveries, developments of a social, scientific, theoretical or technological nature, as well as references to physical books, documents, artifacts, art works, and to websites and other digital media, it arranges, both chronologically and thematically, selected historical examples and recent developments of the methods used to record, distribute, exchange, organize, store, and search information. The database is designed to allow you to approach the topics in a wide variety of ways.”

Jeremy Norman's 2,500,000 BCE to 8,000 BCE Timeline: From Cave Paintings to the Internet

See also: Jeremy Norman’s History of Science.com

Mar
19th
Sat
permalink

Sam Harris on the ‘selfish gene’ and moral behavior

“Many people imagine that the theory of evolution entails selfishness as a biological imperative. This popular misconception has been very harmful to the reputation of science. In truth, human cooperation and its attendant moral emotions are fully compatible with biological evolution. Selection pressure at the level of ‘selfish’ genes would surely incline creatures like ourselves to make sacrifices for our relatives, for the simple reason that one’s relatives can be counted on to share one’s genes: while this truth might not be obvious through introspection, your brother’s or sister’s reproductive success is, in part, your own. This phenomenon, known as kin selection, was not given a formal analysis until the 1960s in the work of William Hamilton, but it was at least implicit in the understanding of earlier biologists. Legend has it that J.B.S. Haldane was once asked if he would risk his life to save a drowning brother, to which he quipped, ‘No, but I would save two brothers or eight cousins.’

The work of evolutionary biologist Robert Trivers on reciprocal altruism has gone a long way toward explaining cooperation among unrelated friends and strangers. Trivers’s model incorporates many of the psychological and social factors related to altruism and reciprocity, including friendship, moralistic aggression (i.e., the punishment of cheaters), guilt, sympathy, and gratitude, along with a tendency to deceive others by mimicking these states. As first suggested by Darwin, and recently elaborated by the psychologist Geoffrey Miller, sexual selection may have further encouraged the development of moral behavior. Because moral virtue is attractive to both sexes, it might function as a kind of peacock’s tail: costly to produce and maintain, but beneficial to one’s genes in the end.

Clearly, our selfish and selfless interests do not always conflict. In fact, the well-being of others, especially those closest to us, is one of our primary (and, indeed, most selfish) interests. While much remains to be understood about the biology of our moral impulses, kin selection, reciprocal altruism, and sexual selection explain how we have evolved to be, not merely atomized selves in thrall to our self-interest, but social selves disposed to serve a common interest with others.”
Sam Harris, American author and CEO of Project Reason, who received a Ph.D. in neuroscience from UCLA and a degree in philosophy from Stanford University, The Moral Landscape, Free Press, 2010.
Feb
23rd
Wed
permalink

Mark Changizi on Humans, Version 3.0.


The next giant leap in human evolution may not come from new fields like genetic engineering or artificial intelligence, but rather from appreciating our ancient brains.

“Genetic engineering could engender marked changes in us, but it requires a scientific bridge between genotypes—an organism’s genetic blueprints—and phenotypes, which are the organisms themselves and their suite of abilities. A sufficiently sophisticated bridge between these extremes is nowhere in sight.

And machine-enhancement is part of our world even today, manifesting in the smartphones and desktop computers most of us rely on each day. Such devices will continue to further empower us in the future, but serious hardware additions to our brains will not be forthcoming until we figure out how to build human-level artificial intelligences (and meld them to our neurons), something that will require cracking the mind’s deepest mysteries. I have argued that we’re centuries or more away from that. (…)

There is, however, another avenue for human evolution, one mostly unappreciated in both science and fiction. It is this unheralded mechanism that will usher in the next stage of human, giving future people exquisite powers we do not currently possess, powers worthy of natural selection itself. And, importantly, it doesn’t require us to transform into cyborgs or bio-engineered lab rats. It merely relies on our natural bodies and brains functioning as they have for millions of years.

This mystery mechanism of human transformation is neuronal recycling, coined by neuroscientist Stanislas Dehaene, wherein the brain’s innate capabilities are harnessed for altogether novel functions.

This view of the future of humankind is grounded in an appreciation of the biologically innate powers bestowed upon us by hundreds of millions of years of evolution. This deep respect for our powers is sometimes lacking in the sciences, where many are taught to believe that our brains and bodies are taped-together, far-from-optimal kluges. In this view, natural selection is so riddled by accidents and saddled with developmental constraints that the resultant biological hardware and software should be described as a “just good enough” solution rather than as a “fine-tuned machine.”

So it is no wonder that, when many envisage the future, they posit that human invention—whether via genetic engineering or cybernetic AI-related enhancement—will be able to out-do what evolution gave us, and so bootstrap our species to a new level. This rampant overoptimism about the power of human invention is also found among many of those expecting salvation through a technological singularity, and among those who fancy that the Web may some day become smart.

The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering illusion) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances. And our peculiar variety of color vision, with two of our sensory cones having sensitivity to nearly the same part of the spectrum, is not an accidental mutation that merely stuck around, but, rather, appears to function with the signature of hemoglobin physiology in mind, so as to detect the color signals primates display on their faces and rumps.

These and other inborn capabilities we take for granted are not kluges, they’re not “good enough,” and they’re more than merely smart. They’re astronomically brilliant in comparison to anything humans are likely to invent for millennia.

Neuronal recycling exploits this wellspring of potent powers. If one wants to get a human brain to do task Y despite it not having evolved to efficiently carry out task Y, then a key point is not to forcefully twist the brain to do Y. Like all animal brains, human brains are not general-purpose universal learning machines, but, instead, are intricately structured suites of instincts optimized for the environments in which they evolved. To harness our brains, we want to let the brain’s brilliant mechanisms run as intended—i.e., not to be twisted. Rather, the strategy is to twist Y into a shape that the brain does know how to process. (…)

There is a very good reason to be optimistic that the next stage of human will come via the form of adaptive harnessing, rather than direct technological enhancement: It has already happened.

We have already been transformed via harnessing beyond what we once were. We’re already Human 2.0, not the Human 1.0, or Homo sapiens, that natural selection made us. We Human 2.0’s have, among many powers, three that are central to who we take ourselves to be today: writing, speech, and music (the latter perhaps being the pinnacle of the arts). Yet these three capabilities, despite having all the hallmarks of design, were not a result of natural selection, nor were they the result of genetic engineering or cybernetic enhancement to our brains. Instead, and as I argue in both The Vision Revolution and my forthcoming Harnessed, these are powers we acquired by virtue of harnessing, or neuronal recycling.

In this transition from Human 1.0 to 2.0, we didn’t directly do the harnessing. Rather, it was an emergent, evolutionary property of our behavior, our nascent culture, that bent and shaped writing to be right for our visual system, speech just so for our auditory system, and music a match for our auditory and evocative mechanisms.

And culture’s trick? It was to shape these artifacts to look and sound like things from our natural environment, just what our sensory systems evolved to expertly accommodate. There are characteristic sorts of contour conglomerations occurring among opaque objects strewn about in three dimensions (like our natural Earthly habitats), and writing systems have come to employ many of these naturally common conglomerations rather than the naturally uncommon ones. Sounds in nature, in particular among the solid objects that are most responsible for meaningful environmental auditory stimuli, follow signature patterns, and speech also follows these patterns, both in its fundamental phoneme building blocks and in how phonemes combine into morphemes and words. And we humans, when we move and behave, make sounds having a characteristic animalistic signature, something we surely have specialized auditory mechanisms for sensing and processing; music is replete with these characteristic sonic signatures of animal movements, harnessing our auditory mechanisms that evolved for recognizing the actions of other large mobile creatures like ourselves.

Culture’s trick, I have argued in my research, was to harness by mimicking nature. This “nature-harnessing” was the route by which these three kernels of Human 2.0 made their way into Human 1.0 brains never designed for them.

The road to Human 3.0 and beyond will, I believe, be largely due to ever more instances of this kind of harnessing. And although we cannot easily anticipate the new powers we will thereby gain, we should not underestimate the potential magnitude of the possible changes. After all, the change from Human 1.0 to 2.0 is nothing short of universe-rattling: It transformed a clever ape into a world-ruling technological philosopher.

Although the step from Human 1.0 to 2.0 was via cultural selection, not via explicit human designers, does the transformation to Human 3.0 need to be entirely due to a process like cultural evolution, or might we have any hope of purposely guiding our transformation? When considering our future, that’s probably the most relevant question we should be asking ourselves.

I am optimistic that we may be able to explicitly design nature-harnessing technologies in the near future, now that we have begun to break open the nature-harnessing technologies cultural selection has built thus far. One of my reasons for optimism is that nature-harnessing technologies (like writing, speech, and music) must mimic fundamental ecological features in nature, and that is a much easier task for scientists to tackle than emulating the exorbitantly complex mechanisms of the brain.

And nature-harnessing may be an apt description of emerging technological practices, such as the film industry’s ongoing struggle to better design the 3D experience to tap into the evolved functions of binocular vision, the gaming industry’s attempts to “gameify” certain tasks (exemplified in the work of Jane McGonigal), or the drive within robotics for more emotionally expressive faces (such as the child robot of Minoru Asada).

Admittedly, none of these sound remotely as revolutionary as writing, speech, or music, but it can be difficult to envision what these developments can become once they more perfectly harness our exquisite biological instincts. (Even writing was, for centuries, used mostly for religious and governmental book-keeping purposes—only relatively recently has the impact of the written word expanded to revolutionize the lives of average humans.)

The point is, most science fiction gets all this wrong. While the future may be radically “futuristic,” with our descendants having breathtaking powers we cannot fathom, it probably won’t be because they evolved into something new, or were genetically modified, or had AI-chip enhancements. Those powerful beings will simply be humans, like you and I. But they’ll have been nature-harnessed in ways we cannot anticipate, the magic latent within each of us used for new, brilliant Human 3.0 capabilities.”
Mark Changizi (cognitive scientist, author), Humans, Version 3.0, SEED.com, Feb 23, 2011. See also: Prof. Stanislas Dehaene, "How do humans acquire novel cultural skills? The neuronal recycling model", LSE Institute | Nicod. (Picture source: Rzeczpospolita)
Jan
14th
Fri
permalink

Uncertainty principle: How evolution hedges its bets

"Variety is the key to survival in a changeable world – and evolution may have come up with an extraordinary way of generating more variety. (…)
As he looked round, Feinberg's eyes came to rest on a nearby plaque commemorating physicist Paul Dirac. This set him thinking about quantum theory and evolution, which led him to the idea that epigenetic changes - heritable changes that don’t involve modifications to DNA sequences - might inject a Heisenberg-like uncertainty into the expression of genes, which would boost the chances of species surviving. That, more or less, is what he wrote on the piece of paper.

Put simply, Feinberg’s idea is that life has a kind of built-in randomness generator which allows it to hedge its bets. For example, a characteristic such as piling on the fat could be very successful when famine is frequent, but a drawback in times of plenty. If the good times last for many generations, however, natural selection could eliminate the gene variant for piling on fat from a population. Then, when famine does eventually come, the population could be wiped out.

But if there is some uncertainty about the effect of genes, some individuals might still pile on the fat, even though they have the same genes as everyone else. Such individuals might die young in good times, but if famine strikes they might be the only ones to survive. In an uncertain world, uncertainty could be crucial for the long-term survival of populations.

The implications of this idea are profound. We already know there is a genetic lottery - every fertilised human egg contains hundreds of new mutations. Most of these have no effect whatsoever, but a few can be beneficial or harmful. If Feinberg is right, there is also an epigenetic lottery: some people are more (or less) likely to develop cancer, drop dead of a heart attack or suffer from mental health problems than others with exactly the same DNA. (…)

No one now doubts that environmental factors can produce changes in the offspring of animals even when there is no change in DNA. Many different epigenetic mechanisms have been discovered, from the addition of temporary “tags” to DNA or the proteins around which DNA is wrapped, to the presence of certain molecules in sperm or eggs.

What provokes fierce argument is the role that epigenetic changes play in evolution. A few biologists, most prominently Eva Jablonka of Tel Aviv University in Israel, think that inherited epigenetic changes triggered by the environment are adaptations. They describe these changes as “neo-Lamarckian”, and some even claim that such processes necessitate a major rethink of evolutionary theory.

While such views have received a lot of attention, most biologists are far from convinced. They say the trouble with the idea that adaptive changes in parents can be passed down to offspring via epigenetic mechanisms is that, like genetic mutations, most inherited epigenetic changes acquired as a result of environmental factors have random and often harmful effects.

At most, the inheritance of acquired changes could be seen as a source of variation that is then acted on by natural selection - a view much closer to Darwin’s idea of pangenesis than Lamarck’s claim that the intent of an animal could shape the bodies of its offspring. But even this idea is problematic, because it is very rare for acquired changes to last longer than a generation (Annual Review of Genomics and Human Genetics, vol 9, p 233).

While epigenetic changes can be passed down from cell to cell during the lifetime of an organism, they do not normally get passed down to the next generation. “The process of producing germ cells usually wipes out epigenetic marks,” says Feinberg. “You get a clean slate epigenetically.” And if epigenetic marks do not usually last long, it’s hard to see how they can have a significant role in evolution - unless it is not their stability but their instability that counts.

Rather than being another way to code for specific characteristics, as biologists like Jablonka believe, Feinberg’s “new way of looking at evolution” sees epigenetic marks as introducing a degree of randomness into patterns of gene expression. In fluctuating environments, he suggests, lineages able to generate offspring with variable patterns of gene expression are most likely to last the evolutionary course.

Is this “uncertainty hypothesis” right? There is evidence that epigenetic changes, as opposed to genetic mutations or environmental factors, are responsible for a lot of variation in the characteristics of organisms. The marbled crayfish, for instance, shows a surprising variation in coloration, growth, lifespan, behaviour and other traits even when genetically identical animals are reared in identical conditions. And a study last year found substantial epigenetic differences between genetically identical human twins. On the basis of their findings, the researchers speculated that random epigenetic variations are actually “much more important” than environmental factors when it comes to explaining the differences between twins (Nature Genetics, vol 41, p 240). (…)

"The mice were from the same parents, from the same litter, eating the same food and water and living in the same cage," Feinberg says.

Despite this, he and Irizarry were able to identify hundreds of sites across the genome where the methylation patterns within a given tissue differed hugely from one individual to the next. Interestingly, these variable regions appear to be present in humans too (Proceedings of the National Academy of Sciences, vol 107, p 1757). "Methylation can vary across individuals, across cell types, across cells within the same cell type and across time within the same cell," says Irizarry.

It fell to Irizarry to produce a list of genes associated with each region that could, in theory at least, be affected by the variation in methylation. What he found blew him away. The genes that show a high degree of epigenetic plasticity are very much those that regulate basic development and body plan formation. “It’s a counter-intuitive and stunning thing because you would not expect there to be that kind of variation in these very important patterning genes,” says Feinberg.

The results back the idea that epigenetic changes to DNA might blur the relationship between genotype (an organism’s genetic make-up) and phenotype (its form and behaviour). “It could help explain why there is so much variation in gene expression during development,” says Günter Wagner, an evolutionary biologist at Yale University. But that does not necessarily mean epigenetic changes are adaptive, he says. “There has not been enough work on specifying the conditions under which this kind of mechanism might evolve.” (…)

He modelled what would happen in a fixed environment where being tall is an advantage. “The taller people survive more often, have more children and eventually everyone’s tall,” he says.

Then, he modelled what would happen in a changeable environment where, at different times, it is advantageous to be tall or short. "If you are a tall person that only has tall kids, then your family is going to go extinct." In the long run, the only winners in this kind of scenario are those that produce offspring of variable height.

This result is not controversial. “We know from theory that goes some way back that mechanisms that induce ‘random’ phenotypic variation may be selected over those that produce a single phenotype,” says Tobias Uller, a developmental biologist at the University of Oxford. But showing that something is theoretically plausible is a long way from showing that the variability in methylation evolved because it boosts survival.

Jerry Coyne, an evolutionary geneticist at the University of Chicago, is blunter. “There is not a shred of evidence that variation in methylation is adaptive, either within or between species,” he says. “I know epigenetics is an interesting phenomenon, but it has been extended willy-nilly to evolution. We’re nowhere near getting to grips with what epigenetics is all about. This might be a part of it, but if it is it’s going to be a small part.”

To Susan Lindquist of the Massachusetts Institute of Technology, however, it is an exciting idea that makes perfect sense. "It’s not just that epigenetics influences traits, but that epigenetics creates greater variance in the traits and that creates greater phenotypic diversity," she says. And greater phenotypic diversity means a population has a better chance of surviving whatever life throws at it. (…)

While Jablonka remains convinced that epigenetic marks play an important role in evolution through “neo-Lamarckian” inheritance, she welcomes Feinberg and Irizarry’s work. “It would be worth homing in on species that live in highly changeable environments,” she suggests. “You would expect more methylation, more variability, and inheritance of variability from one generation to the next.”

As surprising as Feinberg’s idea is, it does not challenge the mainstream view of evolution. “It’s straight population genetics,” says Coyne. Favourable mutations will still win out, even if there is a bit of fuzziness in their expression. And if Feinberg is right, what evolution has selected for is not epigenetic traits, but a genetically encoded mechanism for producing epigenetic variation. This might produce variation completely randomly or in response to environmental factors, or both.

Feinberg predicts that if the epigenetic variation produced by this mechanism is involved in disease, it will most likely be found in conditions like obesity and diabetes, where lineages with a mechanism for surviving environmental fluctuation would win out in the evolutionary long run.”

Henry Nicholls, Uncertainty principle: How evolution hedges its bets, New Scientist, 10 January 2011 (Picture source)

Aug
29th
Sun
permalink
Our ancestors have been human for a very long time. If a normal baby girl born forty thousand years ago were kidnapped by a time traveler and raised in a normal family in New York, she would be ready for college in eighteen years. She would learn English (along with—who knows?—Spanish or Chinese), understand trigonometry, follow baseball and pop music; she would probably want a pierced tongue and a couple of tattoos. And she would be unrecognizably different from the brothers and sisters she left behind.
Jul
19th
Mon
permalink

Mark Changizi on how we read

                               

Writing was invented only around five thousand years ago, far too recently to have affected our brains. In fact, most of us don’t have to look back more than several generations to find ancestors who couldn’t read.

How, then, do we have reading areas for a brain that didn’t evolve to read?

Stanislas Dehaene, neuroscientist and author of Reading in the Brain, argues that our brains have undergone “neuronal recycling,” where writing has shaped itself over time to be easy on our visual systems.

And what’s the trick to getting writing to fit into our illiterate visual system?

In my own research I have suggested how it happened: culture shaped letters to look “like nature.” Oliver Sacks describes the research this way:

"Such a redeployment of neurons is facilitated by the fact that all (natural) writing systems seem to share certain topological features with the environment, features that our brains have evolved to decode. Mark Changizi and his colleagues at Caltech examined more than a hundred ancient and modern writing systems, including alphabetic systems and Chinese ideograms, from a computational point of view. They have shown that all of them, while geometrically very different, share certain topological similarities. (This visual signature is not evident in artificial writing systems, such as shorthand, which are designed to emphasize speed more than visual recognition.) Changizi et al. have found similar topological invariants in a range of natural settings, and this is has led them to hypothesize that the shapes of letters “have been selected to resemble the conglomerations of contours found in natural scenes, thereby tapping into our already-existing object recognition mechanisms.”

(…) Reading and writing is a recent human invention, going back only several thousand years, and much more recently for many parts of the world. We are reading using the eyes and brains of our illiterate ancestors. (…)

Good Listening

That’s what good listeners do. They rewind the story if needed, or forward it to parts they haven’t heard, or ask for greater detail about parts. And good communicators tend to be those who are able to be interacted with while talking. (…)

Even though we (arguably) evolved to speak and listen, and didn’t evolve to read, there is a sense in which writing has allowed us to be much better listeners than speech ever did. That’s because readers can easily interact with the writer, no matter how non-present the writer may be. Readers can pause the communication, skim ahead, rewind back to something not understood, and delve deeper into certain parts. We listeners can, when reading, manipulate the speaker’s stream of communication far beyond what the speaker would let us get away with in conversation. (…)

When one’s eyes are free, people prefer to read stories rather than hear them on tape, and the market for books on tape is minuscule compared to that for hard copy books. We humans have brains that may have evolved to comprehend speech, and yet we prefer to listen with our eyes, despite our eyes not having been designed for this! (…)

When we speak there are typically only a small number of people listening, and most often there’s just one person listening (and often less than that when I speak in my household). For this reason spoken language has evolved to be a compromise between the mouth and ear: somewhat easy for the speaker to utter, and somewhat easy for the listener to hear. In contrast, a single writer can have arbitrarily many readers, or “visual listeners.” If cultural evolution has shaped writing to minimize the overall efforts of the community, then it is the readers’ efforts that will drive the evolution of writing because there are so many of them. That’s why, as amazing as writing may be, it is a gift to the eye more than a gift to the hand. For example, a book may take six months to write, but it may take only six hours to read. That’s a good solution because there are usually many readers of any given book. (…)
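
A quick back-of-the-envelope sketch shows why the readers' side dominates. The six-months and six-hours figures come from the passage; the audience size and the month-to-hours conversion are hypothetical assumptions for illustration only.

# Whose effort dominates the community's total "communication budget"?
hours_to_write = 6 * 30 * 8   # ~6 months of daily writing, in hours (hypothetical conversion)
hours_to_read = 6             # per reader, from the passage
readers = 10_000              # hypothetical audience size

print(hours_to_write)            # 1440 writer-hours
print(readers * hours_to_read)   # 60000 reader-hours

With even a modest audience, total reader effort swamps writer effort, so any convention that costs the writer a little extra but saves each reader a little lowers the community's total cost. That is the sense in which writing is a gift to the eye more than to the hand.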

Harness the Wild Eye

Just as horses didn’t evolve to be ridden, eyes didn’t evolve for the written. Your eyes reading these words are wild eyes, the same eyes and visual systems of our ancient preliterate ancestors. And yet, despite being born without a “bridle,” your visual system is now saddled with reading. We have, then, the same mystery as we find in horses: how do our ancient visual systems fit so well in modern reading-intensive society? (…)

Eyes may seem like a natural choice for pulling information stored on material, and indeed vision probably has inherent superiorities over touch or taste, just as horses are inherently better rides than rhinos. But just as horses don’t fit efficiently into culture without culture evolving to fit horses, the visual system couldn’t be harnessed for reading until culture evolved writing to fit the requirements of the visual system. We didn’t evolve to read, but culture has gone out of its way to create the illusion that we did. We turn next to the question of what exactly cultural evolution has done to help our visual systems read so well. (…)

Word and Object

Is there something beneficial about drawing objects for the words in writing? I suspect so, and I suspect that it is the same reason that animal-call symbols tend to be animal-call-like: we probably possess innate circuitry that responds specifically to animal-call-like sounds, and so our brain is better able to efficiently process a spoken word that means an animal call if the word itself sounds animal-call-like. Similarly, we possess a visual system designed to recognize objects and efficiently react to the information. If a word’s meaning is that of an object (even an abstract object), then our visual system will be better able to process and react to the written symbol for that object if the written symbol is itself object-like. (…)                                                                                                       

Our brains evolved to perceive objects, not object-parts, because objects are the clumps of matter that stay connected over time and are crucial to parsing and making sense of the world. Our brains naturally look for objects and want to interpret stimuli out there as objects, so using a single stroke for a word (or using a junction for a word) is not something our brains are happy about. Instead, when seeing the stroke-word sentence in (a) in the “rain in spain” figure, the brain will desperately try to see objects in the jumble of strokes, and if it can find one, it will interpret that jumble of strokes in an object-like fashion. But if it did this, it would be interpreting a phrase or whole sentence as an object, something that is not helpful for understanding a sentence: the meaning of a sentence is “true” or “false,” not any single word meaning. Using single strokes as words is, then, a bad idea because the brain is not designed to treat single contours as meaningful. Nor is it designed to treat object junctions as meaningful. That’s why spoken words tend to be written with symbols having a complexity no smaller than visual objects. (…)

If written words must be built out of multiple symbols, then to make words look object-like, make the symbols look like object parts. That’s what culture did. Culture dealt with the speech-writer dilemma by designing letters that look like the object parts found in nature, object junctions, in particular. That way written words will typically be object-like, so that again our visual system can be best harnessed for reading.”

Mark Changizi, The Man Who Mistook His Y for a Hat. Oliver Sacks and how we read, Psychology Today, July 15, 2010

The Topography Of Language

The Variety of Visual Signs

"The evolution of ornamentation, art, painting, and other non-linguistic visual signs (i.e., signs not part of language) has gone on unabated, diversifying into millions of non-linguistic symbols used over the ages, and occupying nearly all aspects of our lives, including pottery, body art, religion, politics, folklore, medicine, music, architecture, trademarks and traffic.



Writing (i.e., visual signs distinguished by use as a means of visually recording the content of spoken language) has also undergone an evolutionary explosion in variety. The earliest writing appeared several thousand years ago, and occurred independently in Sumer, Egypt and China (and much more recently in the Americas). These earliest linguistic visual signs were pictograms, evolving later to logograms (where a character denotes an object, idea or action), and a single logographic writing system (such as Chinese or Linear B) can have many thousands of distinct visual signs. It wasn’t until about 2000 years ago in Egypt that phonemic writing was invented and used, where each character stands for a constituent of speech rather than having a meaning as in logographic writing. Many hundreds of writing systems have evolved and diversified from this ancestor (e.g., Latin, Arabic, Avestan, Mongolian, Phags-pa), varying widely in geometrical shape and style, and in the aspects of speech the characters represent (e.g., alphabets represent consonants and vowels, abjads represent just consonants, and syllabaries represent syllables).

Amongst both non-linguistic and linguistic signs, some visual signs are representations of the world (e.g., cave paintings and pictograms, respectively), and it is, of course, not surprising that these visual signs look like nature. It would be surprising, however, to find that non-pictorial visual signs look, despite first appearances, like nature. Although writing began with pictograms, there have been so many mutations to writing over the millennia that if writing still looks like nature, it must be because this property has been selectively maintained. For non-linguistic visual signs, there is not necessarily any pictorial origin as there is for writing, because amongst the earliest non-linguistic visual signs were non-pictorial decorative signs. The question we then ask is, Why are non-pictorial visual signs shaped the way they are?

Previous efforts at answering this question have primarily concentrated on the differences. In particular, some of the shape differences among different (non-pictorial) visual signs are due to the kind of writing implement used, whether impressions in clay tablets with a blunt reed, rounded writing on leaves, or the physical details of a modified feather-tip point. Little attention has been devoted to uncovering the similarities, however, and as we will see here, there are deeper visual regularities that hold across human visual signs, independent of the writing mechanism (regularities that are also found in nature).

It is as if someone had noticed that throat size causes male and female voices to sound different, without noticing that male and female speech possesses a critical deeper regularity, namely that they utter the same set of phonemes, morphemes, words and sentences as one another (within a single language-speaking community). We will find that, despite superficial differences in their shapes, visual signs appear to possess similar underlying “visual phonemes.” (…)

We have seen that human non-pictorial visual signs appear to possess a characteristic signature, and we have seen that this signature is not a result of chance. Before attempting to explain this signature, a natural first question is, Does this signature appear to be good for the eye, or good for the hand (or any other writing mechanism)? 

There are at least two reasons for expecting that visual sign shapes are designed (by cultural selection) for ease of reading, not ease of writing. First, visual signs are written once, but can be read many times. Second, writing speed is typically limited not by the motor system, but by the time taken for the writer to compose the sentence; that is, writing is not like talking, where we can talk effortlessly without feeling as if we are composing our thoughts. (…)

Natural to the Eye

The topological shapes of non-pictorial visual signs are, then, for the eye, not the hand. But we are still left with the question, Why does the eye like these shapes? Here is where the evolutionary, or ecological, hypothesis enters into the story. Because over millions of years of evolution our visual systems have been selected to be good at processing the conglomerations of contours occurring in nature, I reasoned that if visual signs have culturally evolved to be easy to see, then we should expect visual signs to have natural topological shapes.

Where are these topological shapes in nature? What were conglomerations of strokes for visual signs are now conglomerations of contours for natural scenes. Contours are the edges of objects (as seen by the eye), not, of course, strokes in the world. For example, an L occurs in the world when exactly two edges of an object meet at their endpoints, like an elbow. A T occurs in the world when the edge of an object goes behind another object in the foreground. A Y occurs, for example, at the inside corner of a rectangular room. (…)
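
To make the notion of a "topological signature" concrete, here is a toy sketch of how one might tally junction types across a set of characters. The stroke data are invented for illustration (they are not Changizi's dataset), strokes are assumed to be pre-split at the points where they meet, and real analyses distinguish T from Y junctions by angle, which this sketch ignores.

from collections import Counter

# Each character is a list of straight strokes; each stroke is a pair of
# (x, y) endpoints on a small grid, already split at meeting points.
CHARS = {
    "L-like": [((0, 0), (0, 2)), ((0, 0), (1, 0))],
    "T-like": [((0, 2), (1, 2)), ((1, 2), (2, 2)), ((1, 2), (1, 0))],
    "Y-like": [((0, 2), (1, 1)), ((2, 2), (1, 1)), ((1, 1), (1, 0))],
}

def junction_degrees(strokes):
    """For each grid point, count the strokes ending there; keep the points
    where two or more strokes meet (the junctions)."""
    endpoint_counts = Counter()
    for a, b in strokes:
        endpoint_counts[a] += 1
        endpoint_counts[b] += 1
    return Counter(n for n in endpoint_counts.values() if n > 1)

signature = Counter()
for strokes in CHARS.values():
    signature += junction_degrees(strokes)

print(signature)  # Counter({3: 2, 2: 1}): two 3-way (T/Y) junctions, one 2-way (L)

Aggregated over an entire writing system, a histogram like this is the kind of signature that can then be compared with the junction statistics of natural scenes.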

(i) we wish to read words, not letters; and (ii) we have evolved to see objects, not object-junctions. In this light, we expect culture to select words to look like objects, so that words may be processed by the same area in visual cortex responsible for recognizing objects.

Logographic characters (e.g., Chinese) and non-linguistic symbols do tend to be more object-like, possessing many more than three strokes. For phonemic writing, however, there are severe limits to how closely words can match natural objects, for the manner in which letters combine is determined by speech. However, by having letters shaped like natural object-junctions—rather than natural contours or natural whole objects—written words become combinations of natural junctions, and thus more similar to objects and more easily processed by our visual system.

Evolution by natural selection is too slow to design our brains for reading, and so cultural selection has come to the rescue, designing (without any designer) visual signs for our brains. Because our visual systems have evolved to be good at perceiving natural objects, cultural evolution has created non-linguistic symbols, logographic symbols, and written words in phonemic writing that tend to be built out of object-junction-like constituents, and are thus object-like.

In particular, this explains why letters tend to have around three strokes and have the topological shapes they do. We expect that these insights will be useful in designing optimal alphabets or visual displays.

Because culture is capable of designing for the eye, the visual signs of our culture are a fingerprint of what our visual systems like. Akin to the linguistic study of the auditory productions humans make, the “visual linguistic” study of the visual productions people make is a currently under-utilized tool for vision research.

There is every reason to believe that the study of visual linguistics will aid traditional lab experiments on vision and brain design as much as linguistics has supplemented lab experiments on cognition.”

Mark Changizi, cognitive scientist, author, The Topography Of Language, Science 2.0, Sep 17, 2009.

See also: 

☞ A brief history of writing, Lapidarium
☞ Maria Popova, A Visual History of the Alphabet, The Atlantic, Jun 21, 2011
☞ Mark Changizi, Are We “Meant” to Have Language and Music? How Language and Music Mimicked Nature and Transformed Ape to Man
☞ Mark Changizi, Music Sounds Like Moving People, Science 2.0, Jan 10, 2010
☞ Mark Changizi, How To Put Art And Brain Together
☞ Mark Changizi on brain’s perception of the world

permalink
Blair Bolles on the development of human speech

“The most important breakthrough came when individuals trusted one another enough to share their thoughts and believe what somebody else told them. That milestone was passed about 2.5 million years ago. Attempts to teach sign language have been successful enough to show that apes are smart enough to learn a few hundred words and even put a couple of them together, but in the wild they never actually do it. And even when trained to use sign language, they use it only to manipulate their trainers or in response to a question. So it cannot be intelligence that keeps apes from using any language at all.

The problem appears to be that there is no benefit from sharing information. If I tell you what I know and you only give me hogwash in return, I lose, you win. Somehow our ancestors came to trust one another and reap the great benefits that come from sharing knowledge honestly. More than intelligence, more than syntax, that social change made language possible.”
Blair Bolles in interview with T. DeLene Beeland, Why humans speak: It’s a matter of trust, The Charlotte Observer, Jul. 12, 2010 (via xixidu)
Jan
29th
Fri
permalink
Fossils of the Human Family: Timeline | Science magazine, Oct 2009
This timeline shows the fossils upon which our current understanding of human evolution is based. The new fossil skeleton of Ardipithecus ramidus, nicknamed Ardi, fills a large gap before the Lucy skeleton, Australopithecus afarensis, but after the hominid line split from the line that led to today’s chimpanzees.
