Lapidarium notes

Amira Skomorowska's notes

"Everything you can imagine is real."— Pablo Picasso

Lapidarium

Tags:

Africa
Age of information
Ancient
Anthropology
Art
Artificial intelligence
Astronomy
Atheism
Beauty
Biography
Books
China
Christianity
Civilization
Cognition, perception, relativity
Cognitive science
Collective intelligence
Communication
Consciousness
Creativity
Culture
Curiosity
Cyberspace
Democracy
Documentary
Drawing
Earth
Economy
Evolution
Friendship
Funny
Future
Genetics
Globalization
Happiness
History
Human being
Illustrations
Imagination
Individualism
Infographics
Information
Inspiration
Internet
Knowledge
Language
Learning
Life
Literature
Logic
Love
Mathematics
Media
Metaphor
Mind & Brain
Multiculturalism
Music
Networks
Neuroscience
Painting
Paradoxes
Patterns
Philosophy
Poetry
Politics
Physics
Psychology
Rationalism
Religions
Science
Science & Art
Self improvement
Semantics
Society
Sociology
Storytelling
Technology
The other
Time
Timeline
Traveling
Unconsciousness
Universe
USA
Video
Violence
Visualization


Homepage
Twitter
Facebook

A Box Of Stories
Reading Space

Contact

Archive

Oct 20th, Sun

Alphabet Evolution


Oct 8th, Mon

Aleppo, a City in Flames

Handcraft market near Aleppo Castle (Photo: A. Asaad)

The desert knows me well, the night and the mounted men.

The battle and the sword, the paper and the pen.

— 10th century poet al-Mutanabbi at court of Emirate of Aleppo


“Aleppo, located in northwestern Syria 310 kilometres from Damascus, is one of the oldest continuously inhabited cities in the world; it has been inhabited since perhaps as early as the 6th millennium BC. Excavations at Tell as-Sawda and Tell al-Ansari, just south of the old city of Aleppo, show that the area was occupied since at least the latter part of the 3rd millennium BC; and this is also when Aleppo is first mentioned in cuneiform tablets unearthed in Ebla and Mesopotamia, in which it is noted for its commercial and military proficiency. Such a long history is probably due to its being a strategic trading point midway between the Mediterranean Sea and Mesopotamia.

The city’s significance in history has been its location at the end of the Silk Road, which passed through central Asia and Mesopotamia.” (Wiki)

"Located at the crossroads of several trade routes from the 2nd millennium B.C., Aleppo was ruled successively by the Hittites, Assyrians, Arabs, Mongols, Mamelukes and Ottomans. The 13th-century citadel, 12th-century Great Mosque and various 17th-century madrasas, palaces, caravanserais and hammams all form part of the city’s cohesive, unique urban fabric. (…)

The old city of Aleppo reflects the rich and diverse cultures of its successive occupants. Many periods of history have left their influence in the architectural fabric of the city. Remains of Hittite, Hellenistic, Roman, Byzantine and Ayyubid structures and elements are incorporated in the massive surviving Citadel. The diverse mixture of buildings including the Great Mosque founded under the Umayyads and rebuilt in the 12th century; the 12th century Madrasa Halawiye, which incorporates remains of Aleppo’s Christian cathedral, together with other mosques and madrasas, suqs and khans represents an exceptional reflection of the social, cultural and economic aspects of what was once one of the richest cities of all humanity." (UNESCO)

Remembering Syria’s historic Silk Road souk in Aleppo

Souk in Aleppo (Photo: A. Skomorowska)

"A few miles from Aleppo are the hills where human beings first domesticated wild grasses. All the wheat we eat originates from those plants and the first farmers. Once those hunter gatherers settled, they set in motion developments that led to towns and then markets. Aleppo was one such place and its souk lay on the first great trade routes, becoming part of an economic engine that made astonishing new products available to more and more people. The warehouses filled up with soaps, silks, spices, precious metals, ceramics and textiles, especially the colourful and diaphanous type favoured by harem-dwellers. Eventually all this mercantile activity focussed into one particular area and a fabulous bazaar was built, mostly in the Ottoman heyday of the 15th and 16th centuries. It was a honeycomb of surprises and flavours, a tribute to the best aspects of human society, but now it has run smack into the opposite tendency: war. (…)

Of course, the human suffering is far more important and pressing, but I also mourn the loss of a place that so effortlessly encapsulated everything that was light, vivacious, sociable and friendly, everything that war is not. (…)

What it had was tradition, heritage and incredible diversity. Five hundred years after Shakespeare made Aleppo souk the epitome of a distant cornucopia, you could still buy almost anything here, eat and drink a vast range of dishes, and even bathe in the traditional Hammam Nahasin. There were eight miles of lanes linking a range of khans or caravanserai – the British Consul held court in one of them well into the 20th century. When I first wandered in via the gate near the citadel, I discovered that there was only one thing I could not find in there: the desire to leave. It was just too diverting and fascinating. Every shopkeeper seemed to want to have a chat over a glass of red tea.

"Let me tell you about scarves. You buy antelope hair for the woman you want and silk for the mistress.’

"What about wives?"

He shrugs. “We have polyester. It comes with divorce papers.”

It was clear that this was not a place that ever stood still. Neither was it a museum, and certainly not a pastiche preserved for tourists. (…)

Architecture during the Ottoman Occupation from 1516 to 1918 (Photo: S. Maraashi)

In great trading cities filled with communal diversity, the inhabitants usually learn to get along and trust each other. It is outsiders who bring danger and suspicion. In fact Aleppo has been sacked, destroyed and left in ruins many times over. When Tamerlane visited in 1400, he left a pile of severed heads outside – reportedly 20,000 of them. The Byzantines had previously done their worst, as had the Mongols, more than once. But it was politics that did for the city’s pre-eminence as a market. Slowly and inexorably it was cut off from its hinterlands. The Silk Road died, the Suez Canal was dug, the northern territories were taken by Turkey as were the ports of the Levant. The machinations of the Great Powers turned a vibrant trading city into a divided backwater.

In some ways that decline helped preserve the medieval nature of the place, but now it is gone. When Syria rises out of the chaos, there can be some idea of restoration. But any future attempt to rebuild will always be a re-creation, probably with the tourist buck in mind. That will be better than nothing, of course, but it cannot hide the fact that one of the world’s greatest treasures has been lost.”

— Kevin Rushby, Remembering Syria’s historic Silk Road souk in Aleppo, The Guardian, Oct 5, 2012.

Aleppo and Arab history

A horseman crossing at Bab Qennesrine at sunset, Aleppo old town (Photo: E. Lafforgue)

"If stones could weep, the ancient foundations of Aleppo would be wet for centuries.

Since the time of Abraham they have been pounded by the hooves of invaders’ horses, stained by the blood of Muslims, Christians and Jews, and torched by conquerors expunging all trace of their enemies.

This week, one of the world’s oldest cities became Ground Zero in Syria’s spiralling civil war. And as rebel forces edge closer to the medieval citadel that is Aleppo’s proudest symbol of survival, the scene is set for a deadly clash between the city’s future and its past.

Aleppo — where Abraham legendarily milked his cows, Alexander the Great pitched his tents, the Crusaders met defeat, King Faisal declared Syria’s independence and a secret vault guarded the oldest text of the Hebrew bible — has become a nearly deserted war zone, its irreplaceable treasures exposed to destruction and theft.

“Unfortunately, we’ve seen this before,” says Michael Danti, an archaeology professor from Boston University who has worked in Syria for more than 20 years. “It’s not just the loss of buildings and museum objects, it’s the risk of losing entire sets of data that make up history.”

But relentless war is nothing new for a historic centre that once crowned the known world, and was cast into the dust as many times as there were invaders to conquer it.

The hub of culture and commerce in northern Syria, Aleppo boasts history that goes back to the first inklings of human settlement. As a city, its earliest traces date to around 5000 BC, and it claims, with Damascus, to be one of the longest-inhabited urban sites on earth.

Before the dawn of Christianity, Aleppo was put to the sword by Hittites, Mittannians, Assyrians, Babylonians, Persians, Alexander the Great, the Seleucid Empire and Rome. And they were only curtain-raisers for another 2,000 years of wreckage and rebuilding to come.

Close to the Middle East’s main waterways, the Mediterranean Sea and the Tigris and Euphrates rivers, Aleppo was a fertile agricultural region and a stop on the storied Silk Road trade route. But its riches and geography made it a target for invaders. The citadel, a defensive base in pre-Islamic times, was fortified in the late 12th century to become one of the great strongholds of the medieval Islamic world.

That didn’t stop the Mongols from overrunning the city. In the 13th century, Hulagu, grandson of Genghis Khan, stormed Aleppo and killed 50,000 people. Later, the fearsome emperor Timur would pile a mountain of skulls at the city gates as a warning to would-be rebels.

But throughout battles between Timur and Turkey’s muscular Ottoman Empire, Aleppo flourished. Its mosques and madrassas expanded, and it was again a centre for art and culture. Caravanserais were built to shelter Silk Road travellers shuttling between Italy and Persia. Merchants flocked to buy spices, coffee, pepper, intricate jewelry and Eastern luxuries. The sprawling souks were a smorgasbord of languages and cultures, and visiting businessfolk enjoyed lavish homes with Ottoman, Venetian and European touches.

Aleppo survived not only attacks, but a devastating earthquake, plague and famine. But miraculously, traces of its old empires remain — from the 12th century Great Mosque and citadel to the churches of the Christian quarter and the 15th-century Al-Bandara Central Synagogue, reborn from the ruins of an earlier temple.

Now, as the civil war rages on, UNESCO’s director-general warned in a statement of the danger to Aleppo’s “astounding monumental heritage reflecting the diverse cultures of the peoples that have settled here over millennia.”

What’s at risk is more than ancient buildings. (…) For Aleppo’s other treasures, there is more uncertainty.

“The Citadel is a natural fortress with lots of tunnels and subterranean spaces,” says Danti. “It would be hard to dislodge (fighters) from there without using heavy weapons.” (…)

For centuries Aleppo has been destroyed by outsiders. Now its fate depends on two homegrown foes who must decide whether, in planting their flags on the future, they will destroy their common past.”

Syria: The death of historic Aleppo, Aug 3, 2012

"Built at the crossroads of important trade routes between East and West, the city has seen more than its share of war and violence.

Architecturally and culturally, Aleppo carries the genetic imprint of a succession of ruling powers and invaders including Hittites, Assyrians, Arabs, Greeks, Romans, crusading European Christians, Mamelukes and Ottomans.

But now, a city that over the centuries has survived the attentions of countless besieging armies appears in danger of being destroyed from within by its own people, with shocking images of the ancient souq consumed by fire as Syria’s civil war pits rebel and government forces against each other for control of one of the world’s oldest cities.

"I do not have the words that can possibly express my dismay and horror at what has happened to Syria," says Prof Jeremy Johns, director of the Khalili Research Centre and professor of art and archaeology of the Islamic Mediterranean at Oxford University.

In addition to the destruction being wrought on priceless archaeological monuments, Prof Johns fears for the future of the nation’s artefactual heritage.

"I know that antiquities looted from archaeological sites are already reaching the international market," he says. (…) "The monuments are being destroyed but also the whole social fabric around them is being destroyed at the same time."

Prof Johns also worries about the future of the city’s relatively modern history.

"The Armenian Baron Hotel where T. E. Lawrence and all sorts of other players in that extraordinary early 20th century colonial game stayed, including archaeologists, politicians and spies, is in the centre of Aleppo and I have little doubt that will be damaged.”

Unesco declared Aleppo a World Heritage Site in 1986 and says it has “exceptional universal value because it represents medieval Arab architectural styles that are rare and authentic, in traditional human habitats”.

It is, for example, “an outstanding example of an Ayyubid 12th century city, with its military fortifications constructed as its focal point following the success of Salah El-Din against the crusaders”.

The multilayered history of the city, reflected in its disparate mix of buildings, layout and spaces, constitutes “testimony of the city’s cultural, social and technological development, representing continuous and prosperous commercial activity from the Mameluke period.

"It contains vestiges of Arab resistance against the Crusaders, but there is also the imprint of Byzantine, Roman and Greek occupation in the streets and in the plan of the city."

The monumental Citadel of Aleppo “rising above the souqs, mosques and madrasas of the old walled city, is testament to Arab military might from the 12th to the 14th centuries”. (…)

Here, in the walls of mosques, palaces and bath buildings, can be found evidence of occupation by civilisations dating back to the 10th century BC. In Aleppo, every ancient brick tells a story - and every shattered brick threatens the loss of that story for future generations.

The extraordinary thing about the Citadel, says Prof Johns, “is that it is essentially an artificial mound that has grown up with human detritus over the millennia, and that you can stand in the remains of the Ottoman fortress, looking down to the excavated remains of cultures that go back into the second millennium BC.

Inside the Aleppo Citadel (Photo: S. Maraashi)

"There’s continuity in that whole site that sums up the historical and architectural development of Syria and the whole region, and that is what is under threat."

According to a report from the city this week, the wooden gates of the Citadel are now destroyed and a medieval stone engraving above them badly damaged.

"A bomb crater now marks the entrance and its walls are pockmarked with bullet holes," the report said.

"A stump is all that remains of the minaret of the 14th century Al Kiltawiya school. A rocket has crashed into el-Mihmandar Mosque, also built some 700 years ago."

On Sunday, after news that fire had destroyed hundreds of shops in Aleppo’s ancient souq, Unesco director general Irina Bokova described what was happening to the city as “deeply distressing”.

"The human suffering caused by this situation is already extreme," Ms Bokova said. "That the fighting is now destroying cultural heritage that bears witness to the country’s millenary history, valued and admired the world over, makes it even more tragic."

Aleppo’s souqs, she added, had been a part of the city’s economic and social life since its beginning: “They stand as testimony to Aleppo’s importance as a cultural crossroads since the second millennium BC.” (…)

Historians and archaeologists are growing increasingly worried that the true toll on Aleppo’s ancient fabric - both as a result of fighting and of heritage looting of the type that ravaged Iraq’s museums - will prove far greater than is currently known. (…)

Cairo University antiquities professor Mahmoud Al Banna said he could not believe what Syrians were doing to their own heritage in Aleppo.

"No different than what the Tatars or Mongols did," he said, referring to invasions in the 13th and 14th centuries that devastated the region. "We are talking about the history of all people, of humankind and not just of Islam."

Aleppo, and Arab history, is burning, The National, Oct 3, 2012.

'The most enchanting city in the Middle East’

"Aleppo is arguably the most enchanting city in the Middle East. Awash in mosques and minarets, the city is also stuffed with Armenian churches, Maronite cathedrals, and even a synagogue, a consequence of its unique position at the crossroads of Ottoman, French, and Jewish influences. Its maze-like souk and massive citadel on a hill are remarkable enough. But throw in hospitable people, trendy rooftop restaurants whose waiters sneak alcohol in teacups to Westerners with a wink and a nod, and the welcoming aroma of underground shops lined with tasty sweets and pistachio nuts, and Aleppo would seem to be custom-built for vacationers seeking a relaxing setting to kick back and nibble on mezze (appetizers). (…)

I remember the patio of the city’s famous, if slightly musty, Baron Hotel, where Agatha Christie once resided, was crammed with loud Europeans smoking late into the night. Across town in Al-Aziziah, Syrian students huddled in front of large screens to watch bad soap operas, smoke water pipes, and sing karaoke.

Like Prague in the early 1990s, Aleppo felt like it was on the verge of discovery, an idyllic (and safe) place for Westerners to sample the best of Arab culture and cuisine. Expatriates would revel in Al Jdeida, an Armenian district of quiet squares and quaint restaurants. This part of the Old City holds a kind of mythical draw for outsiders. Its tangle of narrow cobblestone streets and tucked-away courtyards full of jasmine and citrus trees are a pleasure to peruse; the inlaid wooden doors of its storefronts are as ornately carved as the back of a backgammon board. (…)

Aleppo is surrounded by sweeping plains dotted with olive groves and “dead cities,” abandoned ruins from the Byzantine age. They serve as vivid reminders of what happens to once-prosperous trading centers left abandoned. The international community owes it to Syrians to defend UNESCO-protected sites like this one. Syria does not need any more dead cities.”

Men walk on a road amid wreckage after blasts ripped through Aleppo’s main Saadallah al-Jabari Square. (Stringer/Reuters)

Lionel Beehner, 'It Is Our Soul': The Destruction of Aleppo, Syria's Oldest City, The Atlantic, Oct 4, 2012

How cities become invisible

"Syria’s cities became embedded within the lines of the Invisible Cities. I listened, along with Kublai Khan, to Marco Polo’s narrations and tried to understand how cities become invisible.

Watching death has become a pastime of the revolution. (…) But the death of a city is different. It is slow — each neighborhood’s death is documented bomb by bomb, shell by shell, stone by fallen stone. Witnessing the deaths of your cities is unbearable. Unlike the news of dead people — which arrives too late, always after the fact — the death of a city seems as if it can be halted, that the city can be saved from the clutches of destruction. But it is an illusion: The once-vibrant cities cannot be saved, so you watch, helpless, as they become ruins.

Ruins are sold to us as romantic and poetic. As tourists wandering ancient sites, cameras dangling from our necks and guidebooks in hand, we seek beauty in the swirling dust over the remains of a dead civilization. We imagine what it was like then, before empires decayed and living objects became historical artifacts. But that kind of romanticism is only afforded with the distance of time and geography. In war, ruins-in-the-making are not beautiful, not vessels of meaningful lessons, not a fanciful setting for philosophical contemplations on the follies of men. When you witness it live, when it is real, and when it happens to your city, it becomes another story altogether. (…)

Being from Aleppo is unlike being from anywhere else in the world. We walked on history so deep, we did not understand it — we simply learned to call this place, older than all others, home. We grew up knowing that our insignificant existence was the thinnest layer of dust on the thick geological strata of empires, kingdoms, and generations, which lived within our stone walls. We knew without doubt, from an early age, that we were nothing but a blink of our city’s eye.

When you are from Aleppo, you are plagued with a predicament: Nothing here will ever change. For some people, living in the city that never changes becomes too difficult. The city’s permanence and your inability to make a mark on it push you to eventually leave Aleppo, trading comfort for change. After you leave, no matter where you are in the world, you know that Aleppo is there, waiting exactly as you left it. Instead, it is you who returns in a reinvented form each time you come home — a university graduate, a bride, a mother, each time proudly carrying your new ideas and identity to your patiently waiting city.

In Aleppo, you grow up worrying if your legacy will ever be worthy of your city’s. But you never worry about your city’s legacy — which we thoughtlessly leaned on — for how could we, ever, change Aleppo’s legacy?

Aleppo is Calvino’s city of Lalage, a city of minarets on which the moon “rest[s] now on one, now on another.” It is a city of churches, temples, relics, and graves of revered mystics. It is a city where the spices of Armenia meld with the tastes of Turkey. It is a city where Arabic, Kurdish, and Armenian tongues speak parallel to each other, with an occasional French word mixed in here or there. It is a city of trade and industry, where men are constantly bargaining and negotiating in the same souks as their fathers before them. It is a city where girls walking down the streets in tight jeans and high heels pass by women in long black coats and white veils pinned under their chins. And they know they all belong right here, to Aleppo.

A man who is not from Aleppo recently told me, “When you travel to Aleppo, you don’t see it until you arrive.” I had never noticed that. Perhaps, because I was always inside it, I never searched for it when we returned. I never doubted that it would always be there, exactly as I left it, untouched, unchanged. But he was right; Aleppo is an inward-looking city; it sees the world reflected in itself. And because we’ve lived here for generations, we became like that too. (…)

You learn about things when they are broken — friendships, love, people, and even cities. I learned from watching the revolution that when things are broken, they take up more space. (…) When things are destroyed, you realize, too late, how fragile it all once was: bones, stones, walls, buildings, cities. (…)

Comprehension of destruction and the change it brings comes in waves — like grasping that your family is in exile or understanding that places from your childhood have disappeared forever. The dark spaces of the city begin to match the dark places in your mind. (…) Our artifacts leave Syria to live in other homes, where people will tell their children tales about an ancient place that once was, before it was invisible. Before it died. (…)

Aleppo, like Calvino’s cities, is a woman. Her complete name, Halab al-Shahba, refers to the milk of the prophet Ibrahim’s ashen cow. It is no surprise that Aleppo’s name would hold meanings both holy and earthly, of sacredness and sustenance. It is a city of milk and marble — nothing nourishes Aleppo’s spirit more than its stone and cuisine. Now, Aleppo is a city of ash and blood.  (…)

Aleppo is Calvino’s Adelma, the city of the dead, where “you reach a moment in life when, among the people you have known, the dead outnumber the living.” In Syria, we are living aberrations to life itself. We have seen what no one is supposed to see. (…) Tectonic shifts in a city like Aleppo simply do not happen in one’s lifetime. It is no longer a given that my city will outlive me. (…) We were supposed to live and die in an Aleppo unchanged, just as our grandfathers had before us, but instead we broke the laws of nature and passed on what we had inherited intact to the few survivors, in ruins. (…)

At some point, trust breaks between Marco Polo and Kublai Khan. Storyteller and listener separate into worlds independent of each other. Kublai Khan eventually doubts his narrator and accuses Marco Polo of weaving fantasies out of nothing. Do these cities even exist, he asks, or did you make them up?

Cities are both real and imagined. In peace, they are a backdrop, quietly absorbing your ego, waiting to be noticed when someone visits and sees her anew, while we drag our heels, unappreciative, along the pavements. You dream of leaving this place that never changes, leaving behind the burden of history where you will never amount to even a speck of dust in its never-ending tale. You dream of a place outside this place where the possibility to escape the past and become someone else seems easier. You never imagined that one day, the city will be the one that is exposed, unprotected, and vulnerable — you never imagined that one day, your city, not you, will be the one that needs to be saved. In war, the city becomes precious, each inch mourned, each stone remembered. The city’s sights, smells, and tastes haunt you. You cling to every memory of every place you had ever been to and remember that this is what it was like. Before.

But memories are deceptive. You weave them into images, and the images into a story to tell your child about a city you once knew, named Aleppo. A city of monuments and milk, of sweets and spices, a city so perfect and so beautiful it was named after a prophet’s ashen cow. Its minarets once changed shape from square to round to thin spindles, and every call to prayer was a symphony of voices across the neighborhoods echoing each other, as if in constant dialogue. You continue the tale, skipping certain details: the fleeing people, the smoke, the ashes, the fallen minarets and the silenced athans, the blood in the bread lines, and the relentless stench of death. Unlike Calvino, you gloss over the dark underbellies of society, overlooking the evils of men, the betrayals of people — in fact, you ignore the people altogether because you have become convinced that without the people, a city can remain innocent.

Never mind; those details don’t belong here; what matters is holding on to what once was. And you speak faster, describing the homes of grandparents and great-grandparents, pretending they are not empty. You speak of ancient neighborhoods of great-great-grandfathers, rebuilding them with your words in perfect form and not as they are now — the centuries-old gate a smoldering heap of crushed stone, the jasmine vines broken and dead, the tiled courtyard fountain dried up and covered with dirt. All of this you skip in the narrative, trying to keep the nightmare separate from the dream, for you have not completely learned from Calvino’s wisdom: Cities exist in their dualities.

And the child will ask you, because children always do, Mama, does it really exist? Or are you making it up? And you will not know what to say, for the story is both a falsehood and the truth. At once it is real and in the next moment it is intangible, even as you hold the photograph in your hand and the memories in your mind. Despite all your efforts, or perhaps in spite of them, it changed.

And with my words, both said and unsaid, I had finally rendered my city, invisible.”

Amal Hanano is a pseudonym for a Syrian-American writer, The Land of Topless Minarets and Headless Little Girls, Foreign Policy, December 11, 2012.

See also:

Aleppo city, Wikipedia
Brief History of Aleppo: A Great World City Now in the Grip of War, TIME, July 27, 2012
The Ancient Cities of the Middle East :: Aleppo
☞ Map: Syria archaeological and historical from Jean Hureau, Syria Today, editions j.a., Paris, 1977, p. 232-233.
☞ Alexander Russell, The natural history of Aleppo, G. G. and J. Robinson, 1794. (Google book)
☞ Patricia Cohen, Syrian Conflict Imperils Historical Treasures, The New York Times, Aug 15, 2012.
Syria conflict: Aleppo’s souk burns as battles rage, BBC News, Sept 29, 2012
Syrian fighting torches historic medieval market in Aleppo, Ottawa Citizen, Sept 29, 2012
Syria’s Looted Past: How Ancient Artifacts Are Being Traded for Guns, TIME, Sept 12, 2012
☞ Ronen Bergman, The Aleppo Codex Mystery, NYTimes, Jul 25, 2012

May 22nd, Tue

The reinvention of the night. A history of the night in early modern Europe

                        
(Image: Bridgeman Art Library)

"During the previous generation or so, elites across Europe had moved their clocks forward by several hours. No longer a time reserved for sleep, the night time was now the right time for all manner of recreational and representational purposes. This is what Craig Koslofsky calls “nocturnalisation”, defined as “the ongoing expansion of the legitimate social and symbolic uses of the night”, a development to which he awards the status of “a revolution in early modern Europe”. (…)

The shift from street to court and from day to night represented “the sharpest break in the history of celebrations in the West”. (…) By the time of Louis XIV, all the major events – ballets de cour, operas, balls, masquerades, firework displays – took place at night. (…) The kings, courtiers – and those who sought to emulate them – adjusted their daily timetable accordingly. Unlike Steele’s friend, they rose and went to bed later and later. Henry III of France, who was assassinated in 1589, usually had his last meal at 6 pm and was tucked up in bed by 8. Louis XIV’s day began with a lever at 9 and ended (officially) at around midnight. (…)

As with so much else at Versailles, this was a development that served to distance the topmost elite from the rest of the population. Koslofsky speculates that it was driven by the need to find new sources of authority in a confessionally fragmented age.

More directly – and convincingly – authoritarian was the campaign to “colonize” the night by reclaiming it from the previously dominant marginal groups. The most effective instrument was street-lighting, introduced to Paris in 1667. (…)
In 1673, Madame de Sévigné [wrote]: “We found it pleasant to be able to go, after midnight, to the far end of the Faubourg Saint-Germain”. (…)

Street lighting had made life more difficult for criminals, but also for those who believed in ghosts, devils and things that go bump. Addressing an imaginary atheist in a sermon in 1629, John Donne invited him to look ahead just a few hours until midnight: “wake then; and then dark and alone, hear God ask thee then, remember that I asked thee now, Is there a God? and if thou darest, say No”. A hundred years later, there were plenty of Europeans prepared to say “No”. In 1729, the Paris police expressed grave anxiety about the spread of irreligion through late-night café discussions of the existence or non-existence of God.”

— Tim Blanning, review of Craig Koslofsky’s "Evening’s Empire. A history of the night in early modern Europe", The reinvention of the night, TLS, Sep 21, 2011.

See also:

☞ Craig Koslofsky, Evening’s Empire — extensive excerpts at Google Books
☞ Benjamin Schwarz, Night Owls, The Atlantic, Apr, 2012

May 20th, Sun

ChronoZoom ☞ The history of life, the universe and everything - visualised



"Imagine a timeline of the universe, complete with high-resolution videos and images, in which you could zoom from a chronology of Egypt’s dynasties and pyramids to the tale of a Japanese-American couple interned in a World War II relocation camp to a discussion of a mass extinction that occurred on Earth 200 million years ago – all in seconds. (…)

A University of California, Berkeley, geologist and his students have teamed up with Microsoft Research Connections engineers to make this web-based software possible. (…)

The idea arose in a UC Berkeley course about Big History taught by Walter Alvarez, the campus geologist who first proposed that a comet or asteroid smashed into the Earth 65 million years ago and killed off the dinosaurs. Big History is a unified, interdisciplinary way of looking at and teaching the history of the cosmos, Earth, life and humanity: the history of everything.

One of the difficulties of teaching history –- and teaching Big History, in particular –- is conveying a sense of the time scale, which ranges from the 50,000-year time span of modern humans to the 13.7 billion-year history of the universe, Alvarez said. Human history compared to cosmic history is like “a postage stamp relative to the whole size of the United States.”

“With ChronoZoom, you are browsing history, not digging it out piece by piece,” said Alvarez, a Professor of the Graduate School in the Department of Earth and Planetary Science. (…)

ChronoZoom is a visualization tool that, for the first time, allows people to mash up data from all sorts of different places and formats, enabling new insights that would never have been possible before.”

ChronoZoom: A deep dive into the history of everything

See also:

David Christian: Big History Project | TED

“Backed by stunning illustrations, David Christian narrates a complete history of the universe, from the Big Bang to the Internet, in a riveting 18 minutes. This is “Big History”: an enlightening, wide-angle look at complexity, life and humanity, set against our slim share of the cosmic timeline.”

David Christian, David Christian: Big history, TED, March 2011.

Timeline tag on Lapidarium notes

Mar 4th, Sun

Rome Reborn ☞ A Digital Model of Ancient Rome

“Rome Reborn is an international initiative whose goal is the creation of 3D digital models illustrating the urban development of ancient Rome from the first settlement in the late Bronze Age (ca. 1000 B.C.) to the depopulation of the city in the early Middle Ages (ca. A.D. 550). With the advice of an international Scientific Advisory Committee, the leaders of the project decided that A.D. 320 was the best moment in time to begin the work of modeling. At that time, Rome had reached the peak of its population, and major Christian churches were just beginning to be built. After this date, few new civic buildings were built.

Much of what survives of the ancient city dates to this period, making reconstruction less speculative than it must, perforce, be for earlier phases. But having started with A.D. 320, the Rome Reborn team intends to move both backwards and forwards in time until the entire span of time foreseen by our mission has been covered.”

Rome Reborn - introduction

Rome Reborn 2.2: A Tour of Ancient Rome in 320 CE

This video presents a fly-through of the latest version of Rome Reborn (2.2). The new version incorporates some new content (including the Pantheon) and for the first time includes animations. — Prof. Bernard Frischer

See also:

☞ Kimberly Dylla and B. Frischer, Rome Reborn 2.0: A Case Study of Virtual City Reconstruction Using Procedural Modeling Techniques (pdf), Archaeopress: Oxford, 2010.
☞ More papers

Jan 6th, Fri

Why Do Languages Die? Urbanization, the state and the rise of nationalism

       

"The history of the world’s languages is largely a story of loss and decline. At around 8000 BC, linguists estimate that upwards of 20,000 languages may have been in existence. Today the number stands at 6,909 and is declining rapidly. By 2100, it is quite realistic to expect that half of these languages will be gone, their last speakers dead, their words perhaps recorded in a dusty archive somewhere, but more likely undocumented entirely. (…)

The problem with globalization in the latter sense is that it is the result, not a cause, of language decline. (…) It is only when the state adopts a trade language as official and, in a fit of linguistic nationalism, foists it upon its citizens, that trade languages become “killer languages.” (…)

Most importantly, what both of the above answers overlook is that speaking a global language or a language of trade does not necessitate the abandonment of one’s mother tongue. The average person on this planet speaks three or four languages. (…)

The truth is, most people don’t “give up” the languages they learn in their youth. (…) To wipe out a language, one has to enter the home and prevent the parents from speaking their native language to their children.

Given such a preposterous scenario, we return to our question — how could this possibly happen?

One good answer is urbanization. If a Gikuyu and a Giryama meet in Nairobi, they won’t likely speak each other’s mother tongue, but they very likely will speak one or both of the trade languages in Kenya — Swahili and English. Their kids may learn a smattering of words in the heritage languages from their parents, but by the third generation any vestiges of those languages in the family will likely be gone. In other cases, extremely rural communities are drawn to the relatively easier lifestyle in cities, until sometimes entire villages are abandoned. Nor is this a recent phenomenon.

The first case of massive language die-off was probably during the Agrarian (Neolithic) Revolution, when humanity first adopted farming, abandoned the nomadic lifestyle, and created permanent settlements. As the size of these communities grew, so did the language they spoke. But throughout most of history, and still in many areas of the world today, 500 or fewer speakers per language has been the norm. Like the people who spoke them, these languages were constantly in flux. No language could grow very large, because the community that spoke it could only grow so large itself before it fragmented. The language followed suit, soon becoming two languages. Permanent settlements changed all this, and soon larger and larger populations could stably speak the same language. (…)

"In primitive times every migration causes not only geographical but also intellectual separation of clans and tribes. Economic exchanges do not yet exist; there is no contact that could work against differentiation and the rise of new customs. The dialect of each tribe becomes more and more different from the one that its ancestors spoke when they were still living together. The splintering of dialects goes on without interruption. The descendants no longer understand one other.… A need for unification in language then arises from two sides. The beginnings of trade make understanding necessary between members of different tribes. But this need is satisfied when individual middlemen in trade achieve the necessary command of language.”

Ludwig von Mises, Nation, State, and Economy (Online edition, 1919; 1983), Ludwig von Mises Institute, p. 46–47.

Thus urbanization is an important factor in language death. To be sure, the wondrous features of cities that draw immigrants — greater economies of scale, decreased search costs, increased division of labor — are all made possible with capitalism, and so in this sense languages may die for economic reasons. But this is precisely the type of language death that shouldn’t concern us (unless you’re a linguist like me), because urbanization is really nothing more than the demonstrated preferences of millions of people who wish to take advantage of all the fantastic benefits that cities have to offer.

In short, these people make the conscious choice to leave an environment where network effects and sociological benefits exist for speaking their native language, and exchange it for a greater range of economic possibilities, but where no such social benefits for speaking the language exist. If this were the only cause of language death — or even just the biggest one — then there would be little more to say about it. (…)

Far too many well-intentioned individuals are too quick to substitute their valuations for those of the last speakers of indigenous languages this way. Were it up to them, these speakers would be resigned to misery and poverty and deprived of participation in the world’s advanced economies in order that their language might be passed on. To be sure, these speakers themselves often fall victim to the mistaken ideology that one language necessarily displaces or interferes with another.

Although the South African Department of Education is trying to develop teaching materials in the local African languages, for example, many parents are pushing back; they want their children taught only in English. In Dominica, the parents go even further and refuse to even speak the local language, Patwa, to their children.[1] Were they made aware of the falsity of this notion of language displacement, perhaps they would be less quick to stop speaking their language to their children. But the decision is ultimately theirs to make, and theirs alone.

Urbanization, however, is not the only cause of language death. There is another that, I’m sad to say, almost none of the linguists who work on endangered languages give much thought to, and that is the state. The state is the only entity capable of reaching into the home and forcibly altering the process of language socialization in an institutionalized way.

How? The traditional method was simply to kill or remove indigenous and minority populations, as was done as recently as 1923 in the United States in the last conflict of the Indian Wars. More recently this happens through indirect means — whether intentional or otherwise — the primary method of which has been compulsory state schooling.

There is no more pernicious assault on the cultural practices of minority populations than a standardized, Anglified, Englicized compulsory education. It is not just that children are forcibly removed from the socialization process in the home, required to speak an official language and punished (often corporally) for doing otherwise. It is not just that schools redefine success, away from those things valued by the community, and towards those things that make someone a better citizen of the state. No, the most significant impact of compulsory state education is that it ingrains in children the idea that their language and their culture is worthless, of no use in the modern classroom or society, and that it is something that merely serves to set them apart negatively from their peers, as an object of their vicious torment.

But these languages clearly do have value, if for no other reason than simply because people value them. Local and minority languages are valued by their speakers for all sorts of reasons, whether it be for use in the local community, communicating with one’s elders, a sense of heritage, the oral and literary traditions of that language, or something else entirely. Again, the praxeologist is not in a position to evaluate these beliefs. The praxeologist merely notes that free choice in language use and free choice in association, one not dictated by the edicts of the state, will best satisfy the demand of individuals, whether for minority languages or lingua francas. What people find useful, they will use.

By contrast, the state values none of these things. For the state, the goal is to bind individuals to itself, to an imagined homogeneous community of good citizens, rather than their local community. National ties trump local ones in the eyes of the state. Free choice in association is disregarded entirely. And so the state forces many indigenous people to become members of a foreign community, where they are a minority and their language is scorned, as in the case of boarding schools. Whereas at home, mastering the native language is an important part of functioning in the community and earning prestige, and thus something of value, at school it becomes a black mark and a detriment. Given the prisonlike way schools are run, and how they exhibit similar intense (and sometimes dangerous) pressures from one’s peers, minority-language-speaking children would be smart to disassociate themselves as quickly as possible from their cultural heritage.

Mises himself, though sometimes falling prey to common fallacies regarding language like linguistic determinism and ethnolinguistic isomorphism, was aware of this distinction between natural language decline and language death brought on by the state. (…)

This is precisely what the Bureau of Indian Affairs accomplished by coercing indigenous children into attending boarding schools. Those children were cut off from their culture and language — their nation — until they had effectively assimilated American ideologies regarding minority languages, namely, that English is good and all else is bad.

Nor is this the only way the state affects language. The very existence of a modern nation-state, and the ideology it encompasses, is antithetical to linguistic diversity. It is predicated on the idea of one state, one nation, one people. In Nation, State, and Economy, Mises points out that, prior to the rise of nationalism in the 17th and 18th centuries, the concept of a nation did not refer to a political unit like state or country as we think of it today.

A “nation” instead referred to a collection of individuals who share a common history, religion, cultural customs and — most importantly — language. Mises even went so far as to claim that “the essence of nationality lies in language.”[2] The “state” was a thing apart, referring to the nobility or princely state, not a community of people (hence Louis XIV’s famous quip, “L’état c’est moi.”).[3] In that era, a state might consist of many nations, and a nation might subsume many states.

The rise of nationalism changed all this. As Robert Lane Greene points out in his excellent book, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity,

The old blurry linguistic borders became inconvenient for nationalists. To build nations strong enough to win themselves a state, the people of a would-be nation needed to be welded together with a clear sense of community. Speaking a minority dialect or refusing to assimilate to a standard wouldn’t do.[4]

Mises himself elaborated on this point. Despite his belief in the value of a liberal democracy, which would remain with him for the rest of his life, Mises realized early on that the imposition of democracy over multiple nations could only lead to hegemony and assimilation:

In polyglot territories, therefore, the introduction of a democratic constitution does not mean the same thing at all as introduction of democratic autonomy. Majority rule signifies something quite different here than in nationally uniform territories; here, for a part of the people, it is not popular rule but foreign rule. If national minorities oppose democratic arrangements, if, according to circumstances, they prefer princely absolutism, an authoritarian regime, or an oligarchic constitution, they do so because they well know that democracy means the same thing for them as subjugation under the rule of others.[5]

From the ideology of nationalism was also born the principle of irredentism, the policy of incorporating historically or ethnically related peoples into the larger umbrella of a single state, regardless of their linguistic differences. As Greene points out, for example,

By one estimate, just 2 or 3 percent of newly minted “Italians” spoke Italian at home when Italy was unified in the 1860s. Some Italian dialects were as different from one another as modern Italian is from modern Spanish.[6]

This in turn prompted the Italian statesman Massimo d’Azeglio (1798–1866) to say, “We have created Italy. Now we need to create Italians.” And so these Italian languages soon became yet another casualty of the nation-state.

Mises once presciently predicted that,

If [minority nations] do not want to remain politically without influence, then they must adapt their political thinking to that of their environment; they must give up their special national characteristics and their language.[7]

This is largely the story of the world’s languages. It is, as we have seen, the history of the state, a story of nationalistic furor, and of assimilation by force. Only when we abandon this socialist and utopian fantasy of one state, one nation, one people will this story begin to change.”

Danny Hieber is a linguist working to document and revitalize the world’s endangered languages, Why Do Languages Die?, Ludwig von Mises Institute, Jan 04, 2012. (Illustration: The Evolution of the Armenian Alphabet)

[1] Amy L. Paugh, Playing With Languages: Children and Change in a Caribbean Village (2012), Berghahn Books.
[2] Ludwig von Mises, Human Action: A Treatise on Economics (Scholar’s Edition, 2010), Auburn, AL: Ludwig von Mises Institute, p. 37.
[3] “I am the state.”
[4] Robert Lane Greene, You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity (Kindle Edition, 2011), Delacorte Press, p. 132.
[5] Mises, Nation, State, and Economy, p. 77.
[6] Greene, You Are What You Speak, p. 141.
[7] Mises, Nation, State, and Economy, p. 77.

“Isn’t language loss a good thing, because fewer languages mean easier communication among the world’s people? Perhaps, but it’s a bad thing in other respects. Languages differ in structure and vocabulary, in how they express causation and feelings and personal responsibility, hence in how they shape our thoughts. There’s no single purpose “best” language; instead, different languages are better suited for different purposes.

For instance, it may not have been an accident that Plato and Aristotle wrote in Greek, while Kant wrote in German. The grammatical particles of those two languages, plus their ease in forming compound words, may have helped make them the preeminent languages of western philosophy.

Another example, familiar to all of us who studied Latin, is that highly inflected languages (ones in which word endings suffice to indicate sentence structure) can use variations of word order to convey nuances impossible with English. Our English word order is severely constrained by having to serve as the main clue to sentence structure. If English becomes a world language, that won’t be because English was necessarily the best language for diplomacy.”

— Jared Diamond, American scientist and author, currently Professor of Geography and Physiology at UCLA, The Third Chimpanzee: The Evolution & Future of the Human Animal, Hutchinson Radius, 1991.

See also:

Lists of endangered languages, Wiki
☞ Salikoko S. Mufwene, How Languages Die (pdf), University of Chicago, 2006
☞ K. David Harrison, When Languages Die. The Extinction of the World’s Languages and the Erosion of Human Knowledge (pdf), Oxford University Press, 2007

"It is commonly agreed by linguists and anthropologists that the majority of languages spoken now around the globe will likely disappear within our lifetime. The phenomenon known as language death has started to accelerate as the world has grown smaller. "This extinction of languages, and the knowledge therein, has no parallel in human history. K. David Harrison’s book is the first to focus on the essential question, what is lost when a language dies? What forms of knowledge are embedded in a language’s structure and vocabulary? And how harmful is it to humanity that such knowledge is lost forever?"

Nicholas Ostler on The Last Lingua Franca. English Until the Return of Babel, Lapidarium notes
☞ Henry Hitchings, What’s the language of the future?, Salon, Nov 6, 2011.

Nov 24th, Thu

Are You Totally Improbable Or Totally Inevitable?

                        

"If we have never been amazed by the very fact that we exist, we are squandering the greatest fact of all."

Will Durant, American writer, historian, and philosopher (1885-1981)

"Not only have you been lucky enough to be attached since time immemorial to a favored evolutionary line, but you have also been extremely — make that miraculously — fortunate in your personal ancestry. Consider the fact that for 3.8 billion years, a period of time older than the Earth’s mountains and rivers and oceans, every one of your forebears on both sides has been attractive enough to find a mate, healthy enough to reproduce, and sufficiently blessed by fate and circumstances to live long enough to do so. Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stuck fast, untimely wounded or otherwise deflected from its life’s quest of delivering a tiny charge of genetic material to the right partner at the right moment to perpetuate the only possible sequence of hereditary combinations that could result — eventually, astoundingly, and all too briefly — in you. (…)

The number of people on whose cooperative efforts your eventual existence depends has risen to approximately 1,000,000,000,000,000,000, which is several thousand times the total number of people who have ever lived. (…)

We are awfully lucky to be here-and by ‘we’ I mean every living thing. To attain any kind of life in this universe of ours appears to be quite an achievement. As humans we are doubly lucky, of course: We enjoy not only the privilege of existence but also the singular ability to appreciate it and even, in a multitude of ways, to make it better. It is a talent we have only barely begun to grasp.”

Bill Bryson, A Short History of Nearly Everything, Black Swan, 2003

“Statistically, the probability of any one of us being here is so small that you’d think the mere fact of existing would keep us all in a contented dazzlement of surprise.”

Lewis Thomas, The Lives of a Cell, Bantam Books, 1984, p. 165.

“Life is one huge lottery where only the winning tickets are visible.”

Jostein Gaarder, The Orange Girl, Orion Publishing, 2004

“’We are the lucky ones for we shall die’, as there is an infinite number of possible forms of DNA all but a few billions of which will never burst into consciousness.”

Frank Close, a noted particle physicist who is currently Professor of Physics at the University of Oxford, The Void, Oxford University Press

"What are the odds that you exist, as you, today? Author Dr Ali Binazir attemps to quantify the probability that you came about and exist as you today, and reveals that the odds of you existing are almost zero.

Think about yourself.
You are here because…
Your dad met your mom.
Then your dad and mom conceived you.
So a particular egg in your mom
Joined a particular sperm from your dad
Which could only happen because not one of your direct ancestors, going all the way back to the beginning of life itself, died before passing on his or her genes…
So what are the chances of you happening?
Of you being here?

Author Ali Binazir did the calculations last spring and decided that the chances of anyone existing are one in 10^2,685,000. In other words (…) you are totally improbable.

— Robert Krulwich, Are You Totally Improbable Or Totally Inevitable?, NPR, Nov 21, 2011

"First, let’s talk about the probability of your parents meeting.  If they met one new person of the opposite sex every day from age 15 to 40, that would be about 10,000 people. Let’s confine the pool of possible people they could meet to 1/10 of the world’s population twenty years go (one tenth of 4 billion = 400 million) so it considers not just the population of the US but that of the places they could have visited. Half of those people, or 200 million, will be of the opposite sex.  So let’s say the probability of your parents meeting, ever, is 10,000 divided by 200 million:

10^4 / (2×10^8) = 5×10^-5, or one in 20,000.

Probability of boy meeting girl: 1 in 20,000.

So far, so unlikely.

Now let’s say the chances of them actually talking to one another is one in 10.  And the chances of that turning into another meeting is about one in 10 also.  And the chances of that turning into a long-term relationship is also one in 10.  And the chances of that lasting long enough to result in offspring is one in 2.  So the probability of your parents’ chance meeting resulting in kids is about 1 in 2000.

Probability of same boy knocking up same girl: 1 in 2000.

So the combined probability is already around 1 in 40 million — long but not insurmountable odds.  Now things start getting interesting.  Why?  Because we’re about to deal with eggs and sperm, which come in large numbers.

Each sperm and each egg is genetically unique because of the process of meiosis; you are the result of the fusion of one particular egg with one particular sperm.  A fertile woman has 100,000 viable eggs on average.  A man will produce about 12 trillion sperm over the course of his reproductive lifetime.  Let’s say a third of those (4 trillion) are relevant to our calculation, since the sperm created after your mom hits menopause don’t count.  So the probability of that one sperm with half your name on it hitting that one egg with the other half of your name on it is

1/[(100,000)(4 trillion)] = 1/[(10^5)(4×10^12)] = 1 in 4×10^17, or one in 400 quadrillion.

Probability of right sperm meeting right egg: 1 in 400 quadrillion.
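The front-end arithmetic here is easy to sanity-check. A minimal sketch in plain Python (the step probabilities are the ones quoted above; the variable names are my own):

  pool = 200_000_000      # opposite-sex people a parent could plausibly have met
  met = 10_000            # one new person a day from age 15 to 40
  p_meet = met / pool                   # 1 in 20,000
  p_kids = 0.1 * 0.1 * 0.1 * 0.5        # talk, meet again, relationship, offspring: 1 in 2,000
  p_parents = p_meet * p_kids           # 1 in 40,000,000
  p_gametes = 1 / (100_000 * 4e12)      # right egg times right sperm: 1 in 4×10^17
  print(round(1 / p_meet), round(1 / p_kids), round(1 / p_parents))   # 20000 2000 40000000
  print(f"{1 / p_gametes:.0e}")                                       # 4e+17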

But we’re just getting started.

Because the existence of you here now on planet earth presupposes another supremely unlikely and utterly undeniable chain of events.  Namely, that every one of your ancestors lived to reproductive age – going all the way back not just to the first Homo sapiens, first Homo erectus and Homo habilis, but all the way back to the first single-celled organism.  You are a representative of an unbroken lineage of life going back 4 billion years.

Let’s not get carried away here; we’ll just deal with the human lineage.  Say humans or humanoids have been around for about 3 million years, and that a generation is about 20 years.  That’s 150,000 generations.  Say that over the course of all human existence, the likelihood of any one human offspring to survive childhood and live to reproductive age and have at least one kid is 50:50 – 1 in 2. Then what would be the chance of your particular lineage to have remained unbroken for 150,000 generations?

Well then, that would be one in 2^150,000, which is about 1 in 10^45,000 – a number so staggeringly large that my head hurts just writing it down. That number is not just larger than all of the particles in the universe – it’s larger than all the particles in the universe if each particle were itself a universe.

Probability of every one of your ancestors reproducing successfully: 1 in 10^45,000
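That 10^45,000 is itself a round-down: 2^150,000 actually runs to about 45,155 digits. A quick check in plain Python, where arbitrary-precision integers make the exact digit count trivial:

  import math
  generations = 150_000
  print(generations * math.log10(2))   # ≈ 45154.5, i.e. 2**150000 ≈ 10**45154
  print(len(str(2 ** generations)))    # 45155 digits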

But let’s think about this some more.  Remember the sperm-meeting-egg argument for the creation of you, since each gamete is unique?  Well, the right sperm also had to meet the right egg to create your grandparents.  Otherwise they’d be different people, and so would their children, who would then have had children who were similar to you but not quite you.  This is also true of your grandparents’ parents, and their grandparents, and so on till the beginning of time.  If even once the wrong sperm met the wrong egg, you would not be sitting here noodling online reading fascinating articles like this one.  It would be your cousin Jethro, and you never really liked him anyway.

That means in every step of your lineage, the probability of the right sperm meeting the right egg such that the exact right ancestor would be created that would end up creating you is one in 1200 trillion, which we’ll round down to 1000 trillion, or one quadrillion.

So now we must account for that for 150,000 generations by raising 400 quadrillion to the 150,000th power:

[4×10^17]^150,000 ≈ 10^2,640,000

That’s a one followed by 2,640,000 zeroes, which would fill 11 volumes of a book the size of The Tao of Dating with zeroes.

To get the final answer, technically we need to multiply that by the 10^45,000, 2,000 and 20,000 up there, but those numbers are so shrimpy in comparison that it almost doesn’t matter.  For the sake of completeness:

(10^2,640,000)(10^45,000)(2,000)(20,000) = 4×10^2,685,007 ≈ 10^2,685,000

Probability of your existing at all: 1 in 10^2,685,000
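The final combination can be verified in log space, since the numbers themselves are far too large to write out. A hedged sketch in plain Python; the small differences from the figures quoted above come from rounding the per-generation exponent (about 2,640,309) down to 2,640,000:

  import math
  per_generation = 4e17                                 # right sperm meets right egg, per ancestor
  exp_gametes = 150_000 * math.log10(per_generation)    # ≈ 2,640,309 (quoted above as 2,640,000)
  exp_total = exp_gametes + 45_000 + math.log10(2_000) + math.log10(20_000)
  print(round(exp_gametes), round(exp_total))           # 2640309 2685317, i.e. roughly 1 in 10^2,685,000

Either way the exponent lands in the neighbourhood of 2,685,000, which is all the argument needs.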

As a comparison, the number of atoms in the body of an average male (80kg, 175 lb) is 10^27.  The number of atoms making up the earth is about 10^50. The number of atoms in the known universe is estimated at 10^80.

So what’s the probability of your existing?  It’s the probability of 2 million people getting together – about the population of San Diego – each to play a game of dice with trillion-sided dice. They each roll the dice, and they all come up the exact same number – say, 550,343,279,001.”


                                                         Click image to enlarge

— Ali Binazir, What are the chances of your coming into being?, June 15, 2011

A lovely comment by PZ Myers, a biologist and associate professor at the University of Minnesota:

"You are a contingent product of many chance events, but so what? So is everything else in the universe. That number doesn’t make you any more special than a grain of sand on a beach, which also arrived at its precise shape, composition, and location by a series of chance events. (…)

You are one of 7 billion people, occupying an insignificant fraction of the volume of the universe, and you aren’t a numerical miracle at all — you’re actually rather negligible.”

— PZ Myers, A very silly calculation, Pharyngula, Nov 14, 2011

'Life is one huge lottery where only the winning tickets are visible'

   “Thirteen forty-nine,” was the first thing [he] said.
   “The Black Death,” I replied. I had a pretty good knowledge of history, but I had no idea what the Black Death had to do with coincidences.
   ”Okay,” he said, and off he went. “You probably know that half Norway’s population was wiped out during that great plague. But there’s a connection here I haven’t told you about. Did you know that you had thousands of ancestors at that time?” he continued.
   I shook my head in despair. How could that possibly be?
   ”You have two parents, four grandparents, eight great-grandparents, sixteen great-great grandparents — and so on. If you work it out, right back to 1349 — there are quite a lot.
  “Then came the bubonic plague. Death spread from neighborhood to neighborhood, and the children were hit worst. Whole families died, sometimes one or two family members survived. A lot of your ancestors were children at this time, Hans Thomas. But none of them kicked the bucket.”
   “How can you be so sure about that?” I asked in amazement.
   He took a long drag on his cigarette and said, “Because you’re sitting here looking out over the Adriatic.

  “The chance of not a single ancestor of yours dying while growing up is one in several billion. Because it isn’t just about the Black Death, you know. Actually all of your ancestors have grown up and had children — even during the worst natural disasters, even when the child mortality rate was enormous. Of course, a lot of them have suffered from illness, but they’ve always pulled through. In a way, you have been a millimeter from death billions of times, Hans Thomas.

Your life on this planet has been threatened by insects, wild animals, meteorites, lightning, sickness, war, floods, fires, poisoning, and attempted murders. In the Battle of Stiklestad alone you were injured hundreds of times. Because you must have had ancestors on both sides — yes, really you were fighting against yourself and your chances of being born a thousand years later. You know, the same goes for the last world war. If Grandpa had been shot by good Norwegians during the occupation, then neither you nor I would have been born. The point is, this happened billions of times through history. Each time an arrow rained through the air, your chances of being born have been reduced to the minimum.”

   He continued: “I am talking about one long chain of coincidences. In fact, that chain goes right back to the first living cell, which divided in two, and from there gave birth to everything growing and sprouting on this planet today. The chance of my chain not being broken at one time or another during three or four billion years is so small it is almost inconceivable. But I have pulled through, you know. Damned right, I have. In return, I appreciate how fantastically lucky I am to be able to experience this planet together with you. I realize how lucky every single little crawling insect on this planet is.”

   "What about the unlucky ones?" I asked.
   ”They don’t exist! They were never born. Life is one huge lottery where only the winning tickets are visible.”

Jostein Gaarder, The Orange Girl, Orion Publishing, 2004.

(Illustration source)

See also:

☞ Richard Dawkins, Unweaving the Rainbow, Lapidarium notes

Nov
17th
Thu
permalink

Why Man Creates by Saul Bass (1968)

"Whaddaya doin?” ‘I’m painting the ceiling! Whadda you doin?” “I’m painting the floor!” — the exchange between Michaelangelo and da Vinci

Why Man Creates is a 1968 animated short documentary film which discusses the nature of creativity. It was written by Saul Bass and Mayo Simon, and directed by Saul and Elaine Bass.

The movie won the Academy Award for Documentary Short Subject. An abbreviated version of it ran on the first-ever broadcast of CBS’ 60 Minutes, on September 24, 1968.

Why Man Creates focuses on the creative process and the different approaches taken to that process. It is divided into eight sections: The Edifice, Fooling Around, The Process, Judgment, A Parable, Digression, The Search, and The Mark.

In 2002, this film was selected for preservation in the United States National Film Registry by the Library of Congress as being “culturally, historically, or aesthetically significant”.

Summary

The Edifice begins with early humans hunting. They attempt to conquer their prey with stones, but fail, so they begin to use spears and bait. They kill their prey, and it turns into a cave painting, upon which a building begins to be built. Throughout the rest of the section, the camera tracks upward as the edifice grows ever taller.

Early cavemen begin to discover various things such as the lever, the wheel, ladders, agriculture and fire. It then cuts to clips of early societies and civilizations. It depicts the appearance of the first religions and the advent of organized labor. It then cuts to the Great Pyramids at Giza and depicts the creation of writing.

Soon an army begins to move across the screen chanting “BRONZE,” but they are overrun by an army chanting “IRON”. The screen then depicts early cities and civilizations.

This is followed by a black screen with one man in traditional Greek clothing who states, “All was in chaos ‘til Euclid arose and made order.” Next, various Greek achievements in mathematics are depicted as they build Greek columns around which Greeks discuss items, including, “What is the good life and how do you lead it?” “Who shall rule the state?” “The Philosopher King.” “The Aristocrat.” “The People.” “You mean ALL the people?” “What is the nature of the Good? What is the nature of Justice?” “What is Happiness?”

The culture of ancient Greece fades into the armies of Rome. The organized armies surround the great Roman architecture as they chant “Hail Caesar!” A man at a podium states, “Roman Law is now in session!”, and when he bangs his gavel, the architecture collapses. Dark soldiers begin to pop up from the rubble and eventually cover the whole screen with darkness symbolizing the Dark Ages.

The Dark Ages consist of inaudible whisperings and mumblings. At one point, a light clicks on and an Arab mathematician says, “Allah be praised! I’ve discovered the zero.” at which point his colleague responds, “What?” and he says “Nothing, nothing.” Next come cloistered monks who sing, “What is the shape of the Earth? Flat. What happens when you get to the edge? You fall off. Does the earth move? Never.”

Finally the scene brightens and shows a stained glass window. Various scientists open stained glass doors and say things such as, “The Earth moves!” “The Earth is round!” “The blood circulates!” “There are worlds smaller than ours!” “There are worlds larger than ours!” Each time one opens a door, a large, hairy arm slams the door shut. Finally, the stained glass breaks in the wake of the new Enlightenment.

Next, Michelangelo and da Vinci are depicted. The steam engine is invented, and gears and belts begin to cover everything. The light bulb and steam locomotive are created. Darwin is alluded to as two men hit each other with their canes, arguing over whether man is an animal. The telegraph is invented and psychology is created. Next, a small creature hops across the screen saying, “I’m a bug, I’m a germ, I’m a bug, I’m a germ… [indrawn breath] Louis Pasteur! I’m not a bug, I’m not a germ…” Great musicians such as Beethoven are depicted. Alfred Nobel invents dynamite.

Next, the cartooning shows the great speeches and documents on government and society from the American Revolution onward with quotes such as “All men are created equal…”, “Life, liberty and the pursuit of happiness”, “And the Government, by the people,…”, etc. and ends with “One World.”

Finally, the building stops and the Wright Brothers' plane lands on top of it. It is quickly covered in more advanced planes, in cars, in televisions, and finally in early computers. At the top is a radioactive atom which envelops a man in smoke. The Edifice ends with that man yelling, “HELP!”

Fooling Around displays a random series of perspectives and the creative ideas which come from them.

The Process displays a man who is making artwork from a series of geometrical figures. Each time he attempts to keep them in place, they move and rearrange themselves. He tries many different approaches to the problem. Finally he accepts a working configuration and calls his wife to look at it. She says, “All it needs is an American flag.”

Judgment is a series of reactions, presumably to the creation from The Process. It displays their criticisms of it, such as “It represents the decline of Western culture…”, and only a very few support it.

A Parable begins at a ping-pong ball factory. Each ball is made in exactly the same way, and machines test them to get rid of anomalies. As the balls are being tested for their bounce levels, one bounces much higher than the rest. It is placed in a chute which leads to a garbage can outside the factory. It proceeds to bounce across town to a park, where it keeps bouncing. Quickly, a cluster of ping-pong balls gathers around it. It bounces higher and higher, until it doesn’t come back. The section concludes with the comment:
“There are some who say he’s coming back and we have only to wait …
There are some who say he burst up there because ball was not meant to fly …
And there are some who maintain he landed safely in a place where balls bounce high …”

Digression is a very short section in which one snail says to another, “Have you ever thought that radical ideas threaten institutions, then become institutions, and in turn reject radical ideas which threaten institutions?” to which the other snail replies “No.” and the first says dejectedly, “Gee, for a minute I thought I had something.”

The Search shows scientists who have been working for years on projects such as solving world hunger, developing a cure for cancer, or questioning the origin of the universe. It then shows a scientist who has worked on a project for 20 years, only for it not to work out; asked what he will do with himself, he replies that he doesn’t know. (Note: each of the scientists shown was working on something which still has not been solved to date, even though each one expected solid results in only a few years. This reinforces the point of the section far better than the creators could have known in 1968.)

The Mark asks the question, Why does man create? and determines that man creates to simply state, “I Am.” The film ends by displaying “I Am” written in paint on the side of a building.” — (Wiki)

Nov
11th
Fri
permalink

The Genographic Project ☞ A Landmark Study of the Human Journey 


                                       (Click image to explore Atlas of Human Journey)

Human Migration, Population Genetics, Maps, DNA.

"Where do you really come from? And how did you get to where you live today? DNA studies suggest that all humans today descend from a group of African ancestors who—about 60,000 years ago—began a remarkable journey.

The Genographic Project is seeking to chart new knowledge about the migratory history of the human species by using sophisticated laboratory and computer analysis of DNA contributed by hundreds of thousands of people from around the world. In this unprecedented, real-time research effort, the Genographic Project is closing the gaps in what science knows today about humankind’s ancient migration stories.

The Genographic Project is a multi-year research initiative led by National Geographic Explorer-in-Residence Dr. Spencer Wells. Dr. Wells and a team of renowned international scientists and IBM researchers are using cutting-edge genetic and computational technologies to analyze historical patterns in DNA from participants around the world to better understand our human genetic roots.”


                                       (Click image to explore Globe of Human History)

The Genographic Project - Human Migration, Population Genetics, Maps, DNA, National Geographic

The Genographic Project - Introduction

     

See also:

Evolution of Language tested with genetic analysis

Nov
7th
Mon
permalink

How Epicurus’ ideas survived through Lucretius’ poetry, and led to toleration

       image
                                    Illustration:  Oxford: Anthony Stephens, 1683

Hunc igitur terrorem animi tenebrasque necessest
non radii solis neque lucida tela diei
discutiant, sed naturae species ratioque.

"Therefore it is necessary that neither the rays of the sun nor the shining spears of Day should shatter this terror and darkness of the mind, but the aspect and reason of nature."

— Lucretius, De Rerum Natura (On the Nature of Things), Book I, line 90-93.

As Greenblatt describes it, Lucretius (borrowing from Democritus and others), says [more than 2,000 years ago] the universe is made of an infinite number of atoms:

"Moving randomly through space, like dust motes in a sunbeam, colliding, hooking together, forming complex structures, breaking apart again, in a ceaseless process of creation and destruction. There is no escape from this process. (…) There is no master plan, no divine architect, no intelligent design.

All things, including the species to which you belong, have evolved over vast stretches of time. The evolution is random, though in the case of living organisms, it involves a principle of natural selection. That is, species that are suited to survive and to reproduce successfully, endure, at least for a time; those that are not so well suited, die off quickly. But nothing — from our own species, to the planet on which we live, to the sun that lights our day — lasts forever. Only the atoms are immortal.”

— cited in Lucretius, Man Of Modern Mystery, NPR, Sep 19, 2011

“‘On the Nature of Things,’ a poem written 2,000 years ago that flouted many mainstream concepts, helped the Western world to ease into modernity. (…)

Harvard literary scholar Stephen Greenblatt has proposed a sort of metaphor for how the world became modern. An ancient Roman poem, lost for 1,000 years, was recovered in 1417. Its presciently modern ideas — that the world is made of atoms, that there is no life after death, and that there is no purpose to creation beyond pleasure — dropped like an atomic bomb on the fixedly Christian culture of Western Europe.

But this poem’s radical and transformative ideas survived what could have been a full-blown campaign against it, said Greenblatt. (…) One reason is that it was art. A tract would have drawn the critical attention of the authorities, who during the Renaissance still hewed to Augustine’s notion that Christian beliefs were “unshakeable, unchangeable, coherent.”

The ancient poem that contained such explosive ideas, and that packaged them so pleasingly, was “On the Nature of Things” (“De Rerum Natura”) by Roman poet and philosopher Titus Lucretius Carus, who died five decades before the start of the Christian era. Its intent was to counter the fear of death and the fear of the supernatural. Lucretius rendered into poetry the ideas of Epicurus, a Greek philosopher who had died some 200 years earlier. Both men embraced a core idea: that life was about the pursuit of pleasure and the avoidance of pain. (…)

Among the most stunning ideas Lucretius promoted in his poem was that the world is made of atoms, imperishable bits of matter he called “seeds.” All the rest was void — nothingness. Atoms never disappeared, but were material grist for the world’s ceaseless change, without any creator or design or afterlife.

These ideas, “drawn from a defunct pagan past,” were intolerable in 15th-century Europe, said Greenblatt, so much so that for the next 200 years they had to survive every “formal and informal mechanism of aversion and repression” of the age.

“A few wild exceptions” embraced this pagan past explicitly, said Greenblatt, including Dominican friar Giordano Bruno, whose “fatal public advocacy” of Lucretius came to an end in 1600. Branded a pantheist, he was imprisoned, tortured, and burned at the stake.

But the poem itself, a repository of intolerable ideas, was allowed to circulate. How was this so?

Greenblatt offered three explicit reasons:

— Reading strategies. In the spirit of commonplace books, readers of that era focused on individual passages rather than larger (and disturbing) meanings. Readers preferred to see the poem as a primer on Latin and Greek grammar, philology, natural history, and Roman culture.

— Scholarship. Official commentaries on the text were not intended to revive the radical ideas of Lucretius, but to put the language and imagery of a “dead work” in context, “a homeostatic survival,” said Greenblatt, “to make the corpse accessible.” He showed an image from a 1511 scholarly edition of the poem, in which single lines on each page lay “like a cadaver on a table,” surrounded by elaborate scholarly text. But the result was still preservation. “Scholarship,” he said, “is rarely credited properly in the history of toleration.”

— Aesthetics. A 1563 annotated edition of the poem acknowledged that its precepts were alien to Christian belief, but “it is no less a poem.”

“Certainly almost every one of the key principles was an offense to right-thinking Christians,” said Greenblatt. “But the poetry was compellingly, stunningly beautiful.”

Its “immensely seductive form,” he said — the soul of tolerance — helped to make aesthetics the concept that bridged the gap between the Renaissance and the early modern age.

Michel de Montaigne, the 16th-century French nobleman who invented the art of the essay, helped to maintain that aesthetic thread. His work includes almost 100 quotations from Lucretius. It was explicitly aesthetic appreciation of the old Roman, said Greenblatt, despite Montaigne’s own “genial willingness to submit to Christian orthodoxy.”

In the end, Lucretius and the ideas he borrowed from Epicurus survived because of art. “That aesthetic dimension of the ancient work (…) was the key element in the survival and transmission of what was perceived (…) by virtually everyone in the world to be intolerable,” said Greenblatt. “The thought police were only rarely called in to investigate works of art.”

One irony abides. Epicurus himself was known to say, “I spit on poetry,” yet his ideas only survive because of it. Lucretius saw his art as “honey smeared around the lip of a cup,” said Greenblatt, “that would enable readers to drink it down.”

The Roman poet thought there was no creator or afterlife, but that “should not bring with it a cold emptiness,” said Greenblatt. “It shouldn’t be only the priests of the world, with their delusions, who could convey to you that feeling of the deepest wonder.””

— Corydon Ireland, Through artistry, toleration, Harvard Gazette, Oct 31, 2011

See also:

☞ Lucretius, On the Nature of Things (1st century B.C.), History of Science Online

"In De rerum natura (On the Nature of Things), the poet Lucretius (ca. 50 BC) fused the atomic theory of Democritus and Leucippus with the philosophy of Epicurus in order to argue against the existence of the gods. While ordinary humans might fear the thunderbolts of Jove or torments in the underworld after death, Lucretius advised his readers to take courage in the knowledge that death is merely a dissolution of the body, as atoms combine and reassemble according to chance as they move through the void. Against the Stoics, Aristotelians, and Neoplatonists, Lucretius argued for a mechanistic universe governed by chance. He also argued for a plurality of worlds (and these planets, like the Earth, need not be spherical) and a non-hierarchical universe. Despite the paucity of ancient readers persuaded by Lucretius’ arguments, his work was almost universally admired as a masterful example of Latin style.”

Titus Lucretius Carus (ca. 99 BCE – ca. 55 BCE) was a Roman poet and philosopher.

See also:

Stephen Greenblatt, The Answer Man, The New Yorker, Aug 8, 2011
Lucretius, Man Of Modern Mystery, NPR, Sep 19, 2011
☞ Christian Flow, Swerves, Harvard Magazine Jul-Aug 2011
Lucretius on the infinite universe, the beginning of things and the likelihood of extraterrestrial life, Lapidarium
Lucretius: ‘O unhappy race of men, when they ascribed actions to the gods’, Lapidarium

Nov
3rd
Thu
permalink

And Greece created Europe: the cultural legacy of a nation in crisis

        
                        The Acropolis in Athens. Photograph: Petros Giannakouris/AP

"As the eurozone crisis rumbles on, we should not forget that it was ancient Greek literary and artistic forms that shaped the cultural unity of the European continent.

Let us not forget that Europe began in Greece. The idea of the European continent as a cultural unity dates back to ancient Greece in more ways than one.

For a start, the Hellenes were the first people to define themselves as “western” as opposed to “eastern”. The separate city states of ancient Greece found a collective unity and sense of common nationhood at war with the Persian empire, and the classical heights of Greek culture were saturated with this sense of nationhood. The Parthenon that floats gloriously above modern Athens (while the best collection of sculpture from its frieze and pediments can be seen in the British Museum) was built as a symbol of Athenian and Hellenic resurrection after the Persian army razed the buildings that previously stood on the fortified sacred hill, the Acropolis.

In the years after 2001 when some spoke of a “culture war” between the west and Islam, it was fashionable to pick over this ancient Greek construction of an early European identity in opposition to the east. Certainly it recurs in the history of modern Greece, whose nationalism goes back to the war against Turkish rule in which Byron gave his life to the Greek cause and Delacroix lent his vivid imagination.

But western self-deconstruction can go too far. Ancient Greece really was different from the states and cultures that surrounded it, and its achievements defined a specifically European way of seeing the world. Greek literary and artistic forms would shape Europe in a way they did not shape other continents. The nude in art, for example, would be as central to the Renaissance as it was to ancient Athens. Even the mythology of Greece, and its gods, would survive the rise of Christianity to decorate Europe’s palaces. Tragic drama would survive and flourish, from Sophocles to Shakespeare.

Europeans have rediscovered their Greek legacy again and again, from Marsilio Ficino translating Plato to Picasso confronting the Minotaur.

Now that Greece is vilified, its attempt to reassert the democracy that is such a proud creation of ancient Athens is damned as a threat to the eurozone, and a great history of Hellenic Europe is reduced to repeated – and increasingly real – references to an economic “Greek tragedy”.

What sad times are these.”

Jonathan Jones, English journalist and art critic, And Greece created Europe: the cultural legacy of a nation in crisis, Guardian, 3 Nov 2011

Oct
25th
Tue
permalink

Iain McGilchrist on The Divided Brain and the Making of the Western World

                             

"Just as the human body represents a whole museum of organs, with a long evolutionary history behind them, so we should expect the mind to be organized in a similar way. (…) We receive along with our body a highly differentiated brain which brings with it its entire history, and when it becomes creative it creates out of this history – out of the history of mankind (…) that age-old natural history which has been transmitted in living form since the remotest times, namely the history of the brain structure."

Carl Jung cited in The Master and His Emissary, Yale University Press, 2009, p.8.

Renowned psychiatrist and writer Iain McGilchrist explains how the ‘divided brain’ has profoundly altered human behaviour, culture and society. He draws on a vast body of recent experimental brain research to reveal that the differences between the brain’s two hemispheres are profound.

The left hemisphere is detail-oriented, prefers mechanisms to living things, and is inclined to self-interest. It misunderstands whatever is not explicit, lacks empathy and is unreasonably certain of itself, whereas the right hemisphere has greater breadth, flexibility and generosity, but lacks certainty.

It is vital that the two hemispheres work together, but McGilchrist argues that the left hemisphere is increasingly taking precedence in the modern world, resulting in a society where a rigid and bureaucratic obsession with structure and self-interest hold sway.

RSA, 17th Nov 2010

Iain McGilchrist points out that the idea that “reason [is] in the left hemisphere and something like creativity and emotion [are] in the right hemisphere” is an unhelpful misconception. He states that “every single brain function is carried out by both hemispheres. Reason and emotion and imagination depend on the coming together of what both hemispheres contribute.” Nevertheless he does see an obvious dichotomy, and asks himself: “if the brain is all about making connections, why is it that it’s evolved with this whopping divide down the middle?”

Natasha Mitchell, "The Master and his Emissary: the divided brain and the reshaping of Western civilisation", 19 June 2010

      

"The author holds instead that each of the hemispheres of the brain has a different “take” on the world or produces a different “version” of the world, though under normal circumstances these work together. This, he says, is basically to do with attention. He illustrates this with the case of chicks which use the eye connected to the left hemisphere to attend to the fine detail of picking seeds from amongst grit, whilst the other eye attends to the broader threat from predators. According to the author, “The left hemisphere has its own agenda, to manipulate and use the world”; its world view is essentially that of a mechanism. The right has a broader outlook, “has no preconceptions, and simply looks out to the world for whatever might be. In other words it does not have any allegiance to any particular set of values.”

Staff, "Two worlds of the left and right brain (audio podcast)", BBC Radio 4, 14 November 2009

McGilchrist explains this more fully in a later interview for ABC Radio National’s All in the Mind programme, stating: “The right hemisphere sees a great deal, but in order to refine it, and to make sense of it in certain ways—in order to be able to use what it understands of the world and to be able to manipulate the world—it needs to delegate the job of simplifying it and turning it into a usable form to another part of the brain” [the left hemisphere]. Though he sees this as an essential “double act”, McGilchrist points to the problem that the left hemisphere has a “narrow, decontextualised and theoretically based model of the world which is self consistent and is therefore quite powerful” and to the problem of the left hemisphere’s lack of awareness of its own shortcomings; whilst in contrast, the right hemisphere is aware that it is in a symbiotic relationship.

How the brain has shaped our world

"The author describes the evolution of Western culture, as influenced by hemispheric brain functioning, from the ancient world, through the Renaissance and Reformation; the Enlightenment; Romanticism and Industrial Revolution; to the modern and postmodern worlds which, to our detriment, are becoming increasingly dominated by the left brain. According to McGilchrist, interviewed for ABC Radio National’s All in the Mind programme, rather than seeking to explain the social and cultural changes and structure of civilisation in terms of the brain — which would be reductionist — he is pointing to a wider, more inclusive perspective and greater reality in which there are two competing ways of thinking and being, and that in modern Western society we appear increasingly to be able to only entertain one viewpoint: that of the left hemisphere.

The author argues that the brain and the mind do not simply experience the world, but that the world we experience is a product or meeting of that which is outside us with our mind. The outcome, the nature of this world, is thus dependent upon “which mode of attention we bring to bear on the world”.

McGilchrist sees an occasional flowering of "the best of the right hemisphere and the best of the left hemisphere working together" in our history: as witnessed in Athens in the 6th century BC by activity in the humanities and in science and in ancient Rome during the Augustan era. However, he also sees that as time passes, the left hemisphere once again comes to dominate affairs and things slide back into “a more theoretical and conceptualised abstracted bureaucratic sort of view of the world”. According to McGilchrist, the cooperative use of both left and right hemispheres diminished and became imbalanced in favour of the left in the time of the classical Greek philosophers Parmenides and Plato and in the late classical Roman era. This cooperation and openness were regained during the Renaissance 1,000 years later, which brought a “sudden efflorescence of creative life in the sciences and the arts”. However, with the Reformation, the early Enlightenment, and what has followed as rationalism has arisen, our world has once again become increasingly rigid, simplified and rule-bound.

Looking at more recent Western history, McGilchrist sees in the Industrial Revolution that for the first time artefacts were being made “very much to the way the left hemisphere sees the world — simple solids that are regular, repeated, not individual in the way that things that are made by hand are” and that a transformation of the environment in a similar vein followed on from that; that what was perceived inwardly was projected outwardly on a mass scale. The author argues that the scientific materialism which developed in the 19th century is still with us, at least in the biological sciences, though he sees physics as having moved on.

McGilchrist does not see modernism and postmodernism as being in opposition to this, but also “symptomatic of a shift towards the left hemisphere’s conception of the world”, taking the idea that there is no absolute truth and turning that into “there is no truth at all”, and he finds some of the movements’ works of art “symptomatic of people whose right hemisphere is not working very well.” McGilchrist cites the American psychologist Louis Sass, author of Madness and Modernism, pointing out that Sass “draws extensive parallels between the phenomena of modernism and postmodernism and of schizophrenia”, with things taken out of context and fragmented.”

The Master and His Emissary, Wiki

The Master and His Emissary

Whatever the relationship between consciousness and the brain – unless the brain plays no role in bringing the world as we experience it into being, a position that must have few adherents – its structure has to be significant. It might even give us clues to understanding the structure of the world it mediates, the world we know. So, to ask a very simple question, why is the brain so clearly and profoundly divided? Why, for that matter, are the two cerebral hemispheres asymmetrical? Do they really differ in any important sense? If so, in what way? (…)

Enthusiasm for finding the key to hemisphere differences has waned, and it is no longer respectable for a neuroscientist to hypothesise on the subject. (…)

These beliefs could, without much violence to the facts, be characterised as versions of the idea that the left hemisphere is somehow gritty, rational, realistic but dull, and the right hemisphere airy-fairy and impressionistic, but creative and exciting; a formulation reminiscent of Sellar and Yeatman’s immortal distinction (in their parody of English history teaching, 1066 and All That) between the Roundheads – ‘Right and Repulsive’ – and the Cavaliers – ‘Wrong but Wromantic’. In reality, both hemispheres are crucially involved in reason, just as they are in language; both hemispheres play their part in creativity. Perhaps the most absurd of these popular misconceptions is that the left hemisphere, hard-nosed and logical, is somehow male, and the right hemisphere, dreamy and sensitive, is somehow female. (…)

V. S. Ramachandran, another well-known and highly regarded neuroscientist, accepts that the issue of hemisphere difference has been traduced, but concludes: ‘The existence of such a pop culture shouldn’t cloud the main issue – the notion that the two hemispheres may indeed be specialised for different functions. (…)

I believe there is, literally, a world of difference between the hemispheres. Understanding quite what that is has involved a journey through many apparently unrelated areas: not just neurology and psychology, but philosophy, literature and the arts, and even, to some extent, archaeology and anthropology. (…)

I have come to believe that the cerebral hemispheres differ in ways that have meaning. There is a plethora of well-substantiated findings that indicate that there are consistent differences – neuropsychological, anatomical, physiological and chemical, amongst others – between the hemispheres. But when I talk of ‘meaning’, it is not just that I believe there to be a coherent pattern to these differences. That is a necessary first step. I would go further, however, and suggest that such a coherent pattern of differences helps to explain aspects of human experience, and therefore means something in terms of our lives, and even helps explain the trajectory of our common lives in the Western world.

My thesis is that for us as human beings there are two fundamentally opposed realities, two different modes of experience; that each is of ultimate importance in bringing about the recognisably human world; and that their difference is rooted in the bihemispheric structure of the brain. It follows that the hemispheres need to co-operate, but I believe they are in fact involved in a sort of power struggle, and that this explains many aspects of contemporary Western culture. (…)

The brain has evolved, like the body in which it sits, and is in the process of evolving. But the evolution of the brain is different from the evolution of the body. In the brain, unlike in most other human organs, later developments do not so much replace earlier ones as add to, and build on top of, them. Thus the cortex, the outer shell that mediates most so-called higher functions of the brain, and certainly those of which we are conscious, arose out of the underlying subcortical structures which are concerned with biological regulation at an unconscious level; and the frontal lobes, the most recently evolved part of the neocortex, which occupy a much bigger part of the brain in humans than in our animal relatives, and which grow forwards from and ‘on top of ’ the rest of the cortex, mediate most of the sophisticated activities that mark us out as human – planning, decision making, perspective taking, self-control, and so on. In other words, the structure of the brain reflects its history: as an evolving dynamic system, in which one part evolves out of, and in response to, another. (…)

If there is after all coherence to the way in which the correlates of our experience are grouped and organised in the brain, and we can see these ‘functions’ forming intelligible wholes, corresponding to areas of experience, and see how they relate to one another at the brain level, this casts some light on the structure and experience of our mental world. In this sense the brain is – in fact it has to be – a metaphor of the world. (…)

I believe that there are two fundamentally opposed realities rooted in the bihemispheric structure of the brain. But the relationship between them is no more symmetrical than that of the chambers of the heart – in fact, less so; more like that of the artist to the critic, or a king to his counsellor.

There is a story in Nietzsche that goes something like this. There was once a wise spiritual master, who was the ruler of a small but prosperous domain, and who was known for his selfless devotion to his people. As his people flourished and grew in number, the bounds of this small domain spread; and with it the need to trust implicitly the emissaries he sent to ensure the safety of its ever more distant parts. It was not just that it was impossible for him personally to order all that needed to be dealt with: as he wisely saw, he needed to keep his distance from, and remain ignorant of, such concerns. And so he nurtured and trained carefully his emissaries, in order that they could be trusted. Eventually, however, his cleverest and most ambitious vizier, the one he most trusted to do his work, began to see himself as the master, and used his position to advance his own wealth and influence. He saw his master’s temperance and forbearance as weakness, not wisdom, and on his missions on the master’s behalf, adopted his mantle as his own – the emissary became contemptuous of his master. And so it came about that the master was usurped, the people were duped, the domain became a tyranny; and eventually it collapsed in ruins.

The meaning of this story is as old as humanity, and resonates far from the sphere of political history. I believe, in fact, that it helps us understand something taking place inside ourselves, inside our very brains, and played out in the cultural history of the West, particularly over the last 500 years or so. (…)

I hold that, like the Master and his emissary in the story, though the cerebral hemispheres should co-operate, they have for some time been in a state of conflict. The subsequent battles between them are recorded in the history of philosophy, and played out in the seismic shifts that characterise the history of Western culture. At present the domain – our civilisation – finds itself in the hands of the vizier, who, however gifted, is effectively an ambitious regional bureaucrat with his own interests at heart. Meanwhile the Master, the one whose wisdom gave the people peace and security, is led away in chains. The Master is betrayed by his emissary.”

Iain McGilchrist, psychiatrist and writer, The Master and His Emissary, Yale University Press, 2009. Illustrations: 1), 2) Shalmor Avnon Amichay/Y&R Interactive

Iain McGilchrist: The Divided Brain | RSA animated

RSA, 17th Nov 2010

See also:

☞ Iain McGilchrist, The Battle Between the Brain’s Left and Right Hemispheres, WSJ.com, Jan 2, 2010
David Eagleman on how we construct reality, time perception, and The Secret Lives of the Brain
Dean Buonomano on ‘Brain Bugs’ - Cognitive Flaws That ‘Shape Our Lives’
Timothy D. Wilson on The Social Psychological Narrative: ‘It’s not the objective environment that influences people, but their constructs of the world’
Mind and Brain tag on Lapidarium notes

Sep
19th
Mon
permalink

Steven Pinker on the History and decline of Violence

 

                                          Raphael, The Judgment of Solomon, (1518)

"Drawing on the work of the archaeologist Lawrence Keeley, Steven Pinker recently concluded that the chance of our ancient hunter-gatherer ancestors meeting a bloody end was somewhere between 15% and 60%. In the 20th century, which included two world wars and the mass killers Stalin and Hitler, the likelihood of a European or American dying a violent death was less than 1%.

Pinker shows that, with notable exceptions, the long-term trend for murder and violence has been going down since humans first developed agriculture 10,000 years ago. And it has dropped steeply since the Middle Ages. It may come as a surprise to fans of Inspector Morse but Oxford in the 1300s, Pinker tells us, was 110 times more murderous than it is today. With a nod to the German sociologist Norbert Elias, Pinker calls this movement away from killing the “civilising process”.”

Andrew Anthony, journalist, author, Steven Pinker: the optimistic voice of science, The Observer, 18 Sept 2011

"In sixteenth-century Paris, a popular form of entertainment was cat-burning, in which a cat was hoisted in a sling on a stage and slowly lowered into a fire. According to historian Norman Davies, "The spectators, including kings and queens, shrieked with laughter as the animals, howling with pain, were singed, roasted, and finally carbonized." Today, such sadism would be unthinkable in most of the world. This change in sensibilities is just one example of perhaps the most important and most underappreciated trend in the human saga: Violence has been in decline over long stretches of history, and today we are probably living in the most peaceful moment of our species’ time on earth.

In the decade of Darfur and Iraq, and shortly after the century of Stalin, Hitler, and Mao, the claim that violence has been diminishing may seem somewhere between hallucinatory and obscene. Yet recent studies that seek to quantify the historical ebb and flow of violence point to exactly that conclusion.

Some of the evidence has been under our nose all along. Conventional history has long shown that, in many ways, we have been getting kinder and gentler. Cruelty as entertainment, human sacrifice to indulge superstition, slavery as a labor-saving device, conquest as the mission statement of government, genocide as a means of acquiring real estate, torture and mutilation as routine punishment, the death penalty for misdemeanors and differences of opinion, assassination as the mechanism of political succession, rape as the spoils of war, pogroms as outlets for frustration, homicide as the major form of conflict resolution—all were unexceptionable features of life for most of human history. But, today, they are rare to nonexistent in the West, far less common elsewhere than they used to be, concealed when they do occur, and widely condemned when they are brought to light. (…)

The decline of violence is a fractal phenomenon, visible at the scale of millennia, centuries, decades, and years. It applies over several orders of magnitude of violence, from genocide to war to rioting to homicide to the treatment of children and animals. And it appears to be a worldwide trend, though not a homogeneous one. The leading edge has been in Western societies, especially England and Holland, and there seems to have been a tipping point at the onset of the Age of Reason in the early seventeenth century.

At the widest-angle view, one can see a whopping difference across the millennia that separate us from our pre-state ancestors. Contra leftist anthropologists who celebrate the noble savage, quantitative body-counts—such as the proportion of prehistoric skeletons with axemarks and embedded arrowheads or the proportion of men in a contemporary foraging tribe who die at the hands of other men—suggest that pre-state societies were far more violent than our own.

It is true that raids and battles killed a tiny percentage of the numbers that die in modern warfare. But, in tribal violence, the clashes are more frequent, the percentage of men in the population who fight is greater, and the rates of death per battle are higher. According to anthropologists like Lawrence Keeley, Stephen LeBlanc, Phillip Walker, and Bruce Knauft, these factors combine to yield population-wide rates of death in tribal warfare that dwarf those of modern times. If the wars of the twentieth century had killed the same proportion of the population that die in the wars of a typical tribal society, there would have been two billion deaths, not 100 million.

Political correctness from the other end of the ideological spectrum has also distorted many people’s conception of violence in early civilizations—namely, those featured in the Bible. This supposed source of moral values contains many celebrations of genocide, in which the Hebrews, egged on by God, slaughter every last resident of an invaded city. The Bible also prescribes death by stoning as the penalty for a long list of nonviolent infractions, including idolatry, blasphemy, homosexuality, adultery, disrespecting one’s parents, and picking up sticks on the Sabbath. The Hebrews, of course, were no more murderous than other tribes; one also finds frequent boasts of torture and genocide in the early histories of the Hindus, Christians, Muslims, and Chinese.

At the century scale, it is hard to find quantitative studies of deaths in warfare spanning medieval and modern times. Several historians have suggested that there has been an increase in the number of recorded wars across the centuries to the present, but, as political scientist James Payne has noted, this may show only that “the Associated Press is a more comprehensive source of information about battles around the world than were sixteenth-century monks.” Social histories of the West provide evidence of numerous barbaric practices that became obsolete in the last five centuries, such as slavery, amputation, blinding, branding, flaying, disembowelment, burning at the stake, breaking on the wheel, and so on. Meanwhile, for another kind of violence—homicide—the data are abundant and striking.

The criminologist Manuel Eisner has assembled hundreds of homicide estimates from Western European localities that kept records at some point between 1200 and the mid-1990s. In every country he analyzed, murder rates declined steeply—for example, from 24 homicides per 100,000 Englishmen in the fourteenth century to 0.6 per 100,000 by the early 1960s.

On the scale of decades, comprehensive data again paint a shockingly happy picture: Global violence has fallen steadily since the middle of the twentieth century. According to the Human Security Brief 2006, the number of battle deaths in interstate wars has declined from more than 65,000 per year in the 1950s to less than 2,000 per year in this decade. In Western Europe and the Americas, the second half of the century saw a steep decline in the number of wars, military coups, and deadly ethnic riots.

Zooming in by a further power of ten exposes yet another reduction. After the cold war, every part of the world saw a steep drop-off in state-based conflicts, and those that do occur are more likely to end in negotiated settlements rather than being fought to the bitter end. Meanwhile, according to political scientist Barbara Harff, between 1989 and 2005 the number of campaigns of mass killing of civilians decreased by 90 percent.

The decline of killing and cruelty poses several challenges to our ability to make sense of the world. To begin with, how could so many people be so wrong about something so important? Partly, it’s because of a cognitive illusion: We estimate the probability of an event from how easy it is to recall examples. Scenes of carnage are more likely to be relayed to our living rooms and burned into our memories than footage of people dying of old age. Partly, it’s an intellectual culture that is loath to admit that there could be anything good about the institutions of civilization and Western society. Partly, it’s the incentive structure of the activism and opinion markets: No one ever attracted followers and donations by announcing that things keep getting better. And part of the explanation lies in the phenomenon itself. The decline of violent behavior has been paralleled by a decline in attitudes that tolerate or glorify violence, and often the attitudes are in the lead. As deplorable as they are, the abuses at Abu Ghraib and the lethal injections of a few murderers in Texas are mild by the standards of atrocities in human history. But, from a contemporary vantage point, we see them as signs of how low our behavior can sink, not of how high our standards have risen.

The other major challenge posed by the decline of violence is how to explain it. A force that pushes in the same direction across many epochs, continents, and scales of social organization mocks our standard tools of causal explanation. The usual suspects—guns, drugs, the press, American culture—aren’t nearly up to the job. Nor could it possibly be explained by evolution in the biologist’s sense: Even if the meek could inherit the earth, natural selection could not favor the genes for meekness quickly enough. In any case, human nature has not changed so much as to have lost its taste for violence. Social psychologists find that at least 80 percent of people have fantasized about killing someone they don’t like. And modern humans still take pleasure in viewing violence, if we are to judge by the popularity of murder mysteries, Shakespearean dramas, Mel Gibson movies, video games, and hockey.

What has changed, of course, is people’s willingness to act on these fantasies. The sociologist Norbert Elias suggested that European modernity accelerated a “civilizing process” marked by increases in self-control, long-term planning, and sensitivity to the thoughts and feelings of others. These are precisely the functions that today’s cognitive neuroscientists attribute to the prefrontal cortex. But this only raises the question of why humans have increasingly exercised that part of their brains. No one knows why our behavior has come under the control of the better angels of our nature, but there are four plausible suggestions.

The first is that Thomas Hobbes got it right. Life in a state of nature is nasty, brutish, and short, not because of a primal thirst for blood but because of the inescapable logic of anarchy. Any beings with a modicum of self-interest may be tempted to invade their neighbors to steal their resources. The resulting fear of attack will tempt the neighbors to strike first in preemptive self-defense, which will in turn tempt the first group to strike against them preemptively, and so on. This danger can be defused by a policy of deterrence—don’t strike first, retaliate if struck—but, to guarantee its credibility, parties must avenge all insults and settle all scores, leading to cycles of bloody vendetta. These tragedies can be averted by a state with a monopoly on violence, because it can inflict disinterested penalties that eliminate the incentives for aggression, thereby defusing anxieties about preemptive attack and obviating the need to maintain a hair-trigger propensity for retaliation. Indeed, Eisner and Elias attribute the decline in European homicide to the transition from knightly warrior societies to the centralized governments of early modernity. And, today, violence continues to fester in zones of anarchy, such as frontier regions, failed states, collapsed empires, and territories contested by mafias, gangs, and other dealers of contraband.

Payne suggests another possibility: that the critical variable in the indulgence of violence is an overarching sense that life is cheap. When pain and early death are everyday features of one’s own life, one feels fewer compunctions about inflicting them on others. As technology and economic efficiency lengthen and improve our lives, we place a higher value on life in general.

A third theory, championed by Robert Wright, invokes the logic of non-zero-sum games: scenarios in which two agents can each come out ahead if they cooperate, such as trading goods, dividing up labor, or sharing the peace dividend that comes from laying down their arms. As people acquire know-how that they can share cheaply with others and develop technologies that allow them to spread their goods and ideas over larger territories at lower cost, their incentive to cooperate steadily increases, because other people become more valuable alive than dead.

Then there is the scenario sketched by philosopher Peter Singer. Evolution, he suggests, bequeathed people a small kernel of empathy, which by default they apply only within a narrow circle of friends and relations. Over the millennia, people’s moral circles have expanded to encompass larger and larger polities: the clan, the tribe, the nation, both sexes, other races, and even animals. The circle may have been pushed outward by expanding networks of reciprocity, à la Wright, but it might also be inflated by the inexorable logic of the golden rule: The more one knows and thinks about other living things, the harder it is to privilege one’s own interests over theirs. The empathy escalator may also be powered by cosmopolitanism, in which journalism, memoir, and realistic fiction make the inner lives of other people, and the contingent nature of one’s own station, more palpable—the feeling that “there but for fortune go I”.

Whatever its causes, the decline of violence has profound implications. It is not a license for complacency: We enjoy the peace we find today because people in past generations were appalled by the violence in their time and worked to end it, and so we should work to end the appalling violence in our time. Nor is it necessarily grounds for optimism about the immediate future, since the world has never before had national leaders who combine pre-modern sensibilities with modern weapons.

But the phenomenon does force us to rethink our understanding of violence. Man’s inhumanity to man has long been a subject for moralization. With the knowledge that something has driven it dramatically down, we can also treat it as a matter of cause and effect. Instead of asking, “Why is there war?” we might ask, “Why is there peace?” From the likelihood that states will commit genocide to the way that people treat cats, we must have been doing something right. And it would be nice to know what, exactly, it is.”

Steven Pinker, Canadian-American experimental psychologist, cognitive scientist, linguist and popular science author, Harvard College Professor, A History Of Violence, Edge, March 27, 2007 (First published in The New Republic, March 19, 2007)

Steven Pinker on the myth of violence

Steven Pinker charts the decline of violence from Biblical times to the present, and argues that, though it may seem illogical and even obscene, given Iraq and Darfur, we are living in the most peaceful time in our species’ existence.

Steven Pinker on the myth of violence, TED.com, Mar 2007

See also:

☞ Steven Pinker, A History of Violence Edge Master Class 2011, Edge, Sept 27, 2011
☞ The Psychology of Violence - a fascinating look at a violent act and a modern rethink of the psychology of shame and honour in preventing it
☞ David Runciman, The Better Angels of Our Nature by Steven Pinker - review, The Guardian, Sept 22, 2011
☞ Violence tag on Lapidarium notes

Sep
4th
Sun
permalink

Neal Gabler on The Elusive Big Idea - ‘We are living in a post-idea world where bold ideas are almost passé’

Ideas just aren’t what they used to be. Once upon a time, they could ignite fires of debate, stimulate other thoughts, incite revolutions and fundamentally change the ways we look at and think about the world.

They could penetrate the general culture and make celebrities out of thinkers — notably Albert Einstein, but also Reinhold Niebuhr, Daniel Bell, Betty Friedan, Carl Sagan and Stephen Jay Gould, to name a few. The ideas themselves could even be made famous: for instance, for “the end of ideology,” “the medium is the message,” “the feminine mystique,” “the Big Bang theory,” “the end of history.” A big idea could capture the cover of Time — “Is God Dead?” — and intellectuals like Norman Mailer, William F. Buckley Jr. and Gore Vidal would even occasionally be invited to the couches of late-night talk shows. How long ago that was. (…)

If our ideas seem smaller nowadays, it’s not because we are dumber than our forebears but because we just don’t care as much about ideas as they did. In effect, we are living in an increasingly post-idea world — a world in which big, thought-provoking ideas that can’t instantly be monetized are of so little intrinsic value that fewer people are generating them and fewer outlets are disseminating them, the Internet notwithstanding. Bold ideas are almost passé.

It is no secret, especially here in America, that we live in a post-Enlightenment age in which rationality, science, evidence, logical argument and debate have lost the battle in many sectors, and perhaps even in society generally, to superstition, faith, opinion and orthodoxy. While we continue to make giant technological advances, we may be the first generation to have turned back the epochal clock — to have gone backward intellectually from advanced modes of thinking into old modes of belief. But post-Enlightenment and post-idea, while related, are not exactly the same.

Post-Enlightenment refers to a style of thinking that no longer deploys the techniques of rational thought. Post-idea refers to thinking that is no longer done, regardless of the style. (…)

There is the retreat in universities from the real world, and an encouragement of and reward for the narrowest specialization rather than for daring — for tending potted plants rather than planting forests.

There is the eclipse of the public intellectual in the general media by the pundit who substitutes outrageousness for thoughtfulness, and the concomitant decline of the essay in general-interest magazines. And there is the rise of an increasingly visual culture, especially among the young — a form in which ideas are more difficult to express. (…)

We live in the much vaunted Age of Information. Courtesy of the Internet, we seem to have immediate access to anything that anyone could ever want to know. We are certainly the most informed generation in history, at least quantitatively. There are trillions upon trillions of bytes out there in the ether — so much to gather and to think about.

And that’s just the point. In the past, we collected information not simply to know things. That was only the beginning. We also collected information to convert it into something larger than facts and ultimately more useful — into ideas that made sense of the information. We sought not just to apprehend the world but to truly comprehend it, which is the primary function of ideas. Great ideas explain the world and one another to us.

Marx pointed out the relationship between the means of production and our social and political systems. Freud taught us to explore our minds as a way of understanding our emotions and behaviors. Einstein rewrote physics. More recently, McLuhan theorized about the nature of modern communication and its effect on modern life. These ideas enabled us to get our minds around our existence and attempt to answer the big, daunting questions of our lives.

But if information was once grist for ideas, over the last decade it has become competition for them. We are like the farmer who has too much wheat to make flour. We are inundated with so much information that we wouldn’t have time to process it even if we wanted to, and most of us don’t want to.

The collection itself is exhausting: what each of our friends is doing at that particular moment and then the next moment and the next one; who Jennifer Aniston is dating right now; which video is going viral on YouTube this hour; what Princess Letizia or Kate Middleton is wearing that day. In effect, we are living within the nimbus of an informational Gresham’s law in which trivial information pushes out significant information, but it is also an ideational Gresham’s law in which information, trivial or not, pushes out ideas.

We prefer knowing to thinking because knowing has more immediate value. It keeps us in the loop, keeps us connected to our friends and our cohort. Ideas are too airy, too impractical, too much work for too little reward. Few talk ideas. Everyone talks information, usually personal information. Where are you going? What are you doing? Whom are you seeing? These are today’s big questions.

It is certainly no accident that the post-idea world has sprung up alongside the social networking world. Even though there are sites and blogs dedicated to ideas, Twitter, Facebook, Myspace, Flickr, etc., the most popular sites on the Web, are basically information exchanges, designed to feed the insatiable information hunger, though this is hardly the kind of information that generates ideas. It is largely useless except insofar as it makes the possessor of the information feel, well, informed. Of course, one could argue that these sites are no different than conversation was for previous generations, and that conversation seldom generated big ideas either, and one would be right. (…)

An artist friend of mine recently lamented that he felt the art world was adrift because there were no longer great critics like Harold Rosenberg and Clement Greenberg to provide theories of art that could fructify the art and energize it. Another friend made a similar argument about politics. While the parties debate how much to cut the budget, he wondered where were the John Rawls and Robert Nozick who could elevate our politics.

One could certainly make the same argument about economics, where John Maynard Keynes remains the center of debate nearly 80 years after propounding his theory of government pump priming. This isn’t to say that the successors of Rosenberg, Rawls and Keynes don’t exist, only that if they do, they are not likely to get traction in a culture that has so little use for ideas, especially big, exciting, dangerous ones, and that’s true whether the ideas come from academics or others who are not part of elite organizations and who challenge the conventional wisdom. All thinkers are victims of information glut, and the ideas of today’s thinkers are also victims of that glut.

But it is especially true of big thinkers in the social sciences like the cognitive psychologist Steven Pinker, who has theorized on everything from the source of language to the role of genetics in human nature, or the biologist Richard Dawkins, who has had big and controversial ideas on everything from selfishness to God, or the psychologist Jonathan Haidt, who has been analyzing different moral systems and drawing fascinating conclusions about the relationship of morality to political beliefs. But because they are scientists and empiricists rather than generalists in the humanities, the place from which ideas were customarily popularized, they suffer a double whammy: not only the whammy against ideas generally but the whammy against science, which is typically regarded in the media as mystifying at best, incomprehensible at worst. A generation ago, these men would have made their way into popular magazines and onto television screens. Now they are crowded out by informational effluvium.

No doubt there will be those who say that the big ideas have migrated to the marketplace, but there is a vast difference between profit-making inventions and intellectually challenging thoughts. Entrepreneurs have plenty of ideas, and some, like Steven P. Jobs of Apple, have come up with some brilliant ideas in the “inventional” sense of the word.

Still, while these ideas may change the way we live, they rarely transform the way we think. They are material, not ideational. It is thinkers who are in short supply, and the situation probably isn’t going to change anytime soon.

We have become information narcissists, so uninterested in anything outside ourselves and our friendship circles or in any tidbit we cannot share with those friends that if a Marx or a Nietzsche were suddenly to appear, blasting his ideas, no one would pay the slightest attention, certainly not the general media, which have learned to service our narcissism.

What the future portends is more and more information — Everests of it. There won’t be anything we won’t know. But there will be no one thinking about it.

Think about that.”

Neal Gabler, a professor, journalist, author, film critic and political commentator, The Elusive Big Idea, The New York Times, August 14, 2011.

See also:

☞ The Kaleidoscopic Discovery Engine. ‘All scientific discoveries are in principle “multiples”’
☞ Mark Pagel, Infinite Stupidity. Social evolution may have sculpted us not to be innovators and creators as much as to be copiers, Edge, Lapidarium, Dec 16, 2011
☞ The Paradox of Contemporary Cultural History. We are clinging as never before to the familiar in matters of style and culture

Sep
3rd
Sat
permalink

Republic of Letters ☞ Exploring Correspondence and Intellectual Community in the Early Modern Period (1500-1800)


                                            The Republic of Letters

"Despite the wars and despite different religions. All the sciences, all the arts, thus received mutal assistance in this way: the academies formed this republic. (…) True scholars in each field drew closer the bonds of this great society of minds, spread everywhere and everywhere independent. This correspondence still remains; it is one of the consolations for the evils that ambition and politics spread across the Earth."

Voltaire, Le Siècle de Louis XIV cited in Dena Goodman, The Republic of letters: a cultural history of the French enlightenment, Cornell University Press, 1996, p. 20

Republic of Letters (Respublica literaria) is most commonly used to define intellectual communities in the late 17th and 18th centuries in Europe and America. It especially brought together the intellectuals of the Age of Enlightenment, or “philosophes,” as they were called in France. The Republic of Letters emerged in the 17th century as a self-proclaimed community of scholars and literary figures that stretched across national boundaries but respected differences in language and culture. These communities that transcended national boundaries formed the basis of a metaphysical Republic. (…)

As is evident from the term, the circulation of handwritten letters was necessary for its function because it enabled intellectuals to correspond with each other from great distances. All citizens of the 17th century Republic of Letters corresponded by letter, exchanged published papers and pamphlets, and considered it their duty to bring others into the Republic through the expansion of correspondence.” (Wiki)

"[They] organized itself around cultural institutions (e. g. museums, libraries, academies) and research projects that collected, sorted, and dispersed knowledge. A pre-disciplinary community in which most of the modern disciplines developed, it was the ancestor to a wide range of intellectual societies from the seventeenth-century salons and eighteenth-century coffeehouses to the scientific academy or learned society and the modern research university.

Forged in the humanist culture of learning that promoted the ancient ideal of the republic as the place for free and continuous exchange of knowledge, the Republic of Letters was simultaneously an imagined community (a scholar’s utopia where differences, in theory, would not matter), an information network, and a dynamic platform from which a wide variety of intellectual projects – many of them with important ramifications for society, politics, and religion – were proposed, vetted, and executed. (…)

The Republic of Letters existed for almost four hundred years. Its scope encompassed all of Europe, but reached well beyond this region as western Europeans had more regular contact with and presence in Russia, Asia, Africa, and the Americas. In the sixteenth and seventeenth century merchants and missionaries helped to create global information networks and colonial outposts that transformed the geography of the Republic of Letters. By the eighteenth century we can speak of a trans-Atlantic republic of letters shaped by central figures such as Franklin and many others, north and south, who wrote and traveled across the Atlantic.”

"Recent scholarship has established that intellectuals across Europe came to see themselves, in the sixteenth, seventeenth and eighteenth centuries, as citizens of a transnational intellectual society—a Republic of Letters in which speech was free, rank depended on ability and achievement rather than birth, and scholars, philosophers and scientists could find common ground in intellectual inquiry even if they followed different faiths and belonged to different nations.”

— Anthony Grafton, Republic of Letters introduction, Stanford University

Republic of Letters Project

         
                                          (click image to explore)

Researchers map thousands of letters exchanged in the 18th century’s “Republic of Letters” and learn at a glance what it once took a lifetime of study to comprehend.

Mapping the Republic of Letters, Stanford University

See also:

☞ Dena Goodman, The Republic of letters: a cultural history of the French enlightenment, Cornell University Press, 1996
☞ April Shelford, Transforming the republic of letters: Pierre-Daniel Huet and European intellectual life, 1650-1720, University Rochester Press, 2007
☞ New social media? Same old, same old, say Stanford experts, Stanford University News, Nov 2, 2011.
☞ Cynthia Haven, Hot new social media maybe not so new: plus ça change, plus c’est la même chose, Stanford University The Book Haven, Nov 2, 2011