What Neuroscience Tells Us About Morality: 'Morality is a form of decision-making, and is based on emotions, not logic'


“Morality is not the product of a mythical pure reason divorced from natural selection and the neural wiring that motivates the animal to sociability. It emerges from the human brain and its responses to real human needs, desires, and social experience; it depends on innate emotional responses, on reward circuitry that allows pleasure and fear to be associated with certain conditions, on cortical networks, hormones and neuropeptides. Its cognitive underpinnings owe more to case-based reasoning than to conformity to rules.”

Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in John Bickle, The Oxford Handbook of Philosophy and Neuroscience, Chapter 16 "Inference to the best decision", Oxford Handbooks, 2009, p.419.

"Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind."

Patricia Smith Churchland, introductory message at her homepage at the University of California, San Diego.

"Morality is a form of decision-making, and is based on emotions, not logic."

Jonah Lehrer, cited in Delancey Place, 2009

"Philosophers must take account of neuroscience in their investigations.

While [Patricia S.] Churchland's intellectual opponents over the years have suggested that you can understand the “software” of thinking, independently of the “hardware”—the brain structure and neuronal firings—that produced it, she has responded that this metaphor doesn't work with the brain: Hardware and software are intertwined to such an extent that all philosophy must be “neurophilosophy.” There’s no other way.

Churchland, professor emerita of philosophy at the University of California at San Diego, has been best known for her work on the nature of consciousness. But now, with a new book, Braintrust: What Neuroscience Tells Us About Morality (Princeton University Press), she is taking her perspective into fresh terrain: ethics. And the story she tells about morality is, as you’d expect, heavily biological, emphasizing the role of the peptide oxytocin, as well as related neurochemicals.

Oxytocin’s primary purpose appears to be in solidifying the bond between mother and infant, but Churchland argues—drawing on the work of biologists—that there are significant spillover effects: Bonds of empathy lubricated by oxytocin expand to include, first, more distant kin and then other members of one’s in-group. (Another neurochemical, arginine vasopressin, plays a related role, as do endogenous opiates, which reinforce the appeal of cooperation by making it feel good.)

The biological picture contains other elements, of course, notably our large prefrontal cortexes, which help us to take stock of situations in ways that lower animals, driven by “fight or flight” impulses, cannot. But oxytocin and its cousin-compounds ground the human capacity for empathy. (When she learned of oxytocin’s power, Churchland writes in Braintrust, she thought: “This, perhaps, Hume might accept as the germ of ‘moral sentiment.’”)

From there, culture and society begin to make their presence felt, shaping larger moral systems: tit-for-tat retaliation helps keep freeloaders and abusers of empathic understanding in line. Adults pass along the rules for acceptable behavior—which is not to say “just” behavior, in any transcendent sense—to their children. Institutional structures arise to enforce norms among strangers within a culture, who can’t be expected to automatically trust each other.

These rules and institutions, crucially, will vary from place to place, and over time. “Some cultures accept infanticide for the disabled or unwanted,” she writes, without judgment. “Others consider it morally abhorrent; some consider a mouthful of the killed enemy’s flesh a requirement for a courageous warrior, others consider it barbaric.”

Hers is a bottom-up, biological story, but, in her telling, it also has implications for ethical theory. Morality turns out to be not a quest for overarching principles but rather a process and practice not very different from negotiating our way through day-to-day social life. Brain scans, she points out, show little to no difference between how the brain works when solving social problems and how it works when solving ethical dilemmas. (…)

Her biological story fits, [Churchland] thinks, with Aristotle’s argument that morality is not about rule-making but instead about the cultivation of moral sentiment through experience, training, and the following of role models. The biological story also confirms, she thinks, David Hume’s assertion that reason and the emotions cannot be disentangled. This view stands in sharp contrast to those philosophers who argue that instinctual reactions must be scrutinized by reason. The villains of her books are philosophical system-builders—whether that means Jeremy Bentham, with his ideas about maximizing aggregate utility (“the greatest good for the greatest number”), or Immanuel Kant, with his categorical imperatives (never lie!), or John Rawls, erector of A Theory of Justice.

Churchland thinks the search for what she invariably calls “exceptionless rules” has deformed modern moral philosophy. “There have been a lot of interesting attempts, and interesting insights, but the target is like perpetual youth or a perpetual-motion machine. You’re not going to find an exceptionless rule,” she says. “What seems more likely is that there is a basic platform that people share and that things shape themselves based on that platform, and based on ecology, and on certain needs and certain traditions.”

The upshot of that approach? “Sometimes there isn’t an answer in the moral domain, and sometimes we have to agree to disagree, and come together and arrive at a good solution about what we will live with.”

Owen Flanagan Jr., a professor of philosophy and neurobiology at Duke University and a friend of Churchland’s, adds, “There’s a long tradition in philosophy that morality is based on rule-following, or on intuitions that only specially positioned people can have. One of her main points is that that is just a completely wrong picture of the genealogical or descriptive story. The first thing to do is to emphasize our continuity with the animals.” In fact, Churchland believes that primates and even some birds have a moral sense, as she defines it, because they, too, are social problem-solvers.

Recognizing our continuity with a specific species of animal was a turning point in her thinking about morality, in recognizing that it could be tied to the hard and fast. “It all changed when I learned about the prairie voles,” she says—surely not a phrase John Rawls ever uttered.

She told the story at the natural-history museum, in late March. Montane voles and prairie voles are so similar “that naifs like me can’t tell them apart,” she told a standing-room-only audience (younger and hipper than the museum’s usual patrons—the word “neuroscience” these days is like catnip). But prairie voles mate for life, and montane voles do not. Among prairie voles, the males not only share parenting duties, they will even lick and nurture pups that aren’t their own. By contrast, male montane voles do not actively parent even their own offspring. What accounts for the difference? Researchers have found that the prairie voles, the sociable ones, have greater numbers of oxytocin receptors in certain regions of the brain. (And prairie voles that have had their oxytocin receptors blocked will not pair-bond.)

"As a philosopher, I was stunned," Churchland said, archly. "I thought that monogamous pair-bonding was something one determined for oneself, with a high level of consideration and maybe some Kantian reasoning thrown in. It turns out it is mediated by biology in a very real way.”

The biologist Sue Carter, now at the University of Illinois at Chicago, did some of the seminal work on voles, but oxytocin research on humans is now extensive as well. In a study of subjects playing a lab-based cooperative game in which the greatest benefits to two players would come if the first (the “investor”) gave a significant amount of money to the second (the “trustee”), subjects who had oxytocin sprayed into their noses donated more than twice as often as a control group, giving nearly one-fifth more each time.

Paul Zak, an economist at Claremont Graduate University, was an author of that study, as well as others that Churchland cites. He is working on a book called “The Moral Molecule” and describes himself as “in exactly the same camp” as Churchland.

“Oxytocin works on the level of emotion,” he says. “You just get the feeling of right and wrong. It is less precise than a Kantian system, but it’s consistent with our evolved physiology as social creatures.”

The City University of New York Graduate Center philosopher Jesse Prinz, who appeared with Churchland at a Columbia University event the night after her museum lecture, has mostly praise for Churchland’s latest offering. “If you look at a lot of the work that’s been done on scientific approaches to morality—books written for a lay audience—it’s been about evolutionary psychology. And what we get again and again is a story about the importance of evolved tendencies to be altruistic. That’s a report on a particular pattern of behavior, and an evolutionary story to explain the behavior. But it’s not an account of the underlying mechanism. The idea that science has moved to a point where we can see two animals working together toward a collective end and know the brain mechanism that allows that is an extraordinary achievement.”

Nevertheless, he says, how to move from the possibility of collective action to “the specific human institution of moral rules is a bit of connective tissue that she isn’t giving us.”

Indeed, that’s one of the most striking aspects of Braintrust. After Churchland establishes the existence of a platform for moral decision-making, she describes the process through which moral decisions come to be made, but she says little about their content—why one path might be better than another. She offers the following description of a typical “moral” scenario. A farmer sees a deer breaching his neighbor’s fence and eating his apples while the neighbor is away. The farmer will not consult a Kantian rule book before deciding whether to help, she writes, but instead will weigh an array of factors: Would I want my neighbor to help me? Does my culture find such assistance praiseworthy or condescending? Am I faced with any pressing emergencies on my own farm? Churchland describes this process of moral decision-making as being driven by “constraint satisfaction.”

"What exactly constraint satisfaction is in neurobiological terms we do not yet understand,” she writes, “but roughly speaking it involves various factors with various weights and probabilities interacting so as to produce a suitable solution to a question.”

"Various" factors with "various" weights? Is that not a little vague? But Duke’s Owen Flanagan Jr. defends this highly pragmatic view of morality. "Where we get a lot of pushback from philosophers is that they’ll say, ‘If you go this naturalistic route that Flanagan and Churchland go, then you make ethics merely a theory of prudence.’ And the answer is, Yeah, you kind of do that. Morality doesn’t become any different than deciding what kind of bridge to build across a river. The reason we both think it makes sense is that the other stories”—that morality comes from God, or from philosophical intuition—”are just so implausible.”

Flanagan also thinks Churchland’s approach leads to a “more democratic” morality. "It’s ordinary people discussing the best thing to do in a given situation, given all the best information available at the moment." Churchland herself often underscores that democratic impulse, drawing on her own biography. She grew up on a farm, in the Okanagan Valley, in British Columbia. Speaking of her onetime neighbors, she says: "I got as much wisdom from some of those old farmers as I ever got from a seminar on moral philosophy.”

If building a bridge is the topic up for discussion, however, one can assume that most people think getting across the water is a sound idea. Yet mainstream philosophers object that such a sense of shared purpose cannot always be assumed in moral questions—and that therefore the analogy fails. (…)

[Oxford philosopher Guy] Kahane says the complexity of human life demands a more intense and systematic analysis of moral questions than the average citizen might be capable of, at least if she’s limited to the basic tool kit of social skills.

Peter Railton, a philosophy professor at the University of Michigan at Ann Arbor, agrees. Our intuitions about how to get along with other people may have been shaped by our interactions within small groups (and between small groups). But we don’t live in small groups anymore, so we need some procedures through which we leverage our social skills into uncharted areas—and that is what the traditional academic philosophers, whom Churchland mostly rejects, work on. What are our obligations to future generations (concerning climate change, say)? What do we owe poor people on the other side of the globe (whom we might never have heard of, in our evolutionary past)?

For a more rudimentary example, consider that evolution quite likely trained us to treat “out groups” as our enemy. Philosophical argument, Railton says, can give reasons why members of the out-group are not, in fact, the malign and unusual creatures that we might instinctively think they are; we can thereby expand our circle of empathy.

Churchland’s response is that someone is indeed likely to have the insight that constant war against the out-group hurts both sides’ interests, but she thinks a politician, an economist, or a farmer-citizen is as likely to have that insight as a professional philosopher. (…)

But isn’t she, right there, sneaking in some moral principles that have nothing to do with oxytocin, namely the primacy of liberty over equality? In our interviews, she described [Peter] Singer’s worldview as, in an important sense, unnatural. Applying the same standard to distant foreigners as we do to our own kith and kin runs counter to our most fundamental biological impulses.

But Oxford’s Kahane offers a counterargument: “‘Are humans capable of utilitarianism?’ is not a question that is answered by neuroscience,” he says. “We just need to test if people are able to live like that. Science may explain whether it is common for us to do, but that’s very different from saying what our limits are.”

Indeed, Peter Singer lives (more or less) the way he preaches, and chapters of an organization called Giving What We Can, whose members pledge to give a large portion of their earnings to charity, have popped up on several campuses. “If I can prevent hundreds of people from dying while still having the things that make life meaningful to me, that strikes me as a good idea that doesn’t go against ‘paradigmatically good sense’ or anything,” says Nick Beckstead, a fourth-year graduate student in philosophy and a founder of the group’s Rutgers chapter.

Another target in Churchland’s book is Jonathan Haidt, the University of Virginia psychologist who thinks he has identified several universal “foundations” of moral thought: protection of society’s vulnerable; fairness; loyalty to the in-group; respect for authority; and the importance of purity (a sanitary concern that evolves into the cultural ideal of sanctity). That strikes her as a nice list, but no more—a random collection of moral qualities that isn’t at all rooted in biology. During her museum talk, she described Haidt’s theory as a classic just-so story. “Maybe in the ’70s, when evolutionary psychology was just becoming a thing, you could get away with saying”—here she adopted a flighty, sing-song voice—“‘It could have been, out there on the veldt, in Africa, 250,000 years ago that these were traits that were selected,’” she said. “But today you need evidence, actually.” (…)

The element of cultural relativism also remains somewhat mysterious in Churchland’s writings on morality. In some ways, her project dovetails with that of Sam Harris, the “New Atheist” (and neuroscience Ph.D.) who believes reason and neuroscience can replace woolly armchair philosophy and religion as guides to morality. But her defense of some practices of primitive tribes—including infanticide (in the context of scarcity), as well as the seizing of enemy women, in raids, to keep up the stock of mates—as “moral” within their own context seems the opposite of his approach.

I reminded Churchland, who has served on panels with Harris, that he likes to put academics on the spot by asking whether they think a practice like the early 19th-century Hindu tradition of burning widows on their husbands’ funeral pyres was objectively wrong.

So did she think so? First, she got irritated: “I don’t know why you’re asking that.” But, yes, she finally said, she does think that practice objectively wrong. “But frankly I don’t know enough about their values, and why they have that tradition, and I’m betting that Sam doesn’t either.”

"The example I like to use," she said, "rather than using an example from some other culture and just laughing at it, is the example from our own country, where it seems to me that the right to buy assault weapons really does not work for the well-being of most people. And I think that’s an objective matter."

At times, Churchland seems just to want to retreat from moral philosophical debate back to the pure science. “Really,” she said, “what I’m interested in is the biological platform. Then it’s an open question how we attack more complex problems of social life.”

— Christopher Shea writing about Patricia Smith Churchland, Canadian-American philosopher and neuroscientist noted for her contributions to neurophilosophy and the philosophy of mind, in “Rule Breaker,” The Chronicle of Higher Education, June 12, 2011. (Illustration: attributed to xkcd)

See also:

Jesse Prinz: Morality is a Culturally Conditioned Response
Sam Harris on the ‘selfish gene’ and moral behavior
Sam Harris on the moral formula: How facts inform our ethics
Morality tag on Lapidarium