Channel: Virtue Ethics

The Obligation to Endure



Rachel Louise Carson (1907-1964)

Originally published in Silent Spring (1962)

The history of life on earth has been a history of interaction between living things and their surroundings. To a large extent, the physical form and the habits of the earth's vegetation and its animal life have been molded by the environment. Considering the whole span of earthly time, the opposite effect, in which life actually modifies its surroundings, has been relatively slight. Only within the moment of time represented by the present century has one species—man—acquired significant power to alter the nature of his world.

During the past quarter century this power has not only increased to one of disturbing magnitude but it has changed in character. The most alarming of all man's assaults upon the environment is the contamination of air, earth, rivers, and sea with dangerous and even lethal materials. This pollution is for the most part irrecoverable; the chain of evil it initiates not only in the world that must support life but in living tissues is for the most part irreversible. In this now universal contamination of the environment, chemicals are the sinister and little-recognized partners of radiation in changing the very nature of the world—the very nature of its life. Strontium 90, released through nuclear explosions into the air, comes to the earth in rain or drifts down as fallout, lodges in soil, enters into the grass or corn or wheat grown there, and in time takes up its abode in the bones of a human being, there to remain until his death. Similarly, chemicals sprayed on croplands or forests or gardens lie long in the soil, entering into living organisms, passing from one to another in a chain of poisoning and death. Or they pass mysteriously by underground streams until they emerge and, through the alchemy of air and sunlight, combine into new forms that kill vegetation, sicken cattle, and work unknown harm on those who drink from once pure wells. As Albert Schweitzer has said, "Man can hardly even recognize the devils of his own creation."

It took hundreds of millions of years to produce the life that now inhabits the earth—eons of time in which that developing and evolving and diversifying life reached a state of adjustment and balance with its surroundings. The environment, rigorously shaping and directing the life it supported, contained elements that were hostile as well as supporting. Certain rocks gave out dangerous radiation; even within the light of the sun, from which all life draws its energy, there were short-wave radiations with power to injure. Given time—time not in years but in millennia—life adjusts, and a balance has been reached. For time is the essential ingredient; but in the modern world there is no time.

The rapidity of change and the speed with which new situations are created follow the impetuous and heedless pace of man rather than the deliberate pace of nature. Radiation is no longer merely the background radiation of rocks, the bombardment of cosmic rays, the ultraviolet of the sun that have existed before there was any life on earth; radiation is now the unnatural creation of man's tampering with the atom. The chemicals to which life is asked to make its adjustment are no longer merely the calcium and silica and copper and all the rest of the minerals washed out of the rocks and carried in rivers to the sea; they are the synthetic creations of man's inventive mind, brewed in his laboratories, and having no counterparts in nature.

To adjust to these chemicals would require time on the scale that is nature's; it would require not merely the years of a man's life but the life of generations. And even this, were it by some miracle possible, would be futile, for the new chemicals come from our laboratories in an endless stream; almost five hundred annually find their way into actual use in the United States alone. The figure is staggering and its implications are not easily grasped—500 new chemicals to which the bodies of men and animals are required somehow to adapt each year, chemicals totally outside the limits of biologic experience.

Among them are many that are used in man's war against nature. Since the mid-1940's over 200 basic chemicals have been created for use in killing insects, weeds, rodents, and other organisms described in the modern vernacular as "pests"; and they are sold under several thousand different brand names.

These sprays, dusts, and aerosols are now applied almost universally to farms, gardens, forests, and homes—nonselective chemicals that have the power to kill every insect, the "good" and the "bad," to still the song of birds and the leaping of fish in the streams, to coat the leaves with a deadly film, and to linger on in the soil—all this though the intended target may be only a few weeds or insects. Can anyone believe it is possible to lay down such a barrage of poisons on the surface of the earth without making it unfit for all life? They should not be called "insecticides," but "biocides."

The whole process of spraying seems caught up in an endless spiral. Since DDT was released for civilian use, a process of escalation has been going on in which ever more toxic materials must be found. This has happened because insects, in a triumphant vindication of Darwin's principle of the survival of the fittest, have evolved super races immune to the particular insecticide used, hence a deadlier one has always to be developed—and then a deadlier one than that. It has happened also because, for reasons to be described later, destructive insects often undergo a “flareback,” or resurgence, after spraying, in numbers greater than before. Thus the chemical war is never won, and all life is caught in its violent crossfire.

Along with the possibility of the extinction of mankind by nuclear war, the central problem of our age has therefore become the contamination of man’s total environment with such substances of incredible potential for harm – substances that accumulate in the tissues of plants and animals and even penetrate the germ cells to shatter or alter the very material of heredity upon which the shape of the future depends.

Some would-be architects of our future look toward a time when it will be possible to alter the human germ plasm by design. But we may easily be doing so now by inadvertence, for many chemicals, like radiation, bring about gene mutations. It is ironic to think that man might determine his own future by something so seemingly trivial as the choice of an insect spray.

All this has been risked – for what? Future historians may well be amazed by our distorted sense of proportion. How could intelligent beings seek to control a few unwanted species by a method that contaminated the entire environment and brought the threat of disease and death even to their own kind? Yet this is precisely what we have done. We have done it, moreover, for reasons that collapse the moment we examine them. We are told that the enormous and expanding use of pesticides is necessary to maintain farm production. Yet is our real problem not one of overproduction?  Our farms, despite measures to remove acreages from production and to pay farmers not to produce, have yielded such a staggering excess of crops that the American taxpayer in 1962 is paying out more than one billion dollars a year as the total carrying cost of the surplus-food storage program. And is the situation helped when one branch of the Agriculture Department tries to reduce production while another states, as it did in 1958, “It is believed generally that reduction of crop acreage under provisions of the Soil Bank will stimulate interest in use of chemicals to obtain maximum production on the land retained in crops.”

All this is not to say there is no insect problem and no need of control. I am saying, rather, that control must be geared to realities, not to mythical situations, and that the methods employed must be such that they do not destroy us along with the insects.

The problem whose attempted solution has brought such a train of disaster in its wake is an accompaniment of our modern way of life. Long before the age of man, insects inhabited the earth – a group of extraordinarily varied and adaptable beings. Over the course of time since man’s advent, a small percentage of the more than half a million species of insects have come into conflict with human welfare in two principal ways: as competitors for the food supply and as carriers of human disease.

Disease-carrying insects become important where human beings are crowded together, especially under conditions where sanitation is poor, as in time of natural disaster or war or in situations of extreme poverty and deprivation. Then control of some sort becomes necessary. It is a sobering fact, however, as we shall presently see, that the method of massive chemical control has had only limited success, and also threatens to worsen the very conditions it is intended to curb.

Under primitive agricultural conditions the farmer had few insect problems. These arose with the intensification of agriculture – the devotion of immense acreage to a single crop. Such a system set the stage for explosive increases in specific insect populations. Single-crop farming does not take advantage of the principles by which nature works; it is agriculture as an engineer might conceive it to be. Nature has introduced great variety into the landscape, but man has displayed a passion for simplifying it. Thus he undoes the built-in checks and balances by which nature holds the species within bounds. One important natural check is a limit on the amount of suitable habitat for each species. Obviously then, an insect that lives on wheat can build up its population to much higher levels on a farm devoted to wheat than on one in which wheat is intermingled with other crops to which the insect is not adapted.

The same thing happens in other situations. A generation or more ago, the towns of large areas of the United States lined their streets with the noble elm tree. Now the beauty they hopefully created is threatened with complete destruction as disease sweeps through the elms, carried by a beetle that would have only limited chance to build up large populations and to spread from tree to tree if the elms were only occasional trees in a richly diversified planting.

Another factor in the modern insect problem is one that must be viewed against a background of geologic and human history: the spreading of thousands of different kinds of organisms from their native homes to invade new territories. This worldwide migration has been studied and graphically described by British ecologist Charles Elton in his recent book The Ecology of Invasions. During the Cretaceous Period, some hundred million years ago, flooding seas cut many land bridges between continents and living things found themselves confined in what Elton calls “colossal separate nature reserves.” There, isolated from others of their kind, they developed many new species. When some of the land masses were joined again, about 15 million years ago, these species began to move out into new territories – a movement that is not only still in progress but is now receiving considerable assistance from man.

The importation of plants is the primary agent in the modern spread of species, for animals have almost invariably gone along with the plants, quarantine being a comparatively recent and not completely effective innovation. The United States Office of Plant Introduction alone has introduced almost 200,000 species and varieties of plants from all over the world. Nearly half of the 180 or so major insect enemies of plants in the United States are accidental imports from abroad, and most of them have come as hitchhikers on plants.

In new territory, out of reach of the restraining hand of the natural enemies that kept down its numbers in its native land, an invading plant or animal is able to become enormously abundant. Thus it is no accident that our most troublesome insects are introduced species.

The invasions, both the naturally occurring and those dependent on human assistance, are likely to continue indefinitely. Quarantine and massive chemical campaigns are only extremely expensive ways of buying time. We are faced, according to Dr. Elton, “with a life-and-death need not just to find new technological means of suppressing this plant or that animal”; instead we need the basic knowledge of animal populations and their relations to their surroundings that will “promote an even balance and damp down the explosive power of outbreaks and new invasions.”

Much of the necessary knowledge is now available but we do not use it. We train ecologists in our universities and even employ them in our government agencies but we seldom take their advice. We allow the chemical death rain to fall as though there were no alternative, whereas in fact there are many, and our ingenuity could soon discover many more if given opportunity.

Have we fallen into a mesmerized state that makes us accept as inevitable that which is inferior or detrimental, as though having lost the will or the vision to demand that which is good? Such thinking, in the words of the ecologist Paul Shepard, “idealized life with only its head out of the water, inches above the limits of toleration of the corruption of its own environment…. Why should we tolerate a diet of weak poisons, a home in insipid surroundings, a circle of acquaintances who are not quite our enemies, the noise of motors with just enough relief to prevent insanity? Who would want to live in a world which is just not quite fatal?”

Yet such a world is pressed upon us. The crusade to create a chemically sterile, insect-free world seems to have engendered a fanatic zeal on the part of many specialists and most of the so-called control agencies. On every hand there is evidence that those engaged in spraying operations exercise a ruthless power. “The regulatory entomologists … function as prosecutor, judge and jury, tax assessor and collector and sheriff to enforce their own orders,” said Connecticut entomologist Neely Turner. The most flagrant abuses go unchecked in both state and federal agencies.

It is not my contention that chemical insecticides must never be used. I do contend that we have put poisonous and biologically potent chemicals indiscriminately into the hands of persons largely or wholly ignorant of their potentials for harm. We have subjected enormous numbers of people to contact with these poisons, without their consent and often without their knowledge. If the Bill of Rights contains no guarantee that a citizen shall be secure against lethal poisons distributed either by private individuals or by public officials, it is surely only because our forefathers, despite their considerable wisdom and foresight, could conceive of no such problem.

I contend, furthermore, that we have allowed these chemicals to be used with little or no advance investigation of their effect on soil, water, wildlife, and man himself. Future generations are unlikely to condone our lack of prudent concern for the integrity of the natural world that supports all life.

There is still very limited awareness of the nature of the threat. This is an era of specialists, each of whom sees his own problem and is unaware of or intolerant of the larger frame into which it fits. It is also an era dominated by industry, in which the right to make a dollar at whatever cost is seldom challenged. When the public protests, confronted with some obvious evidence of damaging results of pesticide applications, it is fed little tranquilizing pills of half-truth. We urgently need an end to these false assurances, to the sugar coating of unpalatable facts. It is the public that is being asked to assume the risks that the insect controllers calculate. The public must decide whether it wishes to continue on the present road, and it can do so only when in full possession of the facts. In the words of Jean Rostand, “The obligation to endure gives us the right to know.”


Money can't buy happiness

By Amy Novotney
American Psychological Association, July/August 2012, 
Vol. 43, No. 7, Page 24

Extremely wealthy people have their own set of concerns: anxiety about their children, uncertainty over their relationships and fears of isolation, finds research by Robert Kenny.

Most of what we think we know about people with a lot of money comes from television, movies and beach novels — and a lot of it is inaccurate, says Robert Kenny, EdD.

In an effort to remedy that, Kenny, a developmental psychologist and senior advisor at the Center on Wealth and Philanthropy at Boston College, is co-leading a research project on the aspirations, dilemmas and personal philosophies of people worth $25 million or more. Kenny and his colleagues surveyed approximately 165 households via an anonymous online survey and were surprised to find that while money eased many aspects of these people's lives, it made other aspects more difficult.

The Monitor spoke to Kenny about his findings and about the significance of his research for those of us who don't have a net worth of $25 million or more.

WHAT PROMPTED YOU TO STUDY WEALTHY FAMILIES?

We wanted to try to understand the deeper motivations of people in high net worth households. They are rarely questioned about this, and instead are asked whether they would like a Mercedes or a Lexus. Do they prefer Tiffany's or Cartier? Most surveys of high net worth households are marketing surveys to sell a product, so the questions that are asked are pretty narrow.

We decided to ask three major questions: First, we asked, "What is the greatest aspiration for your life?" As far as we can tell, no one has ever asked this population that question, yet there are assumptions made about this all the time. The second major question was, "What's your greatest aspiration for your children?" Our third question was, "What's your greatest aspiration for the world?" After each of the major questions we asked, "How does your money help you with your greatest aspiration?" and, "How does your money get in the way?"

WHAT DID YOU FIND?

People consistently said that their greatest aspiration in life was to be a good parent — not exactly the stereotype some might expect. When asked whether their money helps with that, they answered with all the obvious: good schools, travel, security, varied experiences. But when we asked how their money gets in the way, that was a payload. We received response after response on how money is not always helpful. They mentioned very specific concerns, such as the way their children would be treated by others and stereotyped as rich kids or trust fund babies. They wondered whether their children would know if people really loved them or their money, and whether they'd know if their achievements were due to their own skills, knowledge and talent or to the fact that they have a lot of money.

Some were concerned about motivation. They worried that if their children have enough money and don't have to worry about covering the mortgage, what will motivate them? How will they lead meaningful lives? This is where the money might get in the way and make things confusing, not necessarily better. Very few said they hoped their children made a lot of money, and not many said they were going to give all the money to charity and let their kids fend for themselves. They were, however, really interested in helping their children figure out how they could live a meaningful life. Even though they did not have to "make a living," they did need to make a life.

As for the respondents' aspirations for the world, they focused, once again, on how to help the youth in the world live healthy, meaningful and impactful lives. Their answers were consistently youth-focused: They were concerned about being good parents, they were concerned about their children and they were concerned about the children of the world in general. We found that to be very interesting, and even surprising because it runs contrary to so many of the stereotypes about this population.


WHAT HAD YOU EXPECTED TO HEAR?

One could expect that you might hear things like, "I wanted to make a lot of money and become financially independent and be able to do whatever I wanted to do whenever I wanted to do it." But very few said anything like that, although they appreciated the temporal freedom. It was so non-financially focused. I expected that when we asked them about their greatest aspiration for their children, we'd get a lot more people saying they wanted their children to be world leaders, but that's not what they said at all. People said, "I'd like them to think about how to make their world a better place." Not the world, their world — their community, their neighborhood, their family.

WHAT MIGHT PSYCHOLOGISTS FIND MOST INTERESTING ABOUT THIS WORK?

A net worth of $25 million or more brings temporal freedom, spatial freedom and sometimes psychological freedom, but it's not always easy. Eventually temporal freedom — the freedom to do anything you want — raises dilemmas about what the best way to use all your time might be. There's also spatial freedom: You get to build anything you want — a house, a business, a new nonprofit — and people often get lost or befuddled with all of their options. And you get choice. You can go to this restaurant or that one, this resort or that one, buy this car or that one. People can get overwhelmed by all the choices and possibilities, and the amount of freedom that they have.

Then the overwhelming question becomes: What is the best use of my time and resources? After a while one can actually become stymied and even dispirited. There are plenty of folks who are more than willing to make suggestions, but it takes a lot of individual work to develop the psychological freedom to make decisions. For most, that's not a problem because time and money are limited, so the choices are limited. Being willing to try to understand the challenges of having an oversupply of time and money can be difficult for therapists.

The takeaway from all of this is that there seemed to be a trend that said you can't buy your way out of the human condition. For example, one survey participant told me that he'd sold his business, made a lot of money off that and lived high for a while. He said, "You know, Bob, you can just buy so much stuff, and when you get to the point where you can just buy so much stuff, now what are you going to do?"

WHAT'S THE SIGNIFICANCE OF THIS RESEARCH FOR THE VAST MAJORITY OF US WHO AREN'T WEALTHY?

This research shows the rest of the world, who often think that if they just made one more bonus or sold one more item or got one more promotion, then their world and their family's world would be so much better, that this isn't necessarily true. There's another whole level of concerns that parents are going to have about their kids. One of those concerns is this feeling of isolation. That's actually a No. 1 concern for families with a high net worth — this sense of isolation — and the higher the wealth, the worse it gets. We know this is a very powerful feeling when it comes to one's overall sense of well-being, and these people feel very isolated because they have what most of the world thinks they want. But just because you have money doesn't mean you're not going to have a bad day every once in a while. But what you often lose when you have all this money is the friendships that support you through the difficult times.

WHAT HAVE YOU LEARNED THROUGH YOUR YEARS OF WORKING WITH PEOPLE WITH A HIGH NET WORTH?

I think the toughest part about both working with this population and being in this population is that as soon as you say they have a net worth of $25 million, someone will start playing the violin. Like, "Oh, cry me a river, you have all this money and it's causing problems?"

No one is saying, "Poor me, I have a lot of money." In fact, most of them are saying, "I love having a lot of money. But don't get me wrong, there are some downsides."


These people don't have to worry about whether they'll have enough to make the mortgage payment, and they feel very fortunate. But it isn't nirvana either. If their kids have access to a lot of money, and therefore a lot of drugs, that hurts just as much as if they don't have any money and their kids are doing drugs. It doesn't save you from any of that. It's still a parent who has a child who is hurting.

The Omnivore's Dilemma


Michael Pollan

Michael Pollan is the author of “The Omnivore's Dilemma: A Natural History of Four Meals”, which was named one of the ten best books of 2006 by the New York Times and the Washington Post. It also won the California Book Award, the Northern California Book Award, and the James Beard Award for best food writing, and was a finalist for the National Book Critics Circle Award. He is also the author of “In Defense of Food: An Eater’s Manifesto”, “The Botany of Desire: A Plant's-Eye View of the World”, “A Place of My Own”, and “Second Nature”.

A contributing writer to the New York Times Magazine, Pollan is the recipient of numerous journalistic awards, including the James Beard Award for best magazine series in 2003 and the Reuters-I.U.C.N. 2000 Global Award for Environmental Journalism. His articles have been anthologized in Best American Science Writing, Best American Essays and the Norton Book of Nature Writing. Pollan served for many years as executive editor of Harper's Magazine and is now the Knight Professor of Science and Environmental Journalism at UC Berkeley.

Smog in our brains


By Kirsten Weir
American Psychological Association
July/August 2012, Vol. 43, No. 7

Researchers are identifying startling connections between air pollution and decreased cognition and well-being.

That yellow haze of smog hovering over the skyline isn't just a stain on the view. It may also leave a mark on your mind.

Researchers have known since the 1970s that high levels of air pollution can harm both cardiovascular and respiratory health, increasing the risk of early death from heart and lung diseases. The effect of air pollution on cognition and mental well-being, however, has been less well understood. Now, evidence is mounting that dirty air is bad for your brain as well.

Over the past decade, researchers have found that high levels of air pollution may damage children's cognitive abilities, increase adults' risk of cognitive decline and possibly even contribute to depression.

"This should be taken seriously," says Paul Mohai, PhD, a professor in the University of Michigan's School of Natural Resources and the Environment who has studied the link between air pollution and academic performance in children. "I don't think the issue has gotten the visibility it deserves."     
      
Cognitive connections

Most research on air pollution has focused on a type of pollutant known as fine particulate matter. These tiny particles — 1/30th the width of a human hair — are spewed by power plants, factories, cars and trucks. Due to its known cardiovascular effects, particulate matter is one of six principal pollutants for which the Environmental Protection Agency (EPA) has established air quality standards.

It now seems likely that the harmful effects of particulate matter go beyond vascular damage. Jennifer Weuve, MPH, ScD, an assistant professor of internal medicine at Rush Medical College, found that older women who had been exposed to high levels of the pollutant experienced greater cognitive decline compared with other women their age (Archives of Internal Medicine, 2012). Weuve's team gathered data from the Nurses' Health Study Cognitive Cohort, a population that included more than 19,000 women across the United States, age 70 to 81. Using the women's address history, Weuve and her colleagues estimated their exposure to particulate matter over the previous seven to 14 years. The researchers found that long-term exposure to high levels of the pollution significantly worsened the women's cognitive decline, as measured by tests of cognitive skill.

Weuve and her colleagues investigated exposure to both fine particulate matter (the smallest particles, less than 2.5 micrometers in diameter) and coarse particulate matter (larger particles ranging from 2.5 to 10 micrometers in size).

"The conventional wisdom is that coarse particles aren't as important as fine particles" when it comes to human health, Weuve says. Previous studies in animals and human cadavers had shown that the smaller particles can more easily penetrate the body's defenses. "They can cross from the lung to the blood and, in some cases, travel up the axon of the olfactory nerve into the brain," she says. But Weuve's study held a surprise. She found that exposure to both fine and coarse particulate was associated with cognitive decline.

Weuve's results square with those of a similar study by Melinda Power, a doctoral candidate in epidemiology and environmental health at the Harvard School of Public Health. Power and her colleagues studied the link between black carbon — a type of particulate matter associated with diesel exhaust, a source of fine particles — and cognition in 680 older men in Boston (Environmental Health Perspectives, 2011). "Black carbon is essentially soot," Power says.

Power's team used black carbon exposure as a proxy for measuring overall traffic-related pollution. They estimated each man's black carbon exposure by cross-referencing their addresses with an established model that provides daily estimates of black carbon concentrations throughout the Boston area. Much like Weuve's results in older women, Power and colleagues found that men exposed to high levels of black carbon had reduced cognitive performance, equivalent to aging by about two years, as compared with men who'd had less black carbon exposure.

But while black carbon is a convenient marker of air pollution, it's too soon to say that it's what's causing the cognitive changes, Power says. "The problem is there are a lot of other things associated with traffic — noise, gases — so we can't say from this study that it's the particulate part of the air pollution that matters."

Still, the cumulative results of these studies suggest that air pollution deserves closer scrutiny as a risk factor for cognitive impairment and perhaps dementia.

"Many dementias are often preceded by a long period of cognitive decline. But we don't know very much about how to prevent or delay dementia," Weuve says. If it turns out that air pollution does contribute to cognitive decline and the onset of dementia, the finding could offer a tantalizing new way to think about preventing disease. "Air pollution is something that we can intervene on as a society at large, through technology, regulation and policy," she says.

Young minds

Research is also finding air-pollution-related harms to children's cognition. Shakira Franco Suglia, ScD, an assistant professor at Boston University's School of Public Health, and colleagues followed more than 200 Boston children from birth to an average age of 10. They found that kids exposed to greater levels of black carbon scored worse on tests of memory and verbal and nonverbal IQ (American Journal of Epidemiology, 2008).

More recently, Frederica Perera, DrPH, at the Columbia University Mailman School of Public Health, and colleagues followed children in New York City from before birth to age 6 or 7. They discovered that children who had been exposed to higher levels of urban air pollutants known as polycyclic aromatic hydrocarbons while in utero were more likely to experience attention problems and symptoms of anxiety and depression (Environmental Health Perspectives, 2012). These widespread chemicals are a byproduct of burning fossil fuels.

Meanwhile Mohai, at the University of Michigan, found that Michigan public schools located in areas with the highest industrial pollution levels had the lowest attendance rates and the greatest percentage of students who failed to meet state testing standards, even after controlling for socioeconomic differences and other confounding factors (Health Affairs, 2011). What's worse, the researchers analyzed the distribution of the state's public schools and found that nearly two-thirds were located in the more-polluted areas of their districts. Only about half of states have environmental quality policies for schools, Mohai says, "and those that do may not go far enough. More attention needs to be given to this issue."

Although Michigan and Massachusetts may experience areas of poor air quality, their pollution problems pale in comparison to those of Mexico City, for example. In a series of studies, Lilian Calderón-Garcidueñas, MD, PhD, a neuropathologist at the University of Montana and the National Institute of Pediatrics in Mexico City, has investigated the neurological effects of the city's infamous smog.

In early investigations, Calderón-Garcidueñas dissected the brains of dogs that had been exposed to the air pollution of Mexico City and compared them with the brains of dogs from a less-polluted city. She found that the Mexico City dogs' brains showed increased inflammation and pathology, including amyloid plaques and neurofibrillary tangles, clumps of proteins that serve as a primary marker for Alzheimer's disease in humans (Toxicologic Pathology, 2003).

In follow-up research, Calderón-Garcidueñas turned her attention to Mexico's children. In one study, she examined 55 kids from Mexico City and 18 from the less-polluted city of Polotitlán. Magnetic resonance imaging scans revealed that the children exposed to urban pollution were significantly more likely to have brain inflammation and damaged tissue in the prefrontal cortex. Neuroinflammation, Calderón-Garcidueñas explains, disrupts the blood-brain barrier and is a key factor in many central nervous system disorders, including Alzheimer's disease and Parkinson's disease. Perhaps more troubling, though, the differences between the two groups of children weren't just anatomical. Compared with kids from cleaner Polotitlán, the Mexico City children scored lower on tests of memory, cognition and intelligence (Brain and Cognition, 2008).

Brain changes

It's becoming clearer that air pollution affects the brain, but plenty of questions remain. Randy Nelson, PhD, a professor of neuroscience at the Ohio State University, is using mouse studies to find some answers. With his doctoral student Laura Fonken and colleagues, he exposed mice to high levels of fine particulate air pollution five times a week, eight hours a day, to mimic the exposure a human commuter might receive if he or she lived in the suburbs and worked in a smoggy city (Molecular Psychiatry, 2011). After 10 months, they found that the mice that had been exposed to polluted air took longer to learn a maze task and made more mistakes than mice that had not breathed in the pollution.

Nelson also found that the pollutant-exposed mice showed signs of the rodent equivalent of depression. Mice said to express depressive-like symptoms give up swimming more quickly in a forced swim test and stop sipping sugar water that they normally find attractive. Both behaviors can be reversed with antidepressants. Nelson found that mice exposed to the polluted air scored higher on tests of depressive-like responses.

To find out more about the underlying cause of those behavioral changes, Nelson compared the brains of mice that had been exposed to dirty air with brains of mice that hadn't. He found a number of striking differences. For starters, mice exposed to particulate matter had increased levels of cytokines in the brain. (Cytokines are cell-signaling molecules that regulate the body's inflammatory response.) That wasn't entirely surprising, since previous studies investigating the cardiovascular effects of air pollution on mice had found widespread bodily inflammation in mice exposed to the pollution.

More surprisingly, Nelson also discovered physical changes to the nerve cells in the mouse hippocampus, a region known to play a role in spatial memory. Exposed mice had fewer spines on the tips of the neurons in this brain region. "Those [spines] form the connections to other cells," Nelson says. "So you have less dendritic complexity, and that's usually correlated with a poorer memory."

The changes are alarming and surprising, he says. "I never thought we'd actually see changes in brain structure."

Nelson's mice experienced quite high levels of pollution, on par with those seen in places such as Mexico City and Beijing, which rank higher on the pollution scale than U.S. cities. It's not yet clear whether the same changes would occur in mice exposed to pollution levels more typical of American cities. Another limitation, he notes, is that the animals in his study were genetically identical. Nelson says he'd like to see similar studies of wild-type mice to help tease out whether genetic differences might make some people more or less vulnerable to the effects of pollution. "I would suspect there are people who are wildly susceptible to this and people who are less so, or not at all," he says.

Few studies have investigated connections between depression and air pollution, but Nelson's wasn't the first. A study by Portuguese researchers explored the relationship between psychological health and living in industrial areas. They found that people who lived in areas associated with greater levels of air pollution scored higher on tests of anxiety and depression (Journal of Environmental Psychology, 2011).

Back in Ohio, Nelson plans to study how much — or how little — pollution is necessary to cause changes in the brain and behavior. He's also beginning to look at the effects of air pollution on pregnant mice and their offspring. Though more research is needed to fully understand how dirty air impairs the brain, he says, the picture that's emerging suggests reason for concern.

In the United States, the Environmental Protection Agency reviews the scientific basis for particulate matter standards every five years or so, and completed its last review in 2009.

To date, the EPA hasn't factored psychological research into their standards assessments, but that could change, according to a statement the EPA provided to the Monitor. "Additional research is necessary to assess the impact of ambient air pollutants on central nervous system function, such as cognitive processes, especially during critical windows of brain development. To this end, as the number of … studies continue to increase and add to the weight of overall evidence, future National Ambient Air Quality Standards assessments will again assess and address the adequacy of existing standards."

In the meantime, says Weuve, there's not much people can do to protect themselves, short of wearing special masks, installing special filtration systems in their homes and offices or moving to cities with less airborne pollution. "Ultimately, we're at the mercy of policy," she says.

The good news, Nelson says, is that the mental and cognitive effects of air pollution are finally beginning to receive attention from the mental health research community. "We sort of forget about these environmental insults," says Nelson. "Maybe we shouldn't."

Aristotle

By Russell M. Lawson
World History: Ancient and Medieval Eras


Aristotle is considered the greatest scientist and one of the greatest philosophers of the ancient world. A student of Plato, Aristotle was the teacher of Alexander the Great and the founder of the Peripatetic school of thought. His vast writings include Metaphysics, Physics, Nicomachean Ethics, Politics, and Poetics. Aristotle was one of the first empirical thinkers, though he generally relied on established methods of science: observation, collection and categorization of specimens, analysis of data, induction, and deduction. Aristotle's mastery of the subjects he studied gained him the reputation in subsequent centuries as an infallible guide to natural phenomena and philosophy. After 1500 CE, in light of new discoveries by Nicolaus Copernicus, Galileo, Isaac Newton, and other scientists, many of Aristotle's theories were rejected; nevertheless, his influence on modern science is undeniable.

Aristotle was born in 384 BCE in the small town of Stagira in Thrace, a primitive outpost of Greek culture east of Macedonia. His father was a wealthy court physician to the kings of Macedonia, and Aristotle spent his early years at Pella, the capital of King Amyntas III and his successor King Philip II of Macedon. Aristotle, seeking to follow in his father's footsteps as a scientist and physician, journeyed south to Athens in 366. He studied at the Academy, Plato's school in Athens, where he became that philosopher's most famous student. At the Academy, Aristotle fit in as a wealthy aristocrat, but his Thracian and Macedonian background plagued him among condescending Athenians. In the end, Aristotle's superior intellect silenced all criticism.

From Plato, Aristotle learned of the universal truth, which Socrates termed "the Good." Plato taught his students at the Academy that the best means to approach an understanding of truth was through reason, the study of mathematics and music, intuition, and intense and deep contemplation. Aristotle, less the mystical and more the pragmatic thinker, broke from his teacher by adopting the scientific approach to human behavior, natural philosophy, natural science, ethics, and metaphysics. Aristotle also learned from Plato of being (ousia), the divine essence, from which all things derive. Aristotle did not abandon this religious interpretation of the ultimate reality but brought science to bear to discover and to understand it. For Aristotle, then, science is a pious act to discover the nature of goodness, justice, virtue, and being, and human experience is an essential matter for study, since the better sort of human beings echo being itself.

Upon Plato's death, Aristotle left what was no doubt a competitive situation among Plato's students, each jockeying to take the place of the master. Aristotle journeyed to a small kingdom in Asia Minor (present-day Turkey) where he became court philosopher to King Hermias. Aristotle married the king's daughter but soon fled (with his wife) upon the tragic assassination of the king. Aristotle ended up back in Macedonia in 343, this time as tutor to the royal prince Alexander (Alexander the Great). Legend has it that Philip II of Macedon enticed Aristotle to return to Pella, an intellectual and cultural backwater compared to Athens, with a tempting salary and a promise: Stagira having been destroyed and its population enslaved in one of Philip's campaigns, Philip proposed that in return for Aristotle's services the king would rebuild the town and bring the inhabitants out of slavery. Aristotle agreed to the terms.

Alexander eventually became king of Macedonia in 336 upon his father's assassination and then spent the next 13 years of his life conquering Greece, Asia Minor, Palestine, Egypt, Iran, Iraq, and Afghanistan—all of which made up the Persian Empire. Alexander was a warrior and conqueror who thought himself the heroic son of the king of the gods, Zeus. Nevertheless, Aristotle, who eschewed the life of a warrior, had been Alexander's teacher for three years, from the time the prince was 13 until he was 16, and below the surface of Alexander's actions are hints that he had adopted the life of a philosopher and that he thought of himself as a scientist, even a physician. Alexander, for example, composed letters to Aristotle that included samples of plant and animal life that he had gathered for his teacher's collection.


In the meantime, Aristotle had left Macedonia for Athens, where he opened his school, the Lyceum. The philosopher eventually broke with Alexander over the death of Aristotle's grandnephew Callisthenes, a philosopher and historian who accompanied Alexander's expedition. Callisthenes was implicated in a plot to assassinate the king and was executed. Even so, the Athenians associated Aristotle with Alexander, who was very unpopular in Athens. Upon Alexander's death in 323, the Athenians felt free enough to throw off the shackles imposed on them by Alexander—and one shackle was represented by Alexander's former teacher. Aristotle was eventually forced to flee the city and abandon his school. He died soon after, in 322 BCE.

Aristotle is perhaps best known today as a logician. He created a system of thought based on fundamental assumptions that one cannot doubt—the famous a priori truths. Whereas Plato believed that one must attain knowledge of truth by means of reason and intuition, Aristotle believed that the philosopher must observe particular phenomena to arrive at an understanding of reality, a scientific technique known as induction. Once truth is known through induction from the particular to the universal, the philosopher can engage in the process of deduction from the basis of the universal to arrive at other particular truths. Aristotle's system of logic is known as the syllogism.

Aristotle also made contributions in metaphysics, the study of reality that transcends the physical world. Once again a priori truths are the basis for metaphysical studies. Aristotle assumed that there is a First Cause, an "unmoved mover," that he defined as actuality, in contrast to potency, or the potential, which represents movement. Aristotle argued that all reality can be explained according to cause and effect, act and potential. For example, time is an actual phenomenon—it has existence as a form or essence. Time acts upon human movement, providing a temporal context in which humans are born, live, and die, all the while measuring their lives according to the standard of time. Aristotle further argued in Metaphysics that one must distinguish between art and experience. Art as essence is based on abstract thought—what the Greeks termed the logos—whereas experience is based on a series of particular events occurring in time. In Poetics, Aristotle argued that poetry (art) explores universals and how things ought to be, while history (historia) explains the particulars of human existence and how things are. Wisdom represents the unification of art and experience.

Aristotle's treatise on natural science was Physics. Natural science, he wrote, is concerned with physical movement from the first principles of nature. Aristotle associated nature with the first cause. His unmoved mover was an amorphous divine force of creation which establishes the laws through which movement—plant, animal, and human—occurs. The four causal determinants expressed in nature are: 1) the material substance that forms a physical object; 2) the type or class of phenomenon (genos) to which an object belongs; 3) the cause of change in or movement of an object; and 4) the goal or purpose (telos) of movement.

Aristotle's categorizations had a profound impact on the formation of a vocabulary of science. His notion of type or class is the basis for the notion that a species in nature comprises a set genus. Aristotle's idea of goal or purpose forms the philosophical concept of teleology, the study of the end of natural phenomena.

In addition, Aristotle was one of the first students of the human psyche. He wrote treatises on dreams, memory, the senses, prophecy, sleep, and the soul. Aristotle believed that the soul is the actuality within the potency of the body and is the unmoved mover within each individual human, while the mind (nous) is an expression of the soul. Aristotle argued that each human soul is part of a universal whole which is a world soul, the ultimate actuality, and the first cause. Aristotle's study of dreams provided a rational explanation of what the ancients often considered a supernatural phenomenon. Aristotle argued that the only thing "divine" about a dream is that it is part of nature, which is itself the creation of God and hence divine. That events turn out according to one's dream is either coincidence or the result of the subtle impact of a dream on an individual's actions.


In zoological studies, Aristotle's contributions included the treatises Description of Animals, Parts of Animals, and Generation of Animals. In Parts of Animals, Aristotle noted that although animals are a less profound area of study than the metaphysical, nevertheless it is an inquiry accessible to anyone willing to explore natural history. Consistent with his Platonic background, Aristotle studied animals for the sake of understanding the whole of natural history. He assumed that the source of all good and beauty is the same source of animal and biological phenomena and that hence even animals mirror the divine.

In the study of ethics, Aristotle dealt with the question of how the ultimate basis of behavior, the set of rules that establishes the Good, can be understood according to science. Aristotle believed that the tools of science—observation, categorization, logic, and induction—could be brought to bear on the study of human behavior. The scientist studies human behavior in its incredible variety of contexts to arrive at general laws of how humans act and how they should act: how humans act is the realm of the scientist, while how humans should act is the realm of the philosopher. Once again, Aristotle combined science and philosophy into one organized study. Aristotle believed that the ultimate end of human existence is happiness, which occurs when humans conform to the Good. The Good is accomplished when humans exercise reason in accordance with virtue. Aristotle studied human behavior to arrive at a definition of virtue, finding that it is an action performed for its own sake, that is, an action performed for the sake of the Good or an action performed out of principle. Aristotle believed that vice, the opposite of virtue, derives from actions committed for selfish reasons or for personal motives.

The Greek philosophers before and during Aristotle's time were the first political scientists. Aristotle's contribution, Politics, applied his philosophical methods and assumptions to the understanding of statecraft. He argued that the state is, as it were, the actual, while the citizens are the potential. The latter are the parts (the particulars) that make up the whole, or the universal body politic. Aristotle conceived of a pluralistic society operating according to natural laws based in part on reason and necessity, a social compact among people to promote security and serve the needs of survival. Within this concept of the state (which represents virtue) people move, act, and struggle for power and wealth. Aristotle argued, based on his experience at Athens, that slavery was justified because of the inferior intellect of slaves. Likewise, he assumed that women lacked the cognitive abilities of males and therefore should not participate in democracy. In The Athenian Constitution, Aristotle provided a detailed analysis of Athenian democracy, with particular insight into the life and statecraft of the great Athenian lawgiver Solon.

In the study of astronomy, Aristotle explored his ideas in On the Heavens. Based on observation, Aristotle established the spherical nature of the earth. Viewing a lunar eclipse, Aristotle detected a slight curvature of the shadow of the earth on the moon's surface. He also observed that the altitude of stars changes according to changes in latitude. In On the Heavens, Aristotle concluded that the earth's circumference is 400,000 stadia (40,000–50,000 miles, which was an overestimate of 45%). He advocated the view that there is more water than land on the earth's surface. Much of Aristotle's thought on astronomy, however, was erroneous, as observation with the naked eye was insufficient for the study of the nature of the stars and planets.
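As a rough, back-of-the-envelope check on those figures (assuming the commonly cited Attic stadion of about 185 meters; the exact length Aristotle had in mind is uncertain), the estimate and the modern value compare as follows:

\[
\begin{aligned}
400{,}000 \text{ stadia} \times 185 \text{ m} &\approx 74{,}000 \text{ km} \approx 46{,}000 \text{ miles},\\
\text{modern circumference} &\approx 40{,}075 \text{ km} \approx 24{,}900 \text{ miles},\\
\frac{46{,}000 - 24{,}900}{46{,}000} &\approx 0.46.
\end{aligned}
\]

On that reading, the true circumference is roughly 45 percent smaller than Aristotle's estimate, which is how the figure above should be understood.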

Aristotle's ideas were advocated and defended for centuries after the philosopher's death. Aristotle's disciples were known by the master's teaching style of walking about while engaged in discussion or disputation (from which the name "Peripatetic" derives). Theophrastus took over the helm of the Lyceum, Aristotle's school at Athens. He organized Aristotle's papers and writings and pursued Aristotle's theories and investigations in the physical and metaphysical worlds. After Theophrastus's death in 287 BCE, Strato assumed leadership of the Lyceum and the Peripatetic philosophers.

Further Reading
Bambrough, Renford, ed. and trans. The Philosophy of Aristotle. New York: New American Library, 1963; Barnes, Jonathan. Aristotle. Oxford: Oxford University Press, 1982; Schmitt, Charles B. Aristotle and the Renaissance. Cambridge: Harvard University Press, 1983; Turner, William. "Aristotle." Catholic Encyclopedia. New York: The Encyclopedia Press, 1913; Wheelwright, Philip, ed. and trans. Aristotle. New York: Odyssey Press, 1951.

MLA Citation
Lawson, Russell M. "Aristotle." World History: Ancient and Medieval Eras. ABC-CLIO, 2013. Web. 22 Dec. 2013.


The price of affluence

By Amy Novotney
American Psychological Association, 2009
Vol. 40, No. 1, Page 50

New research shows that privileged teens may be more self-centered—and depressed—than ever before.

Many of today's most unhappy teens probably made the honor roll last semester and plan to attend prestigious universities, according to research by psychologist Suniya Luthar, PhD, of Columbia University's Teachers College. In a series of studies, Luthar found that adolescents reared in suburban homes with an average family income of $120,000 report higher rates of depression, anxiety and substance abuse than any other socioeconomic group of young Americans today.

"Families living in poverty face enormous challenges," says Luthar, who has also studied mental health among low-income children. "But we can't assume that things are serene at the other end."

Privileged teens often have their own obstacles to overcome. Some say these problems may be due to an increasingly narcissistic society—as is evidenced by fame-hungry reality TV stars and solipsistic Web sites. Plus, says Harvard University's Dan Kindlon, PhD, families have shrunk and kids are now seen as more precious.

"It was kind of hard to think that the world revolved around you when you had eight brothers and sisters," says Kindlon, author of "Too Much of a Good Thing: Raising Children in an Indulgent Age" (Hyperion, 2001).

Others say the trouble may stem from parents who put too much emphasis on grades and performance, as opposed to a child's personal character.

"My experience with upper-middle-class moms is that they are worried sick about their kids," says San Francisco clinical psychologist Madeline Levine, PhD, author of "The Price of Privilege: How Parental Pressure and Material Advantage are Creating a Generation of Disconnected and Unhappy Kids" (HarperCollins, 2006).

While such parents are certainly well-meaning, it may take a toll on their children.

Generation all about me

When Levine first began lecturing to parents about child rearing, she titled her talk "Parenting the Average Child" and had a hard time attracting a crowd, she recalls. "Nobody believed they had an average child," she says.

But parents aren't the only ones insisting their children are special—their kids believe it as well, according to research by San Diego State University psychology professor Jean M. Twenge, PhD. She analyzed the Narcissistic Personality Inventory (NPI) scores of 16,475 American college students between 1979 and 2006 and found that one out of four students in recent generations shows elevated levels of narcissism. In 1985, that number was only one in seven.

Some narcissistic traits—such as authority and self-sufficiency—can be healthy, says Robert Horton, PhD, a psychology professor at Wabash College in Crawfordsville, Ind. But too much self-absorption can often lead to interpersonal strife, he adds. Research shows that narcissists tend to be defensive, do not forgive easily and have trouble committing to romantic relationships and holding on to friendships. In other words, their egos can get in the way of true happiness, says Twenge.

"Narcissism is correlated with so many negative outcomes," says Twenge, whose research appeared in August's Journal of Personality (Vol. 76, No. 4). "Yet it seems to be something that is now relatively accepted in our culture."

Our culture's cult of celebrity may fuel the fire. In 2006, Drew Pinsky, MD—a radio host and psychiatry professor at the University of Southern California—teamed with USC psychologist Mark Young, PhD, to measure celebrities' narcissism levels. Two hundred well-known actors, musicians and comedians completed the NPI. The researchers found that celebrities were significantly more narcissistic than the average person. The study, published in the Journal of Research in Personality (Vol. 40, No. 5), also showed that reality television stars were among the most narcissistic of all celebrities.

"These shows are a showcase for narcissism, and they're portrayed as reality," Twenge says.

Psychologist Susan E. Linn, EdD, fears that today's fascination with wealthy celebrities and reality shows such as MTV's "My Super Sweet 16"—where a teen plans a million-dollar birthday party—contribute to normalizing this type of behavior. Kids immersed in this kind of media glitz feel unfulfilled or even like failures because they are not fabulously rich or famous, she notes.

"The combination of ubiquitous and sophisticated media and technology and unfettered commercialism is just a disaster for kids," says Linn, associate director of the Media Center at the Judge Baker Children's Center at Harvard University. "A constant barrage of images of wealth and narcissism promote unhealthy values and false expectations of what life should be like."



Harvard or bust

Psychologist Kali Trzesniewski, PhD, however, isn't convinced that narcissism is really on the rise. Her research, based on a data set of high school seniors from across the country as well as college students at the University of California, finds that students answer the NPI the same way their counterparts did 30 years ago. She says what may seem like self-absorption is probably just greater awareness of the numerous choices now available to them about what they want to do with their lives.

"Graduates entering the job market today have a lot of opportunities and a lot more jobs to choose from, so they have the freedom to be more selective," says Trzesniewski, a psychology professor at the University of Western Ontario, whose study appeared in February's Psychological Science (Vol. 19, No. 2). "That doesn't necessarily change their core beliefs."

Levine believes that what's actually driving upper-middle-class teens' mental health troubles is a fear of failure. Parents, she says, worry that their children won't make it in an increasingly competitive world, leading to an obsession over standardized test scores and getting their kids into the right schools.

"Parents are worried that if their children don't get into Harvard, they're going to be standing with a tin cup on the corner somewhere," Levine says.


On top of this perfectionism, teens often can't deal with situations that don't go their way, perhaps because their parents protected them from disappointments earlier in life, Levine says. In fact, teens who indicated that their parents overemphasized their accomplishments were the most likely to be depressed or anxious and to use drugs, according to a 2005 study led by Luthar in Current Directions in Psychological Science (Vol. 14, No. 1).

What can parents do? Levine and Kindlon recommend that they give their children clear responsibilities to help out around the house and that families take part in community service activities together. Turning off the TV at least one night a week and monitoring Internet use are also important, says Linn. Such actions teach children the values that can lead to greater life satisfaction, says Levine, who also urges parents to stop obsessing about perfect grades and focus more on helping their children enjoy learning for its own sake.

And parents and psychologists alike should recognize that teens who seem to have it all may, in fact, lack the resources they need to find personal happiness.

"We've been a little remiss in assuming, without much examination, that children of privilege are immune to emotional distress and victimization," says Luthar. "Pain transcends demographics and family income."

Further reading

  • Kindlon, D. (2001). Too Much of a Good Thing: Raising Children in an Indulgent Age. New York, N.Y.: Hyperion.
  • Levine, M. (2006). The Price of Privilege: How Parental Pressure and Material Advantage Are Creating a Generation of Disconnected and Unhappy Kids. New York, N.Y.: HarperCollins.
  • Twenge, J.M. (2006). Generation Me: Why Today's Young Americans Are More Confident, Assertive, Entitled—and More Miserable Than Ever Before. New York, N.Y.: Free Press.


Rosalind Hursthouse's "On Virtue Ethics"

Rosalind Hursthouse's On Virtue Ethics. Reviewed by Gilbert Harman, Department of Philosophy, Princeton University.


 Virtue ethics is a type of ethical theory in which the notion of virtue or good character plays a central role. This splendid new book describes a “program” for the development of a particular (“Aristotelian”) form of virtue ethics. The book is intended to be used as a textbook, but should be read by anyone interested in moral philosophy. Hursthouse has been a major contributor to the development of virtue ethics and the program she describes, while making use of the many contributions of others, is very much her program, with numerous new ideas and insights.

The book has three parts. The first dispels common misunderstandings and explains how virtue ethics applies to complex moral issues. The second discusses moral motivation, especially the motivation involved in doing something because it is right. The third explains how questions about the objectivity of ethics are to be approached within virtue ethics.

Structure

Hursthouse’s virtue ethics takes as central the conception of a human being who possesses all ethical virtues of character and no vices or defects of character—“human being” rather than “person” because the relevant character traits are “natural” to the species.

To a first approximation, virtue ethics says that a right action is an action among those available that a perfectly virtuous human being would characteristically do under the circumstances. This is only a first approximation because of complications required in order accurately to describe certain moral dilemmas.

It is possible to be faced with a dilemma through having acted wrongly. In one of Hursthouse’s examples, a man, promising marriage, gets two women pregnant. Given that there is no way to fulfill all of his promises, what is the right thing for him to do? Distinguish two senses in which a course of action might be right—an action-guiding sense and an action-assessment sense. Something will be wrong with whatever the promiser does, so there is no way for him to do what is all right, or right in the action-assessment sense. But there may be a best or right choice for him to make in the circumstances, a choice that would be right in the action-guiding sense.

What is right in the action-guiding sense cannot always be identified as the choice that a perfectly virtuous human being would make in the circumstances, because sometimes a completely virtuous human being could never be in the relevant circumstances. Hursthouse believes that virtue ethics is still applicable, because she thinks that virtue ethics provides rules that can apply to such a case. However, although I see how virtue ethics can provide rules, it remains unclear to me how the rules provided could handle this particular situation. She says that every virtue of character yields a positive rule of action and every vice or defect of character yields a negative rule; so, virtue ethics allows for such rules as that one ought to tell the truth, one ought to keep one's promises, one ought to be kind to others, and one should not act meanly, lie, or break promises. Where these simple rules conflict, Hursthouse proposes to “fine tune” them by considering what a virtuous human being would do in various circumstances. Perhaps this yields the right rules for circumstances no virtuous human being could be in, but I do not understand how.

She also notes that the promiser might use something that sounds like the terminology of virtue and vice in reasoning what to do. “Perhaps it would be callous to abandon A, but not to abandon B. Perhaps it would be more irresponsible to abandon A than to abandon B. . . . Then marrying A would be the morally right decision.” But in this instance the vices of callousness and irresponsibility are characteristics of possible actions rather than character traits of the agent. (No matter what the agent does, the agent will continue to have a bad character.) So, it remains unclear how these remarks fit together with the overall theory. In any event, Hursthouse also observes that a completely virtuous human being might find herself in a dilemma in which nothing that she does is right in the action-assessment sense. An example might be the situation in Sophie’s Choice in which a mother must choose which of her children is to be killed immediately and which possibly saved; if she fails to choose, they are both to be killed immediately. In such a case, there might be a decision that is right in the action-guiding sense—a decision that a fully virtuous agent would make in that situation—but the act cannot be a right act in the action-assessment sense, since it will not be all right.

The first part of On Virtue Ethics is concerned with the basic structure of this sort of virtue ethics, with considerable discussion of moral dilemmas of one or another sort. Inevitably, Hursthouse is unable to discuss every aspect of this structure. She explicitly sets aside issues of justice, for example.

I would have liked to see discussion of the worry that the virtue ethical characterization of right action is trivial because a fully virtuous human being must have perfect practical rationality. (Virtue is not just a matter of having the right ends, as in St. Paul’s or John Lennon’s idea that “All you need is love,” or Plato’s idea that all you need is a properly ordered soul. Practical rationality is needed also.) The worry is that there is no good way to characterize perfect practical rationality so as to guarantee that the fully virtuous human being will do the right thing, on the one hand, while not, on the other hand, reducing the basic principle of virtue ethics to the trivial claim that what is right is what would be done by someone who characteristically does what is right. Again, it may be that virtue ethics is able to avoid this trivialization of principle, but I do not see how.

Motivation

What is involved in doing something because it is right? Hursthouse answers that it is to act in the way a fully virtuous human being acts for the reasons that the fully virtuous human being acts on. She shows in marvelous detail that this answer agrees with common sense in a variety of cases.

Her answer also makes sense theoretically. A fully virtuous agent characteristically acts in a certain way precisely because the agent’s character leads the agent to act in that way. But for the act to be right just is for the agent’s character to be such as to lead the agent to do that act. So, it follows from virtue ethics that the fully virtuous agent does the act because it is right.

It is not that the fully virtuous agent does the act because he or she thinks it is right. The agent may think, for example, “She needs my help.” On the other hand, if someone else does a similar act motivated by the thought that this is what the virtuous agent would do, the other human being does it because she thinks the act is right and does not in the same way do the act directly because it is right. Doing something directly because it is the right thing to do is not the same as doing it because one thinks it is the right thing to do.

Hursthouse says that moral motivation of this sort is a matter of degree. Children with little or no moral character gradually become adults with full moral character and capable of full moral motivation. Someone may be partly virtuous and partly not, in some ways virtuous and in some ways not. To the extent that an agent’s act results from a character that is relevantly similar to that of a fully virtuous human being, we can allow that the agent does something because it is right. Huck Finn may act from more or less virtuous character traits and so hide Jim from Jim’s slave owner because it is right to hide Jim, even though Huck thinks that it is wrong. On the other hand, Hursthouse says that a confirmed Nazi who does the right act on a particular occasion does not do it because it is right, given the great distance between the Nazi’s character and the character of a virtuous human being.

Objectivity

The third, most difficult and richest part of the book discusses whether virtue ethics has resources to determine objectively what the human virtues are. Doubts arise about this in part because different human beings in different cultures belonging to different traditions disagree about the virtues and about the relative importance of those virtues they agree about. For example, there are differences between Europeans and East Asians concerning the relative importance of prudential virtues of individual development as compared with social virtues of community. There are also disagreements about the virtues within a given society. Can we reasonably suppose that these are disagreements about objective matters of fact?

Many believe that such disagreements are not objective. Some think it is a matter of local convention what the right virtues are. Others think that one can choose what virtues to aspire to, where different human beings can be equally justified in choosing different virtues. But Hursthouse thinks it may be possible to find an objective basis for a single set of human virtues of character within a generally Aristotelian approach.

In this approach, judgments of good and defective character are to be assessed in terms of the biological, social, and rational nature of human beings. She begins her discussion of this issue by considering simple cases—judgments one might make about plants and animals. One might judge that a certain tree has good roots, that a particular tiger has a defective heart, that another tiger is a fine specimen, or that there is something wrong with a wolf that does not participate in the hunt with the other wolves. Hursthouse says such judgments are objective in that they are the sorts of judgments biologists might make in the course of describing various plants and animals.

She further says that the relevant features of plants and lower animals are to be assessed in relation to the contribution the features can be expected in general to make to the continued existence of individual plants or animals and to the preservation of the relevant species. For animals capable of feeling enjoyment and pain, features can also be assessed in relation to their tendency to make lives better in that respect. For social animals, features can be assessed in relation to their expected contribution to the functioning of the group.
The big question in this approach is whether such evaluation can be extended to human beings, who have rationality and act on reasons. Are there character traits that are in some sense “natural” to human beings that function well according to the same four criteria?

Suppose that there is a unique set of character traits which are natural to human beings and such that, if everyone has them, it is generally true that an individual’s having them promises to contribute to that individual’s preservation, the preservation of the human species, the functioning of social groups to which the individual belongs, and the flourishing of that individual and others. Then that set of character traits is the set of human virtues in this approach.

One way for this to fail would be that a satisfactory outcome for people would require some human beings to have one set of character traits while others had a different set, as in Nietzsche’s master and slave moralities, and somewhat as there are worker bees and queen bees. While Hursthouse thinks that this is a view within virtue ethics that needs to be taken seriously, she also thinks that we have not yet been given sufficient reason to give up on the existence of a single set of human virtues.

Another way in which the favored approach can fail is for it to turn out that no distribution of character traits will promote the flourishing of all human beings. Hursthouse argues that we do not have to accept the conclusion that human beings are in this sense just a “mess,” because, “When we look, in detail, at why so many human beings are leading, and have led, such dreadful lives, we see that occasionally this is sheer bad luck, but characteristically, it is because either they, and/or their fellow and adjacent human beings, are defective in their possession and exercise of the virtues on the standard list.” She adds in a footnote, “I suppose that one of the reasons we find it so hard to come to terms with the Holocaust is that pre-Nazi German society looks so like our own at the same period, and we are forced to the unpalatable conclusion that if it happened there because of lack of virtue in its members, we must have been similarly lacking and might have gone the same way” (264).

On the other hand, it seems to me that thinking about this and related examples (Bosnia, Somalia), and about research in social psychology on the relative explanatory importance of individual character versus the situation in which a human being is placed, suggests that the very natural human tendency to think in terms of character traits may lead us in the wrong direction. It would seem that, to the extent that we are interested in improving the lot of mankind, it might be better to put less emphasis on moral education and on building character and more emphasis on trying to arrange social institutions so that human beings are not placed in situations in which they will act badly.


I doubt that Hursthouse would dispute this conclusion. I am sure she agrees with the need to set up the right social institutions. So, perhaps the best way to think of her program in this respect is to claim that there are attainable institutions which would, if in place, encourage in participants the development of the relevant character traits, where these traits would tend to sustain and be sustained by the institutions. Alas, I have been able only to skim the surface of the many interesting issues discussed in this excellent book.(*)

(*) I am indebted to John Doris for helpful comments.

The Paradoxes of Human Rights


Costas Douzinas, Birkbeck Institute for the Humanities, University of London. Constellations: An International Journal of Critical and Democratic Theory, Volume 20, Issue 1, pages 51-67, March 2013.

A new ideal has triumphed on the world stage: human rights. It unites traditional enemies, left and right, the pulpit and the state, the minister and the rebel, the developing world and the liberals of the West. The new world order, we are told, is genuinely liberal democratic. Ideological controversies of the past have given way to general agreement about the universality of western values and have placed human rights at the core of international law. After the collapse of communism, human rights have become the ideology after the end of ideologies, at the end of history, the morality of international relations, a way of conducting politics according to ethical norms.

And yet many doubts persist. The record of human rights violations since their ringing declarations at the end of the eighteenth century, after WWII, and again since 1989 is quite appalling. If the twentieth century is the epoch of human rights, their triumph is, to say the least, something of a paradox. Our era has witnessed more violations of their principles than any previous, less “enlightened” one. Ours is the epoch of massacre, genocide, and ethnic cleansing. At no point in human history has there been a greater gap between the north and the south, between the poor and the rich in the developed world, or between the seduced and the excluded globally. Life expectancy at birth is around 45 years in sub-Saharan Africa but over 80 years in Northern Europe. No belief in progress allows us to ignore that never before, in ‘peacetime’ and in absolute figures, have so many men, women, and children been subjugated, starved, or exterminated.

There is a second paradox: if the world has accepted a common humanitarian vision, have conflicts of ideology, religion, and ethnicity ceased? Obviously not. This means that human rights have no common meaning or that the term describes radically different phenomena. There is something more: human rights are perhaps the most important liberal legal institution. Liberal jurisprudence and political philosophy, however, have failed rather badly in their understanding of rights. Two hundred years of social theory and the three major ‘continents’ of thought, according to Louis Althusser, do not enter the annals of jurisprudence: Hegel, Marx, the post-Marxists, and the dialectic of struggle; Nietzsche, Foucault, and the analytics of power; Freud, the post-Freudians, psychoanalysis and subjectivity. As a result, jurisprudence and political philosophy return to the 18th century and update the social contract with ‘original positions’ and ‘veils of ignorance,’ the categorical imperative with ‘ideal speech’ situations and fundamental discourse principles all referring to individuals fully in control of themselves.

The mainstreaming of human rights and the rise of cosmopolitanism coincided with the emergence of what sociologists have called “globalization,” economists “neo-liberalism,” and political philosophers “post-democratic governance.” Is there a link between recent moralistic ideology, greedy capitalism and bio-political governmentality? My answer is a clear yes. Nationally, the bio-political form of power has increased the surveillance, disciplining, and control of life. Morality (and rights as morality's main building block in late capitalism) was always part of the dominant order, in close contact with each epoch's forms of power. Recently, however, rights have mutated from a relative defense against power to a modality of its operations. If rights express, promote and legalize individual desire, they have been contaminated by desire's nihilism. Internationally, the modernist edifice is undermined at the point when the completion of the decolonization process and the relative rise of the developing world create the prospect of a successful defense of its interests. The imposition of ‘cosmopolitan’ economic, cultural, legal, and military policies is an attempt to reassert western hegemony.

The wars of the new world order as well as the 2008 economic crisis and its political culmination in 2011 give us a unique opportunity to examine the post-1989 settlement. The best time to demystify ideology is when it enters into crisis. At this point, the taken for granted, “natural”, invisible premises of ideology come to the surface, become objectified, and can be understood for the first time as constructs. The ‘humanitarian’ interpretation of the Iraq and Afghanistan wars highlighted the absurdity of killing humans to ‘save’ humanity. The absence of human rights demands in Madrid, Athens, or Occupy Wall Street indicated their limited relevance for the most important movement of our times. In the wake of this world wave of protest, several major themes of political philosophy need to be re-visited.

This essay briefly presents an alternative approach to human rights built over a long period of campaigning and scholarship in a trilogy of books.1 It follows the insight that the term human rights, with its immense symbolic capital, has been co-opted into a large number of relatively independent discourses, practices, institutions and campaigns. As a result, no global ‘theory’ of rights exists or can be created. Different theoretical perspectives and disciplinary approaches are therefore necessary. This article starts with a short history of the idea of humanity and moves on to the political, legal, philosophical, and psychological aspects of rights. To indicate this multi-layered approach, it puts forward an axiom and seven theses that re-write the standard liberal approach to rights.

The Human Rights Axiom

The end of human rights is to resist public and private domination and oppression. They lose that end when they become the political ideology or idolatry of neo-liberal capitalism or the contemporary version of the civilizing mission.

Thesis 1

The idea of ‘humanity’ has no fixed meaning and cannot act as the source of moral or legal rules. Historically, the idea has been used to classify people into the fully human, the lesser human, and the inhuman.

If ‘humanity’ is the normative source of moral and legal rules, do we know what ‘humanity’ is? Important philosophical and ontological questions are involved here. Let me have a brief look at its history.

Pre-modern societies did not develop a comprehensive idea of the human species. Free men were Athenians or Spartans, Romans or Carthaginians, but not members of humanity; they were Greeks or barbarians, but not humans. According to classical philosophy, a teleologically determined human nature distributes people across social hierarchies and roles and endows them with differentiated characteristics. The word humanitas appeared for the first time in the Roman Republic as a translation of the Greek word paideia. It was defined as eruditio et institutio in bonas artes (the closest modern equivalent is the German Bildung). The Romans inherited the concept from Stoicism and used it to distinguish between the homo humanus, the educated Roman who was conversant with Greek culture and philosophy and was subjected to the jus civile, and the homines barbari, who included the majority of the uneducated non-Roman inhabitants of the Empire. Humanity enters the western lexicon as an attribute and predicate of homo, as a term of separation and distinction. For Cicero as well as the younger Scipio, humanitas implies generosity, politeness, civilization, and culture and is opposed to barbarism and animality.2 “Only those who conform to certain standards are really men in the full sense, and fully merit the adjective ‘human’ or the attribute ‘humanity.’”3 Hannah Arendt puts it sarcastically: ‘a human being or homo in the original meaning of the word indicates someone outside the range of law and the body politic of the citizens, as for instance a slave – but certainly a politically irrelevant being.’4

If we now turn to the political and legal uses of humanitas, a similar history emerges. The concept ‘humanity’ has been consistently used to separate, distribute, and classify people into rulers, ruled, and excluded. ‘Humanity’ acts as a normative source for politics and law against a background of variable inhumanity. This strategy of political separation curiously entered the historical stage at the precise point when the first proper universalist conception of humanitas emerged in Christian theology, captured in St Paul's statement that there is no Greek or Jew, man or woman, free man or slave (Epistle to the Galatians 3:28). All people are equally part of humanity because they can be saved in God's plan of salvation and, secondly, because they share the attributes of humanity now sharply differentiated from a transcended divinity and a subhuman animality. For classical humanism, reason determines the human: man is a zoon logon echon or animal rationale. For Christian metaphysics, on the other hand, the immortal soul, both carried and imprisoned by the body, is the mark of humanity. The new idea of universal equality, unknown to the Greeks, entered the western world as a combination of classical and Christian metaphysics.

The divisive action of ‘humanity’ survived the invention of its spiritual equality. Pope, Emperor, Prince, and King, these representatives and disciples of God on earth were absolute rulers. Their subjects, the sub-jecti or sub-diti, take the law and their commands from their political superiors. More importantly, people will be saved in Christ only if they accept the faith, since non-Christians have no place in the providential plan. This radical divide and exclusion founded the ecumenical mission and proselytizing drive of Church and Empire. Christ's spiritual law of love turned into a battle cry: let us bring the pagans to the grace of God, let us make the singular event of Christ universal, let us impose the message of truth and love upon the whole world. The classical separation between Greek (or human) and barbarian was based on clearly demarcated territorial and linguistic frontiers. In the Christian empire, the frontier was internalized and split the known globe diagonally between the faithful and the heathen. The barbarians were no longer beyond the city as the city expanded to include the known world. They became ‘enemies within’ to be appropriately corrected or eliminated if they stubbornly refused spiritual or secular salvation.

The meaning of humanity after the conquest of the ‘New World’ was vigorously contested in one of the most important public debates in history. In April 1550, Charles V of Spain called a council of state in Valladolid to discuss the Spanish attitude towards the vanquished Indians of Mexico. The philosopher Ginés de Sepulveda and the Bishop Bartholomé de las Casas, two major figures of the Spanish Enlightenment, debated on opposite sides. Sepulveda, who had just translated Aristotle's Politics into Spanish, argued that “the Spaniards rule with perfect right over the barbarians who, in prudence, talent, virtue, humanity are as inferior to the Spaniards as children to adults, women to men, the savage and cruel to the mild and gentle, I might say as monkey to men.”5 The Spanish crown should feel no qualms in dealing with Indian evil. The Indians could be enslaved and treated as barbarian and savage slaves in order to be civilized and proselytized.

Las Casas disagreed. The Indians have well-established customs and settled ways of life, he argued, they value prudence and have the ability to govern and organize families and cities. They have the Christian virtues of gentleness, peacefulness, simplicity, humility, generosity, and patience, and are waiting to be converted. They look like our father Adam before the Fall, wrote las Casas in his Apologia, they are ‘unwitting’ Christians. In an early definition of humanism, las Casas argued that “all the people of the world are humans under the only one definition of all humans and of each one, that is that they are rational…Thus all races of humankind are one.”6 His arguments combined Christian theology and political utility. Respecting local customs is good morality but also good politics: the Indians would convert to Christianity (las Casas’ main concern) but also accept the authority of the Crown and replenish its coffers, if they were made to feel that their traditions, laws, and cultures are respected. But las Casas’ Christian universalism was, like all universalisms, exclusive. He repeatedly condemned “Turks and Moors, the veritable barbarian outcasts of the nations” since they cannot be seen as “unwitting” Christians. An “empirical” universalism of superiority and hierarchy (Sepulveda) and a normative one of truth and love (las Casas) end up being not very different. As Tzvetan Todorov pithily remarks, there is “violence in the conviction that one possesses the truth oneself, whereas this is not the case for others, and that one must furthermore impose that truth on those others.”7

The conflicting interpretations of humanity by Sepulveda and las Casas capture the dominant ideologies of Western empires, imperialisms, and colonialisms. At one end, the (racial) other is inhuman or subhuman. This justifies enslavement, atrocities, and even annihilation as strategies of the civilizing mission. At the other end, conquest, occupation, and forceful conversion are strategies of spiritual or material development, of progress and integration of the innocent, naïve, undeveloped others into the main body of humanity.

These two definitions and strategies towards otherness act as supports of western subjectivity. The helplessness, passivity, and inferiority of the “undeveloped” others turn them into our narcissistic mirror-image and potential double. These unfortunates are the infants of humanity. They are victimized and sacrificed by their own radical evildoers; they are rescued by the West, which helps them grow, develop and become our likeness. Because the victim is our mirror image, we know what his interest is and impose it “for his own good.” At the other end, the irrational, cruel, victimizing others are projections of the Other of our unconscious. As Slavoj Žižek puts it, “there is a kind of passive exposure to an overwhelming Otherness, which is the very basis of being human…[the inhuman] is marked by a terrifying excess which, although it negates what we understand as ‘humanity’ is inherent to being human.”8 We have called this abysmal other lurking in the psyche and unsettling the ego various names: God or Satan, barbarian or foreigner, in psychoanalysis the death drive or the Real. Today they have become the “axis of evil,” the “rogue state,” the “bogus refugee,” or the “illegal” migrant. They are contemporary heirs to Sepulveda's “monkeys,” epochal representatives of inhumanity.

A comparison of the cognitive strategies associated with the Latinate humanitas and the Greek anthropos is instructive. The humanity of humanism (and of the academic Humanities9) unites knowing subject and known object following the protocols of self-reflection. The anthropos of physical and social anthropology, on the other hand, is the object only of cognition. Physical anthropology examines bodies, senses, and emotions, the material supports of life. Social anthropology studies diverse non-western peoples, societies, and cultures, but not the human species in its essence or totality. These peoples emerged out of and became the object of observation and study through discovery, conquest, and colonization in the new world, Africa, Asia, or in the peripheries of Europe. As Nishitani Osamu puts it, humanity and anthropos signify two asymmetrical regimes of knowledge.10 Humanity is civilization, anthropos is outside or before civilization. In our globalized world, the minor literatures of anthropos are examined by comparative literature, which compares “civilization” with lesser cultures.

The gradual decline of Western dominance is changing these hierarchies. Similarly, the disquiet with a normative universalism, based on a false conception of humanity, indicates the rise of local, concrete, and context-bound normativities.

In conclusion, because ‘humanity’ has no fixed meaning, it cannot act as a source of norms. Its meaning and scope keeps changing according to political and ideological priorities. The continuously changing conceptions of humanity are the best manifestations of the metaphysics of an age. Perhaps the time has come for anthropos to replace the human. Perhaps the rights to come will be anthropic (to coin a term) rather than human, expressing and promoting singularities and differences instead of the sameness and equivalences of hitherto dominant identities.

Thesis 2

Power and morality, empire and cosmopolitanism, sovereignty and rights, law and desire are not fatal enemies. Instead, a historically specific amalgam of power and morality forms the structuring order of each epoch and society.

We will explore the strong internal connection between these superficially antagonistic principles, at the point of their emergence in the late 18th century here and in the post-1989 order in the next part.

The religious grounding of humanity was undermined by the liberal political philosophies of early modernity. The foundation of humanity was transferred from God to (human) nature. Human nature has been interpreted as an empirical fact, a normative value, or both. Science has driven the first approach. The mark of humanity has been variously sought in language, reason or evolution. Man as species existence emerged as a result of legal and political innovations. The idea of humanity is the creation of humanism, with legal humanism at the forefront. Indeed the great 18th century revolutions and declarations paradigmatically manifest and helped construct modern universalism. And yet, at the heart of humanism, humanity remained a strategy of division and classification.

We can briefly follow this contradictory process, which both proclaims the universal and excludes the local, in the text of the French Declaration of the Rights of Man and Citizen, the manifesto of modernity. Article 1, the progenitor of normative universalism, states that ‘men are born and remain free and equal in rights’, a claim repeated in the inaugural article of the 1948 Universal Declaration of Human Rights. Equality and liberty are declared natural entitlements and independent of governments, epochal, and local factors. And yet the Declaration is categorically clear about the real source of universal rights. Article 2 states that ‘the aim of any political association is to preserve the natural and inalienable rights of man’, and Article 3 proceeds to define this association: ‘The principle of all Sovereignty lies essentially with the nation.’

‘Natural’ and eternal rights are declared on behalf of the universal “man.” However, these rights do not pre-exist but were created by the Declaration. A new type of political association, the sovereign nation and its state, and a new type of ‘man’, the national citizen, came into existence and became the beneficiary of rights. In a paradoxical fashion, the declaration of universal principle established local sovereignty. From that point, statehood and territory follow a national principle and belong to a dual time. If the declaration inaugurated modernity, it also started nationalism and its consequences: genocide, ethnic and civil war, ethnic cleansing, minorities, refugees, the stateless. The spatial principle is clear: every state and territory should have its unique dominant nation and every nation should have its own state – a catastrophic development for peace, as its extreme application since 1989 has shown.

The new temporal principle replaced religious eschatology with a historical teleology, which promised the future suturing of humanity and nation. This teleology has two possible variants: either the nation imposes its rule on humanity or universalism undermines parochial divides and identities. Both variants became apparent when the Romans turned Stoic cosmopolitanism into the imperial legal regulation of jus gentium. In France, the first alternative appeared in the Napoleonic war, which allegedly spread the civilizing influence through conquest and occupation (according to Hegel, Napoleon was the world spirit on horseback); while the second was the beginning of a modern cosmopolitanism, in which slavery was abolished and colonial people were given political rights for a limited time after the Revolution. From the imperial deformation of Stoic cosmopolitanism to the current use of human rights to legitimize Western global hegemony, every normative universalism has decayed into imperial globalism. The split between normative and empirical humanity resists its healing, precisely because universal normativity has been invariably defined by a part of humanity.

The universal humanity of liberal constitutions was the normative ground of division and exclusion. A gap was opened between universal “man,” the ontological principle of modernity, and national citizen, its political instantiation and the real beneficiary of rights. The nation-state came into existence through the exclusion of other people and nations. The modern subject reaches her humanity by acquiring political rights of citizenship, which guarantee her admission to the universal human nature by excluding others from that status. The alien as a non-citizen is the modern barbarian. He does not have rights because he is not part of the state and he is a lesser human being because he is not a citizen. One is a man to a greater or lesser degree because one is a citizen to a greater or lesser degree. The alien is the gap between man and citizen.

In our globalised world, not to have citizenship, to be stateless or a refugee, is the worst fate. Strictly speaking, human rights do not exist: if they are given to people on account of their humanity and not of some lower-level group membership, then refugees, the sans papiers migrants, and prisoners in Guantanamo Bay and similar detention centers, who have little if any legal protection, should be their main beneficiaries. They have few, if any, rights. They are legally abandoned, bare life, the homines sacri of the new world order.

The epochal move to the subject is driven and exemplified by legal personality. As species existence, the “man” of the rights of man appears without gender, color, history, or tradition. He has no needs or desires; he is an empty vessel united with all others through three abstract traits: free will, reason, and the soul (now the mind) — the universal elements of human essence. This minimum of humanity allows “man” to claim autonomy, moral responsibility, and legal subjectivity. At the same time, the empirical man who actually enjoys the ‘rights of man’ is a man all too man: a well-off, heterosexual, white, urban male who condenses in his person the abstract dignity of humanity and the real prerogatives of belonging to the community of the powerful. A second exclusion therefore conditions humanism, humanity, and its rights. Mankind excludes improper men, that is, men of no property or propriety, humans without rhyme and reason, women, and racial, ethnic, and sexual minorities. Rights construct humans against a variable inhumanity or anthropology. Indeed, these “inhuman conditions of humanity,” as Pheng Cheah has called them, act as quasi-transcendental preconditions of modern life.11

The contemporary history of human rights can be seen as the ongoing and always failing struggle to close the gap between the abstract man and the concrete citizen; to add flesh, blood and sex to the pale outline of the ‘human’ and extend the dignities and privileges of the powerful (the characteristics of normative humanity) to empirical humanity. This has not happened, however, and is unlikely to be achieved through the action of rights.

Thesis 3

The post-1989 order combines an economic system that generates huge structural inequalities and oppression with a juridico-political ideology promising dignity and equality. This major instability is contributing to its demise.

Why and how did this combination of neo-liberal capitalism and humanitarianism emerge? Capitalism has always moralized the economy and applied a gloss of righteousness to profit-making and unregulated competition precisely because it is so hard to believe. From Adam Smith's ‘invisible hand’ to the assertion that unrestrained egotism promotes the common good or that beneficial effects ‘trickle down’ if the rich get even bigger tax breaks, capitalism has consistently tried to claim the moral high ground.12

Similarly, human rights and their dissemination are not simply the result of the liberal or charitable disposition of the West. The predominantly negative meaning of freedom as the absence of external constraints – a euphemism for keeping state regulation of the economy at a minimum – has dominated the Western conception of human rights and turned them into the perfect companion of neo-liberalism. Global moral and civic rules are the necessary companion of the globalization of economic production and consumption, of the completion of world capitalism that follows neo-liberal dogmas. Over the last 30 years, we have witnessed, without much comment, the creation of global legal rules regulating the world capitalist economy, including rules on investment, trade, aid, and intellectual property. Robert Cooper has called it the voluntary imperialism of the global economy. “It is operated by an international consortium of financial Institutions such as the IMF and the World Bank…These institutions…make demands, which increasingly emphasise good governance. If states wish to benefit, they must open themselves up to the interference of international organisations and foreign states.” Cooper concludes that “what is needed then is a new kind of imperialism, one acceptable to a world of human rights and cosmopolitan values.”13

The (implicit) promise to the developing world is that the violent or voluntary adoption of the market-led, neo-liberal model of good governance and limited rights will inexorably lead to Western economic standards. This is fraudulent. Historically, the Western ability to turn the protection of formal rights into a limited guarantee of material, economic, and social rights was partly based on huge transfers from the colonies to the metropolis. While universal morality militates in favor of reverse flows, Western policies on development aid and Third World debt indicate that this is not politically feasible. Indeed, the successive crises and re-arrangements of neoliberal capitalism lead to dispossession and displacement of family farming by agribusiness, to forced migration and urbanization. These processes expand the number of people without skills, status, or the basics for existence. They become human debris, the waste-life, the bottom billions. This neo-colonial attitude has now been extended from the periphery to the European core. Greece, Portugal, Ireland, and Spain have been subjected to the rigors of the neoliberal “Washington Consensus” of austerity and destruction of the welfare state, despite its failure in the developing world. More than half the young people of Spain and Greece are permanently unemployed and a whole generation is being destroyed. But this gene-cide, to coin a term, has not generated a human rights campaign.

As Immanuel Wallerstein put it, “if all humans have equal rights, and all the peoples have equal rights, then we cannot maintain the kind of inegalitarian system that the capitalist world economy has always been and always will be.”14 When the unbridgeability of the gap between the missionary statements on equality and dignity and the bleak reality of obscene inequality becomes apparent, human rights will lead to new and uncontrollable types of tension and conflict. Spanish soldiers met the advancing Napoleonic armies shouting “Down with freedom!” Today people meet the ‘peacekeepers’ of the new world order with cries of “Down with human rights!”

Social and political systems become hegemonic by turning their ideological priorities into universal principles and values. In the new world order, human rights are the perfect candidate for this role. Their core principles, interpreted negatively and economically, promote neo-liberal capitalist penetration. Under a different construction, their abstract provisions could subject the inequalities and indignities of late capitalism to withering attack. But this cannot happen as long as they are used by the dominant powers to spread the ‘values’ of an ideology based on the nihilism and insatiability of desire.

Despite differences in content, colonialism and the human rights movement form a continuum, episodes in the same drama, which started with the great discoveries of the new world and is now carried out in the streets of Iraq and Afghanistan: bringing civilization to the barbarians. The claim to spread Reason and Christianity gave western empires their sense of superiority and their universalizing impetus. The urge is still there; the ideas have been redefined but the belief in the universality of our world-view remains as strong as that of the colonialists. There is little difference between imposing reason and good governance and proselytizing for Christianity and human rights. They are both part of the cultural package of the West, aggressive and redemptive at the same time.

Thesis 4

Universalism and communitarianism, rather than being opponents, are two types of humanism dependent on each other. They are confronted by the ontology of singular equality.

The debate about the meaning of humanity as the grounding normative source is conducted between universalists and communitarians. The universalist claims that cultural values and moral norms should pass a test of universal applicability and logical consistency and often concludes that, if there is one moral truth but many errors, it is incumbent upon its agents to impose it on others.

Communitarians start from the obvious observation that values are context-bound and try to impose them on those who disagree with the oppressiveness of tradition. Both principles, when they become absolute essences and define the meaning and value of humanity without remainder, can find everything that resists them expendable.

Kosovo is a good example. The proud Serbians killed and ‘cleansed’ ethnic Albanians in order to protect the integrity of the ‘cradle’ of their nation (interestingly, like most wild nationalisms, celebrating a historic defeat). NATO bombers killed people in Belgrade and Kosovo from 35,000 feet in order to defend the rights of humanity. Both positions exemplify, perhaps in different ways, the contemporary metaphysical urge: they have made an axiomatic decision as to what constitutes the essence of humanity and follow it with a stubborn disregard for alternatives. They are the contemporary expressions of a humanism that defines the ‘essence’ of humanity all the way to its end, as telos and finish. To paraphrase Emmanuel Levinas, to save the human we must defeat this type of humanism.

The individualism of universal principles forgets that every person is a world and comes into existence in common with others, that we are all in community. Every human is a singular being, unique in her existence as an unrepeatable concatenation of past encounters, desires, and dreams with future projections, expectations, and plans. Every single person forms a phenomenological cosmos of meaning and intentionality, in relations of desire, conversation, and recognition with others. Being in common is an integral part of being self: self is exposed to the other, it is posed in exteriority, the other is part of the intimacy of self. My face is “always exposed to others, always turned toward an other and faced by him or her never facing myself.”15

Indeed being in community with others is the opposite of common being or of belonging to an essential community. Communitarians, on the other hand, define community through the commonality of tradition, history, and culture, the various past crystallizations whose inescapable weight determines present possibilities. The essence of the communitarian community is often to compel or ‘allow’ people to find their ‘essence,’ common ‘humanity’ now defined as the spirit of the nation or of the people or the leader. We have to follow traditional values and exclude what is alien and other. Community as communion accepts human rights only to the extent that they help submerge the I into the We, all the way till death, the point of ‘absolute communion’ with dead tradition.16

Both universal morality and cultural identity express different aspects of human experience. Their comparison in the abstract is futile and their differences are not pronounced. When a state adopts ‘universal’ human rights, it will interpret and apply them, if at all, according to local legal procedures and moral principles, making the universal the handmaiden of the particular. The reverse is also true: even those legal systems that jealously guard traditional rights and cultural practices against the encroachment of the universal are already contaminated by it. All rights and principles, even if parochial in their content, share the universalizing impetus of their form. In this sense, rights carry the seed of the dissolution of community and the only defense is to resist the idea of rights altogether, something impossible in global neo-liberalism. The claims of universality and tradition, rather than standing opposed in mortal combat, have become uneasy allies, whose fragile liaison has been sanctioned by the World Bank.

From our perspective, humanity cannot act as a normative principle. Humanity is not a property shared. It is discernible in the incessant surprising of the human condition and its exposure to an undecided open future. Its function lies not in a philosophical essence but in its non-essence, in the endless process of re-definition and the necessary but impossible attempt to escape external determination. Humanity has no foundation and no end; it is the definition of groundlessness.

Thesis 5

In advanced capitalist societies, human rights de-politicize politics.

Rights form the terrain on which people are distributed into rulers, ruled, and excluded. Power's mode of operation is revealed if we observe which people are given or deprived of which rights at which particular place or point in time. In this sense, human rights both conceal and affirm the dominant structure of a period and help combat it. Marx was the first to realize the paradoxical nature of rights. Natural rights emerged as a symbol of universal emancipation, but they were at the same time a powerful weapon in the hands of the rising capitalist class, securing and naturalizing emerging dominant economic and social relations. They were used to place the central institutions of capitalism, such as religion, property, contractual relations, and the family, beyond political challenge, thus providing them the best protection possible. Ideologies, private interests, and egotistical concerns appear natural, normal, and for the public good when they are glossed over by rights vocabulary. As Marx inimitably put it, “freedom, equality, property and Bentham.”17

Early human rights were historical victories of groups and individuals against state power while at the same time promoting a new type of domination. As Giorgio Agamben argues, they “simultaneously prepared a tacit but increasing inscription of individuals’ lives within the state order, thus offering a new and more dreadful foundation for the very sovereign power from which they wanted to liberate themselves.”18 In late capitalism, with its proliferating bio-political regulation, the endlessly multiplying rights paradoxically increase power's investment in bodies.

If classical natural rights protected property and religion by making them ‘apolitical’, the main effect of rights today is to depoliticize politics itself. Let us introduce a key distinction in recent political philosophy between politics (la politique) and the political (le politique). According to Chantal Mouffe, politics is the terrain of routine political life, the activity of debating, lobbying, and horse-trading that takes place around Westminster and Capitol Hill.19 The ‘political,’ on the other hand, refers to the way in which the social bond is instituted and concerns deep rifts in society. The political is the expression and articulation of the irreducibility of social conflict. Politics organizes the practices and institutions through which order is created, normalizing social co-existence in the context of conflict provided by the political.

This deep antagonism is the result of the tension between the structured social body, where every group has its role, function, and place, and what Jacques Rancière calls “the part of no part.” These are groups that have been radically excluded from the social order: they are invisible, outside the established sense of what exists and is acceptable. Politics proper erupts only when an excluded part demands to be included and must change the rules of inclusion to achieve that. When they succeed, a new political subject is constituted, in excess of the hierarchized and visible group of groups, and a division is introduced into the pre-existing common sense.20

What is the role of human rights in this division between politics and the political? Right claims reinforce rather than challenge established arrangements. The claimant accepts the established power and distribution orders and transforms the political claim into a demand for admission to the law. The role of law is to transform social and political tensions into a set of solvable problems regulated by rules and hand them over to rule experts. The rights claimant is the opposite of the revolutionaries of the early declarations, whose task was to change the overall design of the law. To this extent, his actions abandon the original commitment of rights to resist and oppose oppression and domination. The ‘excessive’ subjects, who stand for the universal from a position of exclusion, have been replaced by social and identity groups seeking recognition and limited re-distribution.

In the new world order the right-claims of the excluded are foreclosed by political, legal, and military means. Economic migrants, refugees, prisoners of the war on terror, the sans papiers, inhabitants of African camps, these ‘one use humans’ are the indispensable precondition of human rights but, at the same time, they are the living, or rather dying, proof of their impossibility. Successful human rights struggles have undoubtedly improved the lives of people by marginal re-arrangements of social hierarchies and non-threatening re-distributions of the social product. But their effect is to de-politicize conflict and remove the possibility of radical change.

We can conclude that human rights claims and struggles bring to the surface the exclusion, domination and exploitation, and inescapable strife that permeates social and political life. But, at the same time, they conceal the deep roots of strife and domination by framing struggle and resistance in the terms of legal and individual remedies which, if successful, lead to small individual improvements and a marginal re-arrangement of the social edifice. Can human rights re-activate a politics of resistance? The intrinsic link between early natural rights, (religious) transcendence, and political radicalism opened the possibility. It is still active in parts of the world not fully incorporated in the biopolitical operations of power. But only just. The metaphysics of the age is that of the deconstruction of essence and meaning, the closing of the divide between ideal and real, the subjection of the universal to the dominant particular. Economic globalization and semiotic monolingualism are carrying this task out in practice; its intellectual apologists do it in theory. The political and moral duty of the critic is to keep the rift open and to discover and fight for transcendence in immanence.

Thesis 6

In advanced capitalist societies, human rights become strategies for the publicization and legalization of (insatiable) individual desire.

Liberal theories from Immanuel Kant to John Rawls present the self as a solitary and rational entity endowed with natural characteristics and rights and in full control of himself. Rights to life, liberty, and property are presented as integral to humanity's well-being. The social contract (or its heuristic restatement through the “original position”) creates society and government but preserves these rights and makes them binding on government. Rights, and today human rights, are pre-social; they belong to humans precisely because they are humans. We use this natural patrimony as tools or instruments to confront the outside world, to defend our interests, and to pursue our life plans.

This position is sharply contested by Hegelian and Marxist dialectics, by hermeneutics, and by psychoanalysis. The human self is not a stable and isolated entity that, once formed, goes into the world and acts according to pre-arranged motives and intentions. The self is created through constant interactions with others; the subject is always inter-subjective. My identity is constructed in an ongoing dialogue and struggle for recognition, in which others (both people and institutions) acknowledge certain characteristics, attributes, and traits as mine, helping create my own sense of self. Identity emerges out of this conversation and struggle with others, which follows the dialectic of desire. Law is a tool and effect of this dialectic; human rights acknowledge the constitutive role of desire.

Hegel's basic idea can be put simply. The self is both separate from and dependent upon the external world. Dependence on the not-I, both the object and the other person, makes the self realize that he is not complete but lacking and that he is constantly driven by desire. Life is a continuous struggle to overcome the foreignness of the other person or object. Survival depends on overcoming this radical split from the not-I, while maintaining the sense of uniqueness of self.21

Identity is therefore dynamic, always on the move. I am in ongoing dialogue with others, a conversation that keeps changing others and re-drawing my own self-image. Human rights do not belong to humans and do not follow the dictates of humanity; they construct humans. A human being is someone who can successfully claim human rights, and the group of rights we have determines how “human” we are; our identity depends on the bundle of rights we can successfully mobilize in relations with others. If this is the case, rights must be linked with deep-seated psychological functions and needs. From the heights of Hegelian dialectics, we now move to the much darker territory of Freudian psychoanalysis.

Jus vitam instituere, the law constitutes life, states a Roman maxim. For psychoanalysis it remains true. We become independent, speaking subjects by entering the symbolic order of language and law. But this first ‘symbolic castration’ must be supplemented by a second that makes us legal subjects. It introduces us into the social contract, leaving behind the family life of protection, love, and care. The symbolic order imposes upon us the demands of social life. God, King, or the Sovereign act as universal fathers, representing an omnipotent and unitary social power, which places us in the social division of labor. If, according to Jacques Lacan, the name of the father makes us speaking subjects, the name of the Sovereign turns us into legal subjects and citizens.

This second entry into the law denies, like symbolic castration, the perceived wholeness of family intimacy and replaces it with partial recognitions and incomplete entitlements. Rights by their nature cannot treat the whole person. In law, a person is never a complete being but a persona, ritual or theatrical mask, that hides his or her face under a combination of partial rights. The legal subject is a combination of overlapping and conflicting rights and duties; they are law's blessing and curse. Rights are manifestations of individual desire as well as tools of societal bonding. Following the standard Lacanian division, rights have symbolic, imaginary, and real aspects. Their symbolic function places us in the social division of labor, hierarchy, and exclusion, the imaginary gives us a (false) sense of wholeness while the real disrupts the pleasures of the symbolic and the falsifications of the imaginary. Psychoanalysis offers the most advanced explanation of the constitutive and contradictory work of rights.

The symbolic function of rights bestows legal personality and introduces people to independence away from the intimacy of family. Law and rights construct a formal structure, which allocates us to a place in a matrix of relations strictly indifferent to the needs or desires of flesh and blood people. Legal rights offer the minimum recognition of abstract humanity, formal equivalence and moral responsibility, irrespective of individual characteristics. At the same time, they place people on a grid of distinct and hierarchical roles and functions, of prohibitions, entitlements and exclusions. Social and economic rights add a layer of difference to abstract similarity; they recognize gender, race, religion, and sexuality, in part moving recognition from the abstract equality of humanity to differentiated qualities, characteristics, and predications. Human rights may promise universal happiness but their empirical existence and enforcement depends on genealogies, hierarchies of power and contingencies that allocate the necessary resources ignoring and dismissing expectations or needs. The legal person that rights and duties construct resembles a caricature of the actual human self. The face has been replaced by an image in the cubist style; the nose comes out of the mouth, eyes protrude on the sides, forehead and chin are reversed. It projects a three-dimensional object onto a flat canvas.

The integrity of self denied by the symbolic order of rights returns in the imaginary. Human rights promise an end to conflict, social peace and well-being (the pursuit of happiness was an early promise in the American Declaration of Independence). A society of rights offers an ideal place, a stage and supplement for the ideal ego. As a man of rights, I see myself as someone with dignity, respect, and self-respect, at peace with the world. A society that guarantees rights is a good place, peaceful and affluent, a social order made for and fitting the individual who stands at its center. A legal system that protects rights is rationally coherent and closed (Ronald Dworkin calls it a “seamless web”), morally good (it has principles and the consequent “right” answers to all “hard” problems), pragmatically efficient.

The imaginary domain of rights creates an immediate, imaged and imagined bond between the subject, her ideal ego, and the world. Human rights project a fantasy of wholeness, which unites body and soul into an integrated self. It is a beautiful self that fits in a good world, a society made for the subject. The anticipated completeness, the projected future integrity that underpins present identity is, however, non-existent and impossible; moreover, it differs from person to person and from community to community. Our imaginary identification with a good society accepts too easily that the language, signs and images of human rights are (or can become) our reality. The right to work, people assert, exists since it is written in the Universal Declaration, the international Covenants, the Constitution, the law, the statements of politicians. Billions of people have no food, no employment, no education, or health care – but this brutal fact does not weaken the assertion of the ideal. The necessary replacement of materiality by signs, of needs and desires by words and images makes people believe that the mere existence of legal texts and institutions, with little performance or action, affects and completes bodies.

The imaginary promoted by human rights enthusiasts presents a world made for my sake, in which the law meets (or ought to and will meet) my desires. This happy identification with the social and legal system is based on misrecognition. The world is indifferent to my being, happiness or travails. The law is not coherent or just. Morality is not law's business and peace is always temporary and precarious, never perpetual. The state of eu zein or well-being, the terminal point of human rights, is always deferred, its promise postponed, its performance impossible. For the middle classes, to be sure, human rights are birth-right and patrimony. For the unfortunates of the world, on the other hand, they are only vague promises, fake supports for offering obedience, with their delivery permanently frustrated. Like the heaven of Christianity, human rights form a receding horizon that allows people to endure daily humiliations and subjugations.

The imaginary of rights is gradually replacing social justice. The decolonization struggles, the civil rights and counter-cultural movements fought for an ideal society based on justice and equality. In the human rights age, the pursuit of collective material welfare has given way to individual gratification and the avoidance of evil. The rights imaginary goes into overdrive when it turns images into “reality,” when legal clauses and terms replace food and shelter, when weasel words become the garb and grab of power. Rights emphasize the individual, his autonomy, and his place in the world. Like all imaginary identifications, they repress the recognition that the subject is inter-subjective and that the economic and social order is strictly indifferent to the fate of any particular individual. According to Louis Althusser, ideology is not “false consciousness” but is made up of ways of living, practices, and experiences that misrecognize our place in the world. It is “the imaginary relationship of individuals to their real conditions of existence.” In this sense, human rights are ideology at its strongest but one very different from that of Michael Ignatieff.22

Finally, the symbolic and imaginary operation of rights finds its limit in the real. We hover around the vortex of the real: the lack at the core of subjectivity both causes our projects to fail and creates the drive to continue the effort. When we make a demand, we not only ask the other to fulfill a need but also to offer us unreserved love. An infant, who asks for his mother's breast, needs food but also asks for his mother's attention and love. Desire is always the desire of the other and signifies precisely the excess of demand over need. Each time my need for an object enters language and addresses the other, it is the request for recognition and love. But this demand for wholeness and unqualified recognition cannot be met by the big Other (language, law, the state) or the other person. The big Other is the cause and symbol of lack. The other person cannot offer what the subject lacks because he is also lacking. In our appeal to the other, we confront lack, a lack that can neither be filled nor fully symbolized.

Rights allow us to express our needs in language by formulating them as a demand. A human rights claim involves two demands addressed to the other: a specific request in relation to one aspect of the claimant's personality or status (such as to be left alone, not to suffer in one's bodily integrity, and to be treated equally) and, in addition, a much wider demand to have one's whole identity recognized in its specific characteristics. When a person of color claims, for example, that the rejection of a job application amounted to a denial of her human right to non-discrimination, she makes two related but relatively independent claims. The rejection is both an unfair denial of the applicant's need for a job and a denigration of her wider identity. Every right therefore links a need of a part of the body or personality with what exceeds need, the desire that the claimant be recognized and loved as a whole and complete person.

The subject of rights tries to find the missing object that will fill lack and turn him into a complete integral being in the desire of the other. But this object does not exist and cannot be possessed. Rights offer the hope that subject and society can become whole: ‘if only my attributes and characteristics were given legal recognition, I would be happy’; ‘if only the demands of human dignity and equality were fully enforced, society would be just.’ But desire cannot be fulfilled. Rights become a fantastic supplement that arouses but never satiates the subject's desire. Rights always agitate for more rights. They lead to new areas of claim and entitlement that again and again prove insufficient.

Today human rights have become the mark of civility. But their success is limited. No right can earn me the full recognition and love of the other. No bill of rights can complete the struggle for a just society. Indeed the more rights we introduce, the greater the pressure is to legislate for more, to enforce them better, to turn the person into an infinite collector of rights, and to turn humanity into an endlessly proliferating mosaic of laws. The law keeps colonizing life and the social world, while the endless spiral of more rights, acquisitions, and possessions fuels the subject's imagination and dominates the symbolic world. Rights become the reward for psychological lack and political impotence. Fully positivized rights and legalized desire extinguish the self-creating potential of human rights. They become the symptom of all-devouring desire – a sign of the Sovereign or the individual – and at the same time its partial cure. In a strange and paradoxical twist, the more rights we have the more insecure we feel.

But there is one right that is closely linked with the real of radical desire: the right to resistance and revolt. This right is close to the death drive, to the repressed call to transcend the distributions of the symbolic order and the genteel pleasures of the imaginary for something closer to our destructive and creative inner kernel. Taking risks and not giving up on your desire is the ethical call of psychoanalysis. Resistance and revolution are its social equivalent. In the same way that the impossible and disavowed real organizes the psyche, the right to resistance forms the void at the heart of the system of law, which protects it from sclerosis and ossification.23

We can conclude that rights are about recognition (symbolic) and distribution (imaginary); the exception is the right to resistance and revolt, which belongs to the real.

Thesis 7

For a cosmopolitanism to come (or the idea of communism).

Against imperial arrogance and cosmopolitan naivety, we must insist that global neo-liberal capitalism and human-rights-for-export are part of the same project. The two must be uncoupled; human rights can contribute little to the struggle against capitalist exploitation and political domination. Their promotion by western states and humanitarians turns them into a palliative: it is useful for a limited protection of individuals but it can blunt political resistance. Human rights can re-claim their redemptive role in the hands and imagination of those who return them to the tradition of resistance and struggle against the advice of the preachers of moralism, suffering humanity, and humanitarian philanthropy.

Liberal equality as a regulative principle has failed to close the gap between rich and poor. Equality must become an axiomatic presupposition: People are free and equal; equality is not the effect but the premise of action. Whatever denies this simple truth creates a right and duty of resistance. The equality of legal rights has consistently supported inequality; axiomatic equality (each counts as one in all relevant groups) is the impossible boundary of rights culture. It means that healthcare is due to everyone who needs it, irrespective of means; that rights to residence and work belong to all who find themselves in a part of the world irrespective of nationality; that political activities can be freely engaged by all irrespective of citizenship and against the explicit prohibitions of human rights law.

The combination of the right to resistance and axiomatic equality projects a humanity opposed both to universal individualism and communitarian closure. In the age of globalization, of mondialization, we suffer from a poverty of world. Each one is a cosmos but we no longer have a world, only a series of disconnected situations. Everyone a world: a knot of past events and stories, people and encounters, desires and dreams. This is also the point of ekstasis, of opening up and moving away, immortals in our mortality, symbolically finite but imaginatively infinite. The cosmopolitan capitalists promise to make us citizens of the world under a global sovereign and a well-defined and terminal humanity. This is the universalization of the lack of world, the imperialism and empiricism to which every cosmopolitanism succumbs.

But we should not give up the universalizing impetus of the imaginary, the cosmos that uproots every polis, disturbs every filiation, contests all sovereignty and hegemony. Resistance and radical equality map out an imaginary domain of rights which is uncannily close to utopia. According to Ernst Bloch, the present foreshadows a future not yet and, one should add, not ever possible. The future projection of an order in which man is no longer a “degraded, enslaved, abandoned, or despised being” links the best traditions of the past with a powerful “reminiscence of the future.”24 It disturbs the linear concept of time and, like psychoanalysis, it imagines the present in the image of a prefigured beautiful future which, however, will never come to be. In this sense, the imaginary domain is necessarily utopian, non-existing. And yet, this non-place or nothingness grounds our sense of identity, in the same way that utopia helps create a sense of social identity. We have re-discovered in Tunisia and Tahrir Square, in Madrid's Puerta del Sol and Athens’ Syntagma Square what goes beyond and against liberal cosmopolitanism, the principle of its excess. This is the promise of the cosmopolitanism to come – or the idea of communism.25

The cosmopolitanism to come is neither the terrain of nations nor an alliance of classes, although it draws from the treasure of solidarity. Dissatisfaction with the nation, state, and the inter-national comes from a bond between singularities, which cannot be turned into essential humanity, nation, or state. The cosmos to come is the world of each unique one, of whoever or anyone; the polis, the infinite encounters of singularities. What binds me to a Palestinian, a sans papiers migrant, or an unemployed youth is not membership of humanity, nation, state, or community but a bond that cannot be contained in the dominant interpretations of humanity and cosmos or of polis and state.

Law, the principle of the polis, prescribes what constitutes a reasonable order by accepting and validating some parts of collective life, while banning and excluding others, making them invisible. Law and rights link language with things or beings; they nominate what exists and condemn the rest to invisibility and marginality. As the formal and dominant decision about existence, law carries huge ontological power. Radical desire, on the other hand, is the longing for what has been banned and declared impossible by the law: what confronts past catastrophes and incorporates the promise of the future.

The axiom of equality and the right to resistance prepare militant subjects in the ongoing struggle between justice and injustice. This being together of singularities in resistance is constructed here and now with friends and strangers in acts of hospitality, in cities of resistance, Cairo, Madrid, Athens.

NOTES
1- Costas Douzinas, The End of Human Rights (Oxford: Hart, 2000); Costas Douzinas and Adam Gearey, Critical Jurisprudence (Oxford: Hart, 2005); Costas Douzinas, Human Rights and Empire (Abingdon: Routledge, 2007). This essay summarizes and moves forward this alternative approach to rights. The final part of this work entitled The Radical Philosophy of Right will be published by Routledge in 2014.
2- Hannah Arendt, On Revolution (New York: Viking Press, 1965), 107.
3- B.L. Ullman, “What are the Humanities?” Journal of Higher Education 17/6 (1946), at 302.
4- H.C. Baldry, The Unity of Mankind in Greek Thought (Cambridge: Cambridge University Press, 1965), 201.
5- Ginés de Sepúlveda, Demócrates Segundo o De las Justas Causas de la Guerra contra los Indios (Madrid: Instituto Francisco de Vitoria, 1951), 33, quoted in Tzvetan Todorov, The Conquest of America, trans. Richard Howard (Norman: University of Oklahoma Press, 1999), 153.
6- Bartolomé de las Casas, Obras Completas, Vol. 7 (Madrid: Alianza Editorial, 1922), 536–7.
7- Todorov, The Conquest of America 166, 168.
8- Slavoj Žižek, “Against Human Rights,” New Left Review 34 (July-August 2005).
9- Costas Douzinas, “For a Humanities of Resistance,” Critical Legal Thinking, December 7, 2010
10- Nishitani Otamu, “Anthropos and Humanity: Two Western Concepts of ‘Human Being’” in Naoki Sakai and Jon Solomon (eds.), Translation, Biopolitics, Colonial Difference (Hong Kong: Hong Kong University Press, 2006), 259–274.
11- Pheng Cheah, Inhuman Conditions (Cambridge, Mass.: Harvard University Press, 2006), Chapter 7.
12- Jean-Claude Michéa, The Realm of Lesser Evil trans. David Fernbach (Cambridge and Malden: Polity Press, 2009), Chapter 3.
13- Robert Cooper, “The New Liberal Imperialism,” The Observer (April 1 2002), 3.
14- Immanuel Wallerstein, “The Insurmountable Contradictions of Liberalism,” South Atlantic Quarterly (1995), 176–7.
15- Jean-Luc Nancy, The Inoperative Community (Minneapolis: University of Minnesota Press, 1991), xxxviii.
16- Ibid.
17- Karl Marx, Capital, Volume One (Harmondsworth: Penguin, 1976), 280.
18- Giorgio Agamben, Homo Sacer: Sovereign Power and Bare Life (Stanford University Press, 1998), 121.
19- Chantal Mouffe, On the Political (London: Routledge, 2005), 8–9.
20- Jacques Rancière, Disagreement, trans. Julie Rose (Minneapolis: University of Minnesota Press, 1998); “Who is the Subject of the Rights of Man?” in Ian Balfour and Eduardo Cadava (eds.), “And Justice for All?,” special issue, South Atlantic Quarterly 103, no. 2–3 (2004), 297.
21- Costas Douzinas, “Identity, Recognition, Rights or What Can Hegel Teach Us About Human Rights?” Journal of Law and Society 29 (2002), 379–405.
22- Michael Ignatieff, Human Rights as Politics and Ideology (Princeton and Oxford: Princeton University Press, 2001).
23- Costas Douzinas, “Adikia: On Communism and Rights,” in Costas Douzinas and Slavoj Žižek (eds.), The Idea of Communism (London: Verso, 2010), 81–100.
24- Ernst Bloch, Natural Law and Human Dignity, trans. D.J. Schmidt (Cambridge, Mass.: MIT Press, 1988), xxviii.
25- Costas Douzinas, Philosophy and Resistance in the Crisis (Cambridge: Polity, 2013), Chapters 9, 10, and 11.

The Paradox of Knowing

David Dunning, Department of Psychology, Uris Hall, Cornell University, Ithaca, New York            
The Psychologist, The British Psychological Society, Volume 26 – Part 6 – Pages:414-417 (June 2013)

Why do we have greater insight into others than ourselves?

People appear to know other people better than they know themselves, at least when it comes to predicting future behaviour and achievement. Why? People display a rather accurate grasp of human nature in general, knowing how social behaviour is shaped by situational and internal constraints. They just exempt themselves from this understanding, thinking instead that their own actions are more a product of their agency, intentions, and free will – a phenomenon we term ‘misguided exceptionalism’. How does this relate to cultural differences in self-insight? And are there areas of human life where people may still know themselves better than they know other people?

To know others is wisdom, to know one’s self is enlightenment. – Chinese philosopher Lao Tzu

For the past twenty-odd years, the main discovery in my lab has been finding out just how unenlightened people are, at least in the terms that Lao Tzu put it. People appear to harbour many and frequent false beliefs about their own competence, character, place in the social world, and future (Dunning, 2005; Dunning et al., 2004). If ‘knowing yourself’ is a task that many philosophers and social commentators – from both Western and Eastern traditions – have exhorted people to accomplish, it appears that very few are taking the advice seriously enough to succeed.

But here is the rub. Although people may not possess much enlightenment, according to Lao Tzu’s criteria, they do instead seem to display a lot of wisdom. At least when it comes to making predictions about the future, people achieve more accuracy forecasting what their peers will do than what they themselves will do. Through their predictions, they seem to possess a rough but valid wisdom about the general dynamics of human nature and how it is reflected in people’s actions. They just fail to display the same sagacity when it comes to understanding their own personal dynamics. In effect, they appear to be much better social psychologists than self-psychologists.

The ‘holier-than-thou’ phenomenon

The ‘holier-than-thou’ phenomenon in behavioural prediction perhaps best illustrates this paradox of greater insight into other people than the self. The phenomenon is defined as people predicting they are far more likely to engage in socially desirable acts than their peers. Across several studies, we have asked people to forecast how they will behave in situations that have an ethical, civic or altruistic tone. For example, we ask whether they will donate to charity, or cooperate with another person in an experiment, or vote in an upcoming election. We also ask them the likelihood that their peers will do the same. Consistently, we find that respondents claim that they are much more likely to act in a socially desirable way than their peers are (Balcetis & Dunning, 2008, 2013; Epley & Dunning, 2000, 2006).

But here is the key twist: We then expose an equivalent set of respondents to the actual situation, to see which prediction – self or peer – better anticipates the true rate at which people ‘do the right thing’. Do self-predictions better anticipate the rate at which people act in desirable ways, with people thus showing undue cynicism about the character of their peers? Or do peer predictions prove more accurate, demonstrating that people believe too much in their better selves? In our studies we find that people’s peer predictions are the more accurate ones. Self-predictions, in contrast, are wildly optimistic. For example, in one study, a full 90 per cent of students in a large-lecture psychology class eligible to vote in an upcoming US presidential election said that they would. They then provided another student with some relevant information about themselves, such as how interested they were in the election and how pleased they would be if their favoured candidate won. Peers given such information predicted that only 67 per cent of respondents would vote. Actual voting rate among those respondents when the election arrived: 61 per cent (Epley & Dunning, 2000, Study 2).

Time and again we have seen such a pattern. For example, 83 per cent of students forecast that they would buy a daffodil for charity in an upcoming drive for the American Cancer Society, but that only 56 per cent of their peers would. When we checked back, we found that only 43 per cent had done so (Epley & Dunning, 2000, Study 1). In a Prisoner’s Dilemma game played in the lab, 84 per cent of participants said they would cooperate rather than betray their partner, but that only 64 per cent of their peers would do likewise. The actual cooperation rate was 61 per cent (Epley & Dunning, Study 2).

Accuracy as correlation

But wait, a careful reader might say. People might prove overconfident about their own behaviour, but surely they know more about themselves than other people do. This accuracy just reveals itself in a different way. Namely, if we look instead at the correlation between people’s predictions and their actions, we might find a stronger relationship for self-predictions than for peers. More specifically, people may overpredict the chance that they will vote. But those who say they will vote will still be much more likely to vote than those who say they will not. Forecasts from peers will fail to separate voters from nonvoters so successfully.

This assertion is plausible, but it surprisingly fails the empirical test. When we look at accuracy from a correlational perspective, we find that peers overall at least equal the accuracy rates of those making self-predictions (see also Spain et al., 2000; Vazire & Mehl, 2008). In one of our voting studies, peers who received just five scant pieces of information about another person’s view of an upcoming election predicted that person just as well (r = .48) as did people predicting their own actions (r = .51) in correlational terms. Other researchers report similar findings: all it takes is a few pieces of information for a peer to achieve accuracy rates that equal the self. The behaviour can be performance in an upcoming exam (Helzer & Dunning, 2012) or performance on IQ tests (Borkenau & Liebler, 1993).
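To make this correlational sense of accuracy concrete, here is a minimal illustrative sketch in Python. The numbers, variable names and the assumed forecasting process are invented for the purpose of illustration and are not taken from any study cited here; the sketch simply correlates forecasted probabilities of voting with actual yes/no behaviour, once for self-forecasts and once for peer forecasts, and shows how self-forecasts can be badly miscalibrated in level while yielding a correlation no better than the peers’.

import numpy as np

# Minimal, purely hypothetical sketch of "accuracy as correlation".
# Nothing here is data from Epley & Dunning or any other cited study.
rng = np.random.default_rng(0)

# 50 hypothetical respondents: 1 = actually voted, 0 = did not vote.
voted = rng.integers(0, 2, size=50)

# A noisy signal about each respondent that both kinds of forecaster can use.
signal = voted + rng.normal(0.0, 0.6, size=50)

# Self-forecasts: inflated and compressed near the top ("of course I'll vote").
# Peer forecasts: centred nearer the true base rate and spread more widely.
self_pred = np.clip(0.90 + 0.05 * signal, 0, 1)
peer_pred = np.clip(0.50 + 0.25 * signal, 0, 1)

def accuracy_as_correlation(predictions, outcomes):
    # Point-biserial correlation between graded predictions and 0/1 behaviour.
    return np.corrcoef(predictions, outcomes)[0, 1]

print(f"mean self-forecast {self_pred.mean():.2f} vs actual rate {voted.mean():.2f}")
print(f"self r = {accuracy_as_correlation(self_pred, voted):.2f}")
print(f"peer r = {accuracy_as_correlation(peer_pred, voted):.2f}")
# The mean levels differ sharply, yet the two correlations come out comparable:
# even by this more forgiving correlational standard, peers do at least as well.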

And, if the action is one that people find significant, and if peers are familiar with the person in question, then peer prediction begins to outdo self-prediction. Roommates and parents, for example, are better than the person concerned at predicting how long his or her college romance will last (MacDonald & Ross, 1999). Ratings by supervisors and peers outclass self-ratings in predicting how well surgical residents will do on their final surgical exams (Risucci et al., 1989). Ratings by peers also do better than self-impressions at predicting who will receive an early promotion in the Navy (Bass & Yammarino, 1991).

Misguided exceptionalism

Taken together, all this research suggests that people tend to possess useful insight when it comes to understanding human nature. But this research also suggests that people fail to apply this wisdom to the self. In a sense, people exempt themselves from whatever valid psychological understanding they have about their friends and contemporaries. Instead, they tend to think of themselves as special, as responding to a different psychological dynamic. The rules that govern other people’s psychology fail to apply to them. We have come to call this tendency misguided exceptionalism.

What is it about their understanding of other people that respondents exempt themselves from? We contend, with data, that people recognise that others tend to be constrained in what they do. There are forces, both internal and external to the individual, which are out of their control but that influence how they behave. The smell of freshly-baked chocolate chip cookies does break people’s willpower. The opinions of the crowd place pressures on other people to conform.

But these constraints are for other people. When it comes to our own behaviour, we tend to emphasise instead our own agency, the force of our own character, and what we aspire, intend or plan to do. Relative to others, we believe that our actions are largely a product of our own intentions, aspirations and free will (Buehler et al., 1994; Critcher & Dunning, 2013; Koehler & Poon, 2006; Kruger & Gilovich, 2004; Peetz & Buehler, 2009). We consider ourselves free agents generally immune to the constraints that dictate other people’s actions.

Much recent empirical work reveals this differential emphasis for the self. People think their futures are more wide-open and unpredictable, and that their intentions and desires will be more important authors of their futures than similar intentions and desires will be for other people (Pronin & Kugler, 2010). When predicting their own exam performance, people emphasise (actually, too much, it turns out) their aspiration level, that is, the score they are working to achieve (Helzer & Dunning, 2012), but they emphasise instead a person’s past achievement (appropriately, it turns out) in predictions of others. College students consider their future potential – or, rather, the person they are aiming to be – to be a bigger part of themselves than it is in other people (Williams & Gilovich, 2008; Williams et al., 2012). People predicting who will give to charity consider the prediction to be one about a person’s character and attitudes – that is, until they confront a chance to give themselves, in which case they switch to emphasising situational factors in their accounts of giving (Balcetis & Dunning, 2008).


Misunderstanding situations

Ultimately, this misguided exceptionalism and overemphasis on individual agency mean that people fail to apply an accurate understanding of human nature to themselves, one that would make their predictions more accurate. People, for example, are surprisingly good at understanding how situational circumstances influence people’s behaviour. In one study, we described to students a ‘bystander apathy’ experiment in which a research assistant accidentally spilled a box of jigsaw puzzle pieces. These students were then asked how likely they would be to help pick the pieces up, as well as what percentage of other students would help. Of key importance, participants were shown two variations of this basic situation – one in which they were alone versus one in which they were sitting in a group of three people.

Those familiar with social psychology will recognise that people are more likely to help when they are alone rather than in a group (Latané & Darley, 1970). In the group, people are seized by the inertia of not knowing immediately whether to help, and so take their cue to do nothing from the fact that everyone else, lost in the same indecision, ends up doing nothing too. But would our participants show insight into this principle? Not according to their self-predictions. Participants stated that they would be roughly 90 per cent likely to help either alone or in the group. They did, though, concede that other people would be influenced, and that introducing the group would lower the rate of helping among other people by 22 percentage points (from 72 per cent to 50 per cent). Of key importance, when we ran the study for real, we found that placing people in a group had a 27 percentage-point impact (from 50 per cent down to 23 per cent) on actual behaviour. Again, peer predictions largely anticipated this impact. Self-predictions did not (Balcetis & Dunning, 2013).

This belief that self-behaviour ‘floats’ above the impact of situational circumstances and constraints can lead people to forgo decisions that would actually help them. Consider the task of staying within a monthly budget. In one study, participants were offered a service that would provide them with savings tips plus a constant monitoring of their finances. For themselves, participants felt the service would be superfluous. It would have almost zero impact on their ability to achieve their budget goals. What mattered for them instead was the strength of their intentions to save money (Koehler et al., 2011).

But, in reality, a random sample of participants assigned to the service was roughly 11 per cent more likely to reach their budget goals. And, a group of participants asked to judge the impact of the service on other people estimated that the service would matter: that others would be 17 per cent more likely to reach their goals. Again, predictions about others better reflected reality than predictions about the self, in that people could recognise the impact of an important situational aid on others, but felt they themselves were immune to those influences (Koehler et al., 2011).

Cultural influences

This overemphasis on the self’s agency suggests possible cultural differences in the holier-than-thou effect. And, indeed, such cultural differences arise. It is the individualist cultures of Western Europe and North America that emphasise autonomy, agency and the imposition of will onto the environment (Fiske et al., 1998; Markus & Kitayama, 1991). Far Eastern cultures, such as Japan, emphasise instead interdependence, social roles and group harmony – that is, social constraints on the self. Might those cultures, thus, be relatively immune to the ‘holier’ phenomenon?

Across several studies, we have found that people from collectivist cultures display much less self-error than those from individualist ones. For example, young children attending a summer school on Mallorca were asked how many candies they would donate to other children if asked, as well as how many candies other children on average would donate.

A week later, the children were actually asked to donate. Children from more individualist countries (e.g. Britain) donated many fewer candies than they had predicted, but those from more collectivist countries (e.g. Spain) donated on average just as many as they had predicted. Both groups were accurate in their predictions about their peers (Balcetis et al., 2008).



Does the self have any advantage?

Extant psychological research, however, does suggest one area where this general story about self- and social insight will reverse. People may be wiser when it comes to predicting the public and observable actions of others rather than of the self, but they do appear to have privileged insight into aspects of the self that are not available for other people to view. People know that below the surface of their public appearance is a private individual who feels doubt, anxiety, inhibition and ambivalence that he or she may not let wholly come to the surface (Spain et al., 2000; Vazire, 2010; Vazire & Carlson, 2010, 2011). Of course, this individual does not see this roiling interior life in others.

As a consequence, people may lack awareness that what’s inside themselves is similarly churning and stirring within others. Thus, for example, people often consider themselves more shy, self-critical, and indecisive than other people (Miller & McFarland, 1987). College students harbour reservations about excessive drinking, but not recognising that others also feel this same reluctance, they go along with the crowd to excess on a Saturday night (Prentice & Miller, 1993). In a similar vein, college students harbour much more discomfort about casual sex than they believe their peers do, with each sex overestimating the comfort level of the other sex when it comes to ‘hooking up’ (Lambert et al., 2003).

Concluding remarks

Thus, current psychological research suggests that people may be wise, at least when it comes to understanding and anticipating other people, but they stand in the way of letting this wisdom lead to their own enlightenment. However, if research reveals this problem, it also suggests a potential solution to it. What we presume about other people’s behaviour and futures is likely a valuable indicator of what awaits us in the same situation – and may be a much better indicator of our future than any scenario we are spinning directly about ourselves. When predictions matter, we should not spend a great deal of time predicting what we think we will do. Instead, we should ask what other people are likely to do. Or, we should hand the prediction of our own future over to another person who knows a little about us.

Whatever we do, we should note that perhaps we are, indeed, uniquely special individuals, but that it is too easy to overemphasise that fact. In anticipating the future, we should be mindful of the continuity that lies between our self-nature and the nature of others. It is in recognising this continuity that we realise the path that leads to our wisdom may be a pretty good path to our enlightenment, too. At the very least, that thought does remind one of another Chinese proverb that has survived the centuries, perhaps best indicating its worth – that to know what lies for us along the road ahead, we should be sure to ask those coming back.

References
Balcetis, E. & Dunning, D. (2008). A mile in moccasins: How situational experience reduces dispositionism in social judgment. Personality and Social Psychology Bulletin, 34, 102–114.
Balcetis, E. & Dunning, D. (2013). Considering the situation: Why people are better social psychologists than self-psychologists. Self and Identity, 12, 1–15.
Balcetis, E., Dunning, D. & Miller, R.L. (2008). Do collectivists ‘know themselves’ better than individualists? Journal of Personality and Social Psychology, 95, 1252–1267.
Bass, B.M. & Yammarino, F.J. (1991). Congruence of self and others’ leadership ratings of Naval officers for understanding successful performance. Applied Psychology, 40, 437–454.
Buehler, R., Griffin, D. & Ross, M. (1994). Exploring the ‘planning fallacy’. Journal of Personality and Social Psychology, 67, 366–381.
Borkenau, P. & Liebler, A. (1993). Convergence of stranger ratings of personality and intelligence with self-ratings, partner ratings, and measured intelligence. Journal of Personality and Social Psychology, 65, 546–553.
Critcher, C.R. & Dunning, D. (2013). Predicting persons’ goodness versus a person’s goodness: Forecasts diverge for populations versus individuals. Journal of Personality and Social Psychology, 104, 28–44.
Dunning, D. (2005). Self-insight: Roadblocks and detours on the path to knowing thyself. New York: Psychology Press.
Dunning, D., Heath, C. & Suls, J. (2004). Flawed self-assessment: Implications for health, education, and the workplace. Psychological Science in the Public Interest, 5, 71–106.
Epley, N. & Dunning, D. (2000). Feeling ‘holier than thou’: Are self-serving assessments produced by errors in self or social prediction? Journal of Personality and Social Psychology, 79, 861–875.
Epley, N. & Dunning, D. (2006). The mixed blessings of self-knowledge in behavioral prediction. Personality and Social Psychology Bulletin, 32, 641–655.
Fiske, A., Kitayama, S., Markus, H.R. & Nisbett, R.E. (1998). The cultural matrix of social psychology. In D. Gilbert, S. Fiske & G. Lindzey (Eds.) The handbook of social psychology (4th edn, pp.915–981). San Francisco: McGraw-Hill.
Helzer, E.G. & Dunning, D. (2012). Why and when peer prediction is superior to self-prediction. Journal of Personality and Social Psychology, 103, 38–53.
Koehler, D.J. & Poon, C.S.K. (2006). Self-predictions overweight the strength of current intentions. Journal of Experimental Social Psychology, 42, 517–524.
Koehler, D.J., White, R.J. & John, L.K. (2011). Good intentions, optimistic self-predictions, and missed opportunities. Social Psychological and Personality Science, 2, 90–96.
Kruger, J. & Gilovich, T. (2004). Actions and intentions in self-assessments: The road to self-enhancement is paved with good intentions. Personality and Social Psychology Bulletin, 30, 328–339.
Lambert, T.A., Kahn, A.S. & Apple, K.J. (2003). Pluralistic ignorance and hooking up. Journal of Sex Research, 40, 129–133.
Latané, B. & Darley, J. (1970). The unresponsive bystander: Why doesn't he help? New York: Appleton-Century-Crofts.
MacDonald, T.K. & Ross, M. (1999). Assessing the accuracy of predictions about dating relationships. Personality and Social Psychology Bulletin, 25, 1417–1429.
Markus, H.R. & Kitayama, S. (1991). Culture and the self. Psychological Review, 98, 224–253.
Miller, D.T. & McFarland, C. (1987). Pluralistic ignorance: When similarity is interpreted as dissimilarity. Journal of Personality and Social Psychology, 53, 298–305.
Peetz, J. & Buehler, R. (2009). Is there a budget fallacy? Personality and Social Psychology Bulletin, 35, 1579–1591.
Prentice, D.A. & Miller, D.T. (1993). Pluralistic ignorance and alcohol use on campus: Some consequences of misperceiving the social norm. Journal of Personality and Social Psychology, 64, 243–256.
Pronin, E. & Kugler, M.B. (2010). People believe they have more free will than others. Proceedings of the National Academy of Sciences, 107, 22469–22474.
Risucci, D.A., Tortolano, A.J. & Ward, R.J. (1989). Ratings of surgical residents by self, supervisors and peers. Surgery, Gynecology & Obstetrics, 169, 519–526.
Spain, J.S., Eaton, L.G. & Funder, D.C. (2000). Perspectives on personality. Journal of Personality, 68, 837–867.
Vazire, S. (2010). Who knows what about a person? Journal of Personality and Social Psychology, 98, 281–300.
Vazire, S. & Carlson, E.N. (2010). Self-knowledge of personality. Social and Personality Psychology Compass, 4, 605–620.
Vazire, S. & Carlson, E.N. (2011). Others sometimes know us better than we know ourselves. Current Directions in Psychological Science, 20, 104–108.
Vazire, S., & Mehl, M.R. (2008). Knowing me, knowing you. Journal of Personality and Social Psychology, 95, 1202–1216.
Williams, E.F. & Gilovich, T. (2008). Conceptions of the self and others across time. Personality and Social Psychology Bulletin, 34, 1037–1046.

Williams, E., Gilovich, T. & Dunning, D. (2012). Being all that you can be. Personality and Social Psychology Bulletin, 38, 143–154.

The Humanities in an Absolutist World



Roscoe Pound (1870-1964) Law School, Harvard University
The Classical Journal, Vol. 39, No. 1 (Oct., 1943), pp. 1-14

Man’s significant achievement is civilization, the continual raising of human powers to a higher unfolding, a continually increasing mastery of, or control over, external or physical nature and over internal or human nature. Civilization is an accumulative activity. Both its aspects, control of physical nature and control of human nature, are added to from generation to generation and the whole is an accumulation of ages. In the present, the progress of control  over  physical  nature,  of  harnessing  external  nature  to man's use, has been so rapid and has been carried so far beyond what had been taken to be the limit of human powers, that it has all but blinded us to the other side, the control of internal nature. But in truth the two are interdependent. It is the control over internal or human nature which has made possible the division of labor by which the harnessing of physical nature has been made possible. If men were subject to constant aggression from their fellows, if they could not safely assume that they could go about their daily tasks free from attack, there could not be the experiment and research and investigation which have enabled man to inherit the earth and to maintain and increase that inheritance. The accumulation from generation to generation would be dissipated if it were not for the check upon man's destructive instincts which is achieved through accumulated control of internal nature. But the control over external nature relieves the pressure of the environment in which man lives and enables the accumulated control over internal nature to persist and increase.

In the history of civilization the outstanding period, from the standpoint of control over internal nature, is classical antiquity, the Greek-Hellenistic-Roman civilization, which happily kept no small degree of continuity during the Middle Ages, and was revived at the Renaissance. This period is as marked for one side of civilization as the nineteenth century and the present are likely to be held in the future for the other side. Indeed, the civilization of ancient Greece, carried on in the Hellenistic era and established for the world by the organizing and administrative genius of the Romans, is a decisive element in the civilization of today.

Art, letters, oratory, philosophy, history writing, are an inheritance from the Greeks. Law, administration, politics, are an inheritance from the Romans. The Greeks even worked out the field tactics to which the military science of today has reverted. Greek and Latin are a preponderant element in the languages which derive from Western Europe. Thus they enter decisively into our thinking, writing, and speaking, and thus into our doing. The last of the Caesars fell a generation ago. But the principles of adjusting human relations and ordering human conduct worked out in theory by Greek philosophers and made into law by Roman jurists of the days of the first Caesars govern in the tribunals of today. Latin was the universal language from the establishment of Roman hegemony and of Roman law as the law of the world for at least nineteen hundred years. All modern literature in all languages is full of allusions to the classics; of allusions to persons and events and stories out of the poets and dramatists and historians of Greece and Rome. One who knows nothing of the great authors of antiquity is cut off from the great authors of the modern world as well. To take but one example, a generation which grows up without anyone knowing Horace, has missed something irreplaceable. To cease to teach the classics is to deprive the oncoming generation of opportunity of fruitful contact with a decisive element in the civilization in which it is to live. A generation cut off from its inherited past is no master of its present. What men do is conditioned by the materials with which they must work in doing it. On one side of our civilization these are for the most significant part materials bequeathed to us by the Greeks and the Romans.

But we are told that we are entering upon a new era. The past is to be canceled. We are to begin with a clean slate. Our accumulated control over external nature has gone so far that there remains only the task of making it available for universal human contentment. Then there will be no occasion for control over internal nature. The causes of envy and strife are to go with want and fear. Mankind will settle down to a passive enjoyment of the material goods of existence and will neither require nor desire anything more.

There are abundant signs of a significant change from the ideas and ideals and values which governed in the immediate past. It is not, however, a change to something wholly new. It is largely a reversion to something with which the student of classical antiquity is well acquainted; to modes of thought against which Socrates argued with the sophists, about which Plato and Aristotle wrote in founding a science of politics, about which Stoics debated with Epicureans, which Christianity put down, for a time at least, when it closed the skeptical and Epicurean schools of philosophy.

Whatever the confident self-styled advanced thinkers of today may be looking forward to, the immediate actual result is a cult of force. We seem to be listening again to Thrasymachus, who argued that the shepherd protects the sheep in order to shear them for wool and slaughter them for mutton, and in the same way the political ruler protects the governed in order to be able to despoil them. The sophists are coming into their own in ethics, and Machiavelli is hailed as a prophet in a realism which in law and in politics takes force to be the reality and those who wield the force of politically organized society, as the representatives of force, to be the actualities of the legal order and of the political order. A favorite phrase of the realist is "the brute facts"; a phrase used not in sadness that there should be such facts, but with a certain relish, as if brutality were the test of reality and the discovery of brute facts argued superior intelligence and discernment. In practice this makes force a test of significance. The significant things in the world are force and the satisfaction of material wants. Education must be shaped to the exigencies of these. Nothing else is to be taught or learned. Such a doctrine carried into practice, a regime to that pattern, would indeed give us a new world. But it would be new by reverting to a very old type.

Biologists tell us that what they call giantism in an organism is a sign of decadence. When the organism has developed to giant proportions, the next step is decline and the ultimate step is fall. In the same way, there are times in the history of civilization when things seem to have become too big for men to manage them. They get out of hand. The social order ceases to function efficiently. There is a gradual breakdown, followed after a time of chaos and anarchy by a gradual rebuilding of a social order, which in turn may develop a bigness beyond human powers of management and so break down. It may be significant that today the air is full of grandiose schemes for world organization.

The Hellenistic world was in such an era. The greater and richer part of the civilized world had been swallowed up in the empire of Alexander. An age of independent city-states was succeeded by one of great military empires ruled autocratically. Later, the Roman hegemony, in which, as it culminated in the Empire, every free man in the civilized world was a Roman citizen, the law of the city of Rome had become the law of the world, and all political authority was centralized in the first citizen of Rome, was another era of the same kind. It is significant that the first citizen of such a state became a military autocrat. The mark of thinking of such times is likely to be disillusionment. Epicureanism arose in the period of the successors of Alexander, and grew increasingly strong in the Hellenistic era. It throve in the corresponding period of Roman history, the Empire from Augustus to Diocletian and Constantine. It was the most firmly intrenched of the Greek schools of philosophy, although it has contributed the least to the general progress of thought. It was so well fitted to a period of bigness and incipient decay that the Epicureans were the last school to give way before the rise of Christianity. When the schools of philosophy were abolished, they were the most widespread and tenacious of the anti-Christian sects.

Today, in another era of unmanageable bigness, we come upon tenacious give-it-up philosophies once more. Epicurus was wholly indifferent to the form of political organization of society. The real point in existence was to lead a happy life. If he lived under a wise ruler, the man seeking a happy life need have no fear of being disturbed. He could pursue a serene, untroubled existence. If the ruler was a tyrant, the wise man, like Br'er Rabbit, would "jes' lie low" and so escape the tyrant's notice and live an undisturbed life of happiness. Today what Epicurus put as happiness, current social philosophies put as security. The ideal is an undisturbed enjoyment of the means of satisfying material wants. Put concretely it seems to be a vested right in a life job with an assured maximum wage, fixed short hours, allowing much time for leisure at stated periods, a prohibiting of anyone from an overactivity which might give him an advantage, and compelling all to a regimented minimum exertion that would obviate the exciting of envy, and a guaranteed pension at the age of sixty, dispensing with the need of providing one's own reserve. This is the ideal existence Epicurus pictured-the condition of a happy life, the condition of perfect mental equilibrium, neither perturbed nor perturbable. In contrast, the last century identified security with liberty. Men sought security from interference with their activities. They sought to be secure against aggression so that they might freely do their part in the division of labor in a competitive economic order. They sought to be secure against governmental action except so far as was necessary to free them from aggressions of others. Now, instead of seeking to be secure against government, men expect to be made secure by government. But they expect to be secure in a new way; not to be secure in their activities but to be secure against necessity of activity, to be secure in satisfaction of their material wants with a minimum of required individual activity.

Very likely the change reflects the exigencies of a bigger and more crowded world. Possibly it is due in part to the development of luxury, leading to disinclination to the free competitive carving out of a place for oneself which the last century took for happiness. At any rate, freedom from worry about what one can achieve, renouncing of ambition to do things, and acceptance of political events as they may happen, go together as an accepted philosophy of wise living, as they did in the social philosophy of Epicurus.

Marxian economic realism has much in common with the Epicurean social philosophy. The static ideal of a happy life is to be attained as we get rid of classes. It is assumed that when property is abolished all competition between human beings will cease. Everyone will live undisturbed, without ambition, without envy, and so freed from strife. Once the class struggle has been brought to an end, Marx looked forward to the same social ethical result as Epicurus. But there is nothing in the history of civilization or in experience of human relations in a crowded world to warrant such assumptions. We may be sure that after property is abolished men will still want and claim to use things which cannot be used by more than one or by more than one at a time. It is not likely that there will always be enough at all times of every material good of existence to enable everyone at every moment to have or do all that he can wish, so that no contentions can arise as to possession or use and enjoyment. Nor is it likely in any time which we can foresee that there will be no conflicts or overlappings of the desires and demands involved in the individual life. Such ideas, however, seem to go with bigness such as the economic unification of the world has brought about in the present century.

Along with the disillusioned or give-it-up philosophies of such a time there goes a changed attitude toward government. Instead of wanting to do things, men want to have things done for them, and they turn to government to do for them what they require for a happy life. But they have no wish to be active in government. They turn to absolute political ideas. Eras of bigness and autocracy have gone together. Today while we all do lip service to democracy there is a manifest turning to autocracy. The democracy is to be an absolute democracy. Those who wield its authority are not to be hampered by constitutions or laws or law. What they do is to be law because they do it. They are to be free to make us all happy by an absolute power to pass on the goods of existence to us by such measure of values as suits them.

Such ideas of a happy life, and of politically organized society as the means of assuring that happy life, require an omnicompetent government. They require a government with absolute power to carry out the plan of an undisturbed life of serenity, free from all envy, want, or worry, by control of all activity no less than of all material goods. The restless must be held down, the active must be taught to keep quiet in a passive happiness, those inclined to question the economic order must be taught to accept the regime of security in which their material wants are satisfied. Hence such a polity must of necessity take over education. Men are to be educated to fit into the regime of government-provided material happiness. Those things which will tend to achieve and maintain such a regime are to be taught. All else is to be given up. Either it will hinder the bringing about and making permanent of the new regime or it will tend to impair it when established. There is no place for any of it in the ideal regime.

Applied to international relations, the give-it-up philosophies must be wonderfully heartening doctrine for dictators. Applied to internal administration they are proving wonderfully heartening doctrine for bureaucrats. Can we doubt that a sense of helplessness in the Hellenistic era and again in the era of the later Roman Empire led to general acceptance of a philosophy that taught to let the government run itself or the governors run it in their own way? Can we doubt that a sense of helplessness in our time, a feeling of helplessness to make international relations conform to ideals, leads to acquiescence in theories of force; or that the difficulty, in an overcrowded world, of making adjustments of private relations according to law achieve ideal results leads to a theory of law as simply a threat of state force and hence of law as whatever officials do in applying that force?

But if we are moved at times to feel helpless and give up to power and force, those who wield the force of politically organized society have no misgivings. They have supreme confidence that the omnicompetence of the state means the omnicompetence of the officials who act in the name and by the authority of the state, and are ready, assuming themselves to be ex-officio experts, to prescribe detailed regulations for every human activity.

We recognize such conditions when we look at them as they are manifest in the older parts of the world. We have not been prepared to see them as they have been developing gradually but steadily in our own polity. As a leader in American legal education has put it, it is simply a question of what we expect government to do. If we expect it to provide for all our wants by a benevolent paternal care and maternal solicitude, we must expect to surrender to it all responsibility and invest it, and that means those persons who carry it on, with all power. Such a regime is fostered by the exigencies of war. But it was growing long before the war and independent of war conditions. The give-it-up philosophies were taught and preached before and apart from the war. They have been urged by a strong group in both English and American institutions of learning and are propagated today by teachers who advocate an unrestrained administrative power over liberty and property.

What is happening, what is to happen, to the humanities in such a time?
In this connection we must note another characteristic of the time, namely, distrust of reason. In this respect also the thought of today is akin to that of Epicurus. We are taught by the psychological realists that consciously or unconsciously men do what they wish to do and then justify what they have done by reasons conjured up by a desire to be reasonable, which nevertheless are not the real determinants of their behavior. Consequently, by not distinguishing reason from reasons, reason comes to be regarded as a mere name for specious justifying to oneself of what one desires to do and does accordingly. Reason is taken to be illusion. The reality is taken to be the wish, achieved by force or by the force of a politically organized society. This is brought out notably in the difference between the biographies of the last century and those of today. The biographies of the last century were taken up with what their subject did and how he did it. They assume that he had reasons for what he did which were consistent with his purposes and professions, and that his mistakes were due to miscalculation, unless the evidence constrains a different conclusion. The biographies of today are taken up with their subject's hidden motives; if not very creditable, so much the better as the biographer sees it. The evidence does not disclose the motives. The assumed motives interpret the evidence. If the biographer can show that George Washington's motives may be made out to have been not always very creditable, it only goes to show that his actions were after all merely phenomena and to remind us that it is unscientific to apply our subjective ideas of praise and blame to phenomena.

At any rate, we can find one powerful antidote to such teachings in the humanities, and it is perhaps for that reason that the advocates of so-called realism would suppress the teaching of them. At the beginning of the present century the German Emperor objected to the education which, he said, trained the youth to be young Greeks and Romans instead of to be modern Germans. But the results of education to be Germans ought to give us pause if we think to make Americans by an education that seeks to make Americans to a pattern of a land given up to satisfaction of material wants provided by a regime of absolute government.

But I hear people say, the aggregate of knowledge has become so vast that teaching must be confined to those things that count in the world of today. There are translations of the classics available in English and those whose interests lead them to explore the writings of antiquity can find what they seek in those translations. It is a waste of the time that must be given to the things of today to study difficult dead languages in order to find what translations have made accessible in modern languages. The time is needed for the natural and physical sciences, which teach us how to harness more of external nature to producing the material goods of human existence, and to the social sciences, which are to teach us how those goods are to be made to satisfy human desires. Here we have three fallacious propositions: (1) that education is only the acquisition of knowledge, (2) that even the best translation is or can be a substitute for the original of a classic, and (3) that the social sciences are so far advanced that we may rely upon them for objective judgments of the social order and of the problems and phenomena of ethics and economics and politics and jurisprudence. We have to learn the formulas of the social scientists as we once learned the formulated dogmas of the natural and physical sciences. Let us look at these propositions.

Knowledge as such is worth little without knowing how to use it. It is likely to be so up-to-date that it is out of date tomorrow. Discrimination, reasoned judgment, and creative thinking must work upon knowledge to make it fruitful. No one can approach a mastery of all the details of knowledge in even the narrowest field. But he can attain the wisdom that will enable him to lay hold upon those details when and where he requires them and to make something of them. Without this, the study of up-to-date subjects as merely so many tracts of knowledge is futile. Very likely the supposed facts will have ceased to be so regarded by scientists as soon as they have been learned. The wise scholar, however, knows how to find them as they stand at the moment and appraise them for his purposes, and he can often do this although he approaches a subject in which he never had a formal course.

Wisdom is not gained by the use of translations. It is not acquired when students write confidently about Aristotle without having read or being able to read a line of him. It is not developed by slovenly use of language such as follows from never having been compelled to compare the same thought expressed in two languages and brought to see how different it may appear unless the translator is sure of the words no less than of the idea. What teacher of today has not seen confused thought bred of loose writing, due to lack of the disciplined use of words which is acquired by learning the languages from which even our scientific terminology is derived? What teacher has not encountered the type of student who wants to write a thesis on poetic usage and expects to use Pope's Iliad to show him the usage of Homer? Who has not met students of church history who cannot read the New Testament in the original, students writing on medieval philosophy and essaying to criticize a great thinker who cannot read a word of Thomas Aquinas in the tongue in which he wrote, students of legal history who cannot read Magna Carta as it was written, students of history who must take the significant historical documents at second or third hand? I have too often witnessed the pathetic struggles of would-be students of our legal history to handle the monuments of our law in the Middle Ages with no adequate grasp of the language in which they were written. I shall not soon forget the graduate student who thought he could read the Code of Justinian by the light of nature and was astonished to find that conventus did not, as he supposed, mean convent but meant agreement. Nor are such things confined to students. Who of us has not had occasion to feel for the earnest teacher who missed the fundamentals of his education in school and college and now is found struggling to gain what too late he perceives he sorely needs? A great injustice had been done to all of these by leading them to think they were acquiring an adequate foundation for what they desired to do, and leaving them to discover their mistake too late.

Even now, when the majority of those who go to our colleges have had some training in Latin, the teacher has learned to expect some almost incredible atrocities due to ignorance abetted by carelessness. In my last twenty years of law teaching I have become used to being told that in a proceeding in rem the rem must be before the court. I have ceased to be shocked when a college graduate tells me that son assault demesne is Anglo-Saxon, that in pais is Latin, and that non compos mentis is French. I can even keep a straight face when a law student, a college graduate, reading in the books about the doctrine of the Good Samaritan cases, asks me who the Good "Sarmatian" was. My friends in other lines tell me of the entomologist describing a new insect who thought confluenta was the feminine of confluens, or the botanist who wished to coin a word for "downward-directed" and with no knowledge of Greek consulted a Greek dictionary and coined barithynetic (I suppose for katithynetic). I have been told of a student of dramatics who spoke of "Andromash," and we have all heard "chaos" pronounced "chouse" and "Chloe" pronounced "Shlowie" by those who held degrees in arts. Those who perpetrate such things lack much more than a knowledge of the classical languages. They have failed to learn what to do with the materials with which they must work. We may be sure that these slovenlinesses will not be the only ones of which they will be guilty. But what will there be when no one who studies history or law or entomology or botany or dramatics knows any better? It won't do to say, for example, that a law dictionary will tell the law student what he needs. One must know something even to use a dictionary. When it comes about that no one is taught in his teachable years the languages and literatures which are at the foundation of what we say and write, our terminology in every branch of learning must become chaotic, and loose writing lead to loose thinking, and a general loss of morale in scholarship, of which we see abundant symptoms already today.

We are told, however, that those things which are not indispensable must in education in a democracy give way to those which are indispensable. As to this one must make three observations. In the first place, it assumes that democracy requires a common training for all, a training in the mechanic arts and the sciences behind them, and in social sciences on the model of the physical sciences. No one is to be allowed an opportunity of development outside of this program of preparation for material production and politics. Secondly, it assumes that education is complete on leaving school, and hence that there need be no preparation for scholarly self-development of an element needed in any other than a stagnant or enslaved population. Third, it assumes that the social sciences are or can be such as the physical and natural sciences are; that ultimate truths as to economics and politics and sociology are impartible by teaching, and that knowledge of these truths is essential to a democratically organized people.

I have no quarrel with the social sciences. I am now in my forty-fourth year of teaching jurisprudence, and for forty of those years have taught it from the sociological standpoint. I have urged the importance of ethics and economics and politics and sociology in connection with law in forty years of law-school teaching. But I do not deceive myself as to those so-called sciences. So far as they are not descriptive, they are in continual flux. In the nature of things they cannot be sciences in the sense of physics or chemistry or astronomy. They have been organized as philosophies, have been worked out on the lines of geometry, have been remade to theories of history, have had their period of positivism, have turned to social psychology, and are now in an era of neo-Kantian methodology in some hands and of economic determinism or psychological realism or relativist skepticism or phenomenological intuitionism in other hands. They do not impart wisdom; they need to be approached with acquired wisdom. Nothing of what was taught as economics, political science, or sociology when I was an undergraduate is held or taught today. Since I left college, sociology has gone through four, or perhaps even five, phases. Indeed, those who have gone furthest in these sciences in the immediate past were not originally trained in them. They are not foundation subjects. They belong in the superstructure.

Notice how extremes meet in a time of reaction to absolutist political ideas. In an autocracy men are to be trained in the physical and natural sciences so as to promote material production. They are to be trained in the social sciences so as to promote passive obedience. In an absolutist democracy men are to be trained in the physical and natural sciences because those sciences have to do with the means of satisfying material wants. They are to be trained in the social sciences because those sciences have to do with politically organized society as an organization of force whereby satisfaction of material wants is to be attained. As an important personage in our government has told us, the rising generation must be taught what government can do for them. The relegation of the humanities to a back shelf, proposed by the Kaiser at the beginning of the present century, has been taken over to be urged as a program of a democracy. Such ideas go along with the rise of absolute theories of government throughout the world. An omnicompetent government is to tell us what we shall be suffered to teach, and the oncoming generation is to be suffered to learn nothing that does not belong to a regime of satisfying material wants by the force of a political organization of society. It is assumed that there is nothing in life but the satisfaction of material wants and force as a means of securing satisfaction of them.

America was colonized in a similar period of absolutist political ideas-in the era of the Tudor and Stuart monarchy in England, of the old regime of which the rule of Louis XIV was the type in France, of the monarchy set up by Charles V in Spain, of the establishment of the absolute rule of the Hapsburgs in Austria. England of the Puritan Revolution shook these ideas violently and at the Revolution of 1688 definitely cast them off for two centuries. The colonists who came to America settled in the wilderness in order to escape them. When we settled our own polity at the end of the eighteenth century, we established it as a constitutional democracy, carefully guarded against the reposing of unlimited power anywhere. Moreover, these early Americans, because they did not believe in an omnicompetent government or superman rulers, set up institutions for liberal education. Within six years after their arrival in the wilderness in the new world, the founders of Massachusetts set up a college in order that there might continue to be a learned ministry after their ministers who had come from the English universities were laid in the dust. As our country expanded in its westward extension across the continent, state after state in its organic law provided for a state university in order that liberal learning might be the opportunity of every one. It was not till our era of expansion was over and one of industrialization began that state institutions for mechanical education were more and more established. But these for a generation did not greatly disturb the humanities. The movement to displace them is a phenomenon of the era of bigness.

Outward forms of government are no panacea. We can't do better than we try to do. If we are content to lapse into a revived Epicureanism, if we are content to seek nothing more than a general condition of undisturbed passivity under the benevolent care of an omnicompetent government, we can very well leave education to the sciences which have to do with providing the material goods of existence and those which teach us how the government secures or is to secure them for us. If we are not content with being, as Horace put it, pigs of the drove of Epicurus, but seek to live active, human lives, even at some risk of envy and strife and wish for things unattainable, we must stand firm against projects which will cut our people off from the great heritage of the past and deny them the opportunity of contact with the best that men have thought and written in the history of civilization.

I cannot think that, when what is meant by the displacement of the humanities is brought home to them, the intelligent people of America will consent to bow the knee to Baal. I am confident that, as Milton put it, we shall be able to speak words of persuasion to abundance of reasonable men, once we make plain the plausible fallacy behind the idea of teaching only the indispensables, and that the physical and the social sciences are the indispensables. We can have a democracy without having a people devoted solely to production and consumption. Those who are fighting to preserve the humanities are working for a democracy that can endure. One which sinks into materialistic apathy must in the end go the way of the peoples which have succumbed to the perils of mere bigness in the past.

The new liberal imperialism


Robert Cooper 
Observer.co.uk, Sunday 7 April 2002

Senior British diplomat Robert Cooper has helped to shape British Prime Minister Tony Blair's calls for a new internationalism and a new doctrine of humanitarian intervention which would place limits on state sovereignty. This article contains the full text of Cooper's essay on "the postmodern state". Cooper's call for a new liberal imperialism and admission of the need for double standards in foreign policy have outraged the left, but the essay offers a rare and candid unofficial insight into the thinking behind British strategy on Afghanistan and Iraq.

In 1989 the political systems of three centuries came to an end in Europe: the balance-of-power and the imperial urge. That year marked not just the end of the Cold War, but also, and more significantly, the end of a state system in Europe which dated from the Thirty Years War. September 11 showed us one of the implications of the change.

To understand the present, we must first understand the past, for the past is still with us. International order used to be based either on hegemony or on balance. Hegemony came first. In the ancient world, order meant empire. Those within the empire had order, culture and civilisation. Outside it lay barbarians, chaos and disorder. The image of peace and order through a single hegemonic power centre has remained strong ever since. Empires, however, are ill-designed for promoting change. Holding the empire together - and it is the essence of empires that they are diverse - usually requires an authoritarian political style; innovation, especially in society and politics, would lead to instability. Historically, empires have generally been static.

In Europe, a middle way was found between the stasis of chaos and the stasis of empire, namely the small state. The small state succeeded in establishing sovereignty, but only within a geographically limited jurisdiction. Thus domestic order was purchased at the price of international anarchy. The competition between the small states of Europe was a source of progress, but the system was also constantly threatened by a relapse into chaos on one side and by the hegemony of a single power on the other. The solution to this was the balance-of-power, a system of counter-balancing alliances which became seen as the condition of liberty in Europe. Coalitions were successfully put together to thwart the hegemonic ambitions firstly of Spain, then of France, and finally of Germany.

But the balance-of-power system too had an inherent instability, the ever-present risk of war, and it was this that eventually caused it to collapse. German unification in 1871 created a state too powerful to be balanced by any European alliance; technological changes raised the costs of war to an unbearable level; and the development of mass society and democratic politics rendered impossible the amoral calculating mindset necessary to make the balance-of-power system function. Nevertheless, in the absence of any obvious alternative it persisted, and what emerged in 1945 was not so much a new system as the culmination of the old one. The old multilateral balance-of-power in Europe became a bilateral balance of terror worldwide, a final simplification of the balance of power. But it was not built to last. The balance of power never suited the more universalistic, moralist spirit of the late twentieth century.

The second half of the twentieth century has seen not just the end of the balance of power but also the waning of the imperial urge: in some degree the two go together. A world that started the century divided among European empires finishes it with all or almost all of them gone: the Ottoman, German, Austrian, French, British and finally Soviet Empires are now no more than a memory. This leaves us with two new types of state: first there are now states - often former colonies - where in some sense the state has almost ceased to exist: a 'premodern' zone where the state has failed and a Hobbesian war of all against all is underway (countries such as Somalia and, until recently, Afghanistan). Second, there are the post-imperial, postmodern states who no longer think of security primarily in terms of conquest. And thirdly, of course, there remain the traditional "modern" states who behave as states always have, following Machiavellian principles and raison d'état (one thinks of countries such as India, Pakistan and China).

The postmodern system in which we Europeans live does not rely on balance; nor does it emphasise sovereignty or the separation of domestic and foreign affairs. The European Union has become a highly developed system for mutual interference in each other's domestic affairs, right down to beer and sausages. The CFE Treaty, under which parties to the treaty have to notify the location of their heavy weapons and allow inspections, subjects areas close to the core of sovereignty to international constraints. It is important to realise what an extraordinary revolution this is. It mirrors the paradox of the nuclear age, that in order to defend yourself, you had to be prepared to destroy yourself. The shared interest of European countries in avoiding a nuclear catastrophe has proved enough to overcome the normal strategic logic of distrust and concealment. Mutual vulnerability has become mutual transparency.

The main characteristics of the postmodern world are as follows:
· The breaking down of the distinction between domestic and foreign affairs.
· Mutual interference in (traditional) domestic affairs and mutual surveillance.
· The rejection of force for resolving disputes and the consequent codification of self-enforced rules of behaviour.
· The growing irrelevance of borders: this has come about both through the changing role of the state, but also through missiles, motor cars and satellites.
· Security is based on transparency, mutual openness, interdependence and mutual vulnerability.

The conception of an International Criminal Court is a striking example of the postmodern breakdown of the distinction between domestic and foreign affairs. In the postmodern world, raison d'état and the amorality of Machiavelli's theories of statecraft, which defined international relations in the modern era, have been replaced by a moral consciousness that applies to international relations as well as to domestic affairs: hence the renewed interest in what constitutes a just war.

While such a system does deal with the problems that made the balance-of-power unworkable, it does not entail the demise of the nation state. While economy, law-making and defence may be increasingly embedded in international frameworks, and the borders of territory may be less important, identity and democratic institutions remain primarily national. Thus traditional states will remain the fundamental unit of international relations for the foreseeable future, even though some of them may have ceased to behave in traditional ways.

What is the origin of this basic change in the state system? The fundamental point is that "the world's grown honest". A large number of the most powerful states no longer want to fight or conquer. It is this that gives rise to both the pre-modern and postmodern worlds. Imperialism in the traditional sense is dead, at least among the Western powers.

If this is true, it follows that we should not think of the EU or even NATO as the root cause of the half century of peace we have enjoyed in Western Europe. The basic fact is that Western European countries no longer want to fight each other. NATO and the EU have, nevertheless, played an important role in reinforcing and sustaining this position. NATO's most valuable contribution has been the openness it has created. NATO was, and is, a massive intra-western confidence-building measure. It was NATO and the EU that provided the framework within which Germany could be reunited without posing a threat to the rest of Europe as its original unification had in 1871. Both give rise to thousands of meetings of ministers and officials, so that all those concerned with decisions involving war and peace know each other well. Compared with the past, this represents a quality and stability of political relations never known before.

The EU is the most developed example of a postmodern system. It represents security through transparency, and transparency through interdependence. The EU is more a transnational than a supra-national system, a voluntary association of states rather than the subordination of states to a central power. The dream of a European state is one left from a previous age. It rests on the assumption that nation states are fundamentally dangerous and that the only way to tame the anarchy of nations is to impose hegemony on them. But if the nation-state is a problem then the super-state is certainly not a solution.

European states are not the only members of the postmodern world. Outside Europe, Canada is certainly a postmodern state; Japan is by inclination a postmodern state, but its location prevents it developing more fully in this direction. The USA is the more doubtful case since it is not clear that the US government or Congress accepts either the necessity or desirability of interdependence, or its corollaries of openness, mutual surveillance and mutual interference, to the same extent as most European governments now do. Elsewhere, what in Europe has become a reality is in many other parts of the world an aspiration. ASEAN, NAFTA, MERCOSUR and even OAU suggest at least the desire for a postmodern environment, and though this wish is unlikely to be realised quickly, imitation is undoubtedly easier than invention.

Within the postmodern world, there are no security threats in the traditional sense; that is to say, its members do not consider invading each other. Whereas in the modern world, following Clausewitz' dictum, war is an instrument of policy, in the postmodern world it is a sign of policy failure. But while the members of the postmodern world may not represent a danger to one another, both the modern and pre-modern zones pose threats.

The threat from the modern world is the most familiar. Here, the classical state system, from which the postmodern world has only recently emerged, remains intact, and continues to operate by the principles of empire and the supremacy of national interest. If there is to be stability it will come from a balance among the aggressive forces. It is notable how few are the areas of the world where such a balance exists. And how sharp the risk is that in some areas there may soon be a nuclear element in the equation.

The challenge to the postmodern world is to get used to the idea of double standards. Among ourselves, we operate on the basis of laws and open cooperative security. But when dealing with more old-fashioned kinds of states outside the postmodern continent of Europe, we need to revert to the rougher methods of an earlier era - force, pre-emptive attack, deception, whatever is necessary to deal with those who still live in the nineteenth century world of every state for itself. Among ourselves, we keep the law but when we are operating in the jungle, we must also use the laws of the jungle. In the prolonged period of peace in Europe, there has been a temptation to neglect our defences, both physical and psychological. This represents one of the great dangers of the postmodern state.

The challenge posed by the pre-modern world is a new one. The pre-modern world is a world of failed states. Here the state no longer fulfils Weber's criterion of having the monopoly on the legitimate use of force. Either it has lost the legitimacy or it has lost the monopoly of the use of force; often the two go together. Examples of total collapse are relatively rare, but the number of countries at risk grows all the time. Some areas of the former Soviet Union are candidates, including Chechnya. All of the world's major drug-producing areas are part of the pre-modern world. Until recently there was no real sovereign authority in Afghanistan; nor is there in upcountry Burma or in some parts of South America, where drug barons threaten the state's monopoly on force. All over Africa countries are at risk. No area of the world is without its dangerous cases. In such areas chaos is the norm and war is a way of life. In so far as there is a government it operates in a way similar to an organised crime syndicate.

The premodern state may be too weak even to secure its home territory, let alone pose a threat internationally, but it can provide a base for non-state actors who may represent a danger to the postmodern world. If non-state actors, notably drug, crime or terrorist syndicates, take to using premodern bases for attacks on the more orderly parts of the world, then the organised states may eventually have to respond. If they become too dangerous for established states to tolerate, it is possible to imagine a defensive imperialism. It is not going too far to view the West's response to Afghanistan in this light.

How should we deal with the pre-modern chaos? To become involved in a zone of chaos is risky; if the intervention is prolonged it may become unsustainable in public opinion; if the intervention is unsuccessful it may be damaging to the government that ordered it. But the risks of letting countries rot, as the West did Afghanistan, may be even greater.



What form should intervention take? The most logical way to deal with chaos, and the one most employed in the past is colonisation. But colonisation is unacceptable to postmodern states (and, as it happens, to some modern states too). It is precisely because of the death of imperialism that we are seeing the emergence of the pre-modern world. Empire and imperialism are words that have become a form of abuse in the postmodern world. Today, there are no colonial powers willing to take on the job, though the opportunities, perhaps even the need for colonisation is as great as it ever was in the nineteenth century. Those left out of the global economy risk falling into a vicious circle. Weak government means disorder and that means falling investment. In the 1950s, South Korea had a lower GNP per head than Zambia: the one has achieved membership of the global economy, the other has not.

All the conditions for imperialism are there, but both the supply and demand for imperialism have dried up. And yet the weak still need the strong and the strong still need an orderly world. A world in which the efficient and well governed export stability and liberty, and which is open for investment and growth - all of this seems eminently desirable.

What is needed then is a new kind of imperialism, one acceptable to a world of human rights and cosmopolitan values. We can already discern its outline: an imperialism which, like all imperialism, aims to bring order and organisation but which rests today on the voluntary principle.

Postmodern imperialism takes two forms. First there is the voluntary imperialism of the global economy. This is usually operated by an international consortium through International Financial Institutions such as the IMF and the World Bank - it is characteristic of the new imperialism that it is multilateral. These institutions provide help to states wishing to find their way back into the global economy and into the virtuous circle of investment and prosperity. In return they make demands which, they hope, address the political and economic failures that have contributed to the original need for assistance. Aid theology today increasingly emphasises governance. If states wish to benefit, they must open themselves up to the interference of international organisations and foreign states (just as, for different reasons, the postmodern world has also opened itself up).

The second form of postmodern imperialism might be called the imperialism of neighbours. Instability in your neighbourhood poses threats which no state can ignore. Misgovernment, ethnic violence and crime in the Balkans pose a threat to Europe. The response has been to create something like a voluntary UN protectorate in Bosnia and Kosovo. It is no surprise that in both cases the High Representative is European. Europe provides most of the aid that keeps Bosnia and Kosovo running and most of the soldiers (though the US presence is an indispensable stabilising factor). In a further unprecedented move, the EU has offered unilateral free-market access to all the countries of the former Yugoslavia for all products including most agricultural produce. It is not just soldiers that come from the international community; it is police, judges, prison officers, central bankers and others. Elections are organised and monitored by the Organisation for Security and Cooperation in Europe (OSCE). Local police are financed and trained by the UN. As auxiliaries to this effort - in many areas indispensable to it - are over a hundred NGOs.

One additional point needs to be made. It is dangerous if a neighbouring state is taken over in some way by organised or disorganised crime - which is what state collapse usually amounts to. But Usama bin Laden has now demonstrated, for those who had not already realised, that today all the world is, potentially at least, our neighbour.

The Balkans are a special case. Elsewhere in Central and Eastern Europe the EU is engaged in a programme which will eventually lead to massive enlargement. In the past empires have imposed their laws and systems of government; in this case no one is imposing anything. Instead, a voluntary movement of self-imposition is taking place. While you are a candidate for EU membership you have to accept what is given - a whole mass of laws and regulations - as subject countries once did. But the prize is that once you are inside you will have a voice in the commonwealth. If this process is a kind of voluntary imperialism, the end state might be described as a cooperative empire. 'Commonwealth' might indeed not be a bad name.

The postmodern EU offers a vision of cooperative empire, a common liberty and a common security without the ethnic domination and centralised absolutism to which past empires have been subject, but also without the ethnic exclusiveness that is the hallmark of the nation state - inappropriate in an era without borders and unworkable in regions such as the Balkans. A cooperative empire might be the domestic political framework that best matches the altered substance of the postmodern state: a framework in which each has a share in the government, in which no single country dominates and in which the governing principles are not ethnic but legal. The lightest of touches will be required from the centre; the 'imperial bureaucracy' must be under control, accountable, and the servant, not the master, of the commonwealth. Such an institution must be as dedicated to liberty and democracy as its constituent parts. Like Rome, this commonwealth would provide its citizens with some of its laws, some coins and the occasional road.


That perhaps is the vision. Can it be realised? Only time will tell. The question is how much time there may be. In the modern world the secret race to acquire nuclear weapons goes on. In the premodern world the interests of organised crime - including international terrorism - grow greater and faster than the state. There may not be much time left.


Teleological Explanation

James Bogen 
The Oxford Companion to Philosophy (2nd ed.)

From the Greek word for goal, task, completion, or perfection. Teleological explanations attempt to account for things and features by appeal to their contribution to optimal states, or the normal functioning, or the attainment of goals, of wholes or systems they belong to. Socrates' story (in Plato's Phaedo) of how he wanted to understand things in terms of what is best is an early discussion of teleology. Another is Aristotle's discussion of ‘final cause’ explanations in terms of that for the sake of which something is, acts, or is acted upon. Such explanations are parodied in Voltaire's Candide.
                                                                                                                            
There are many cases in which an item's contribution to a desirable result does not explain its occurrence. For example, what spring rain does for crops does not explain why it rains in the spring. But suppose we discovered that some object's features were designed and maintained by an intelligent creator to enable it to accomplish some purpose. Then an understanding of a feature's contribution to that purpose could help us explain its presence without mistakenly assuming that everything is as it is because of the effects it causes. There are many things (e.g. well-designed clocks in good working order) known to have been produced by intelligent manufacturers for well-understood purposes, whose features can, therefore, be explained in this way. But if all teleological explanation presupposes intelligent design, only creationists could accept teleological explanations of natural things, and only conspiracy theorists could accept teleological explanations of economic and social phenomena.

Teleological explanations which do not presuppose that what is to be explained is the work of an intelligent agent are to be found in biology, economics, and elsewhere. Their justification typically involves two components: an analysis of the function of the item to be explained and an aetiological account.

Functional analysis seeks to determine what contribution the item to be explained makes to some main activity, to the proper functioning, or to the well-being or preservation, of the organism, object, or system it belongs to. For example, given what is known about the contribution of normal blood circulation to the main activities and the well-being of animals with hearts, the structure and behaviour of the heart lead physiologists to identify its function with its contribution to circulation. Given the function of part of an organism, the function of a subpart (e.g. some nerve-ending in the heart) can be identified with its contribution—if any—to the function of the part (e.g. stimulating heart contractions). Important empirical problems in biology and the social sciences and equally important conceptual problems in the philosophy of science arise from questions about the evaluation of ascriptions of purposes and functions.


Functional analysis cannot explain a feature's presence without an aetiological account which explains how the feature came to be where we find it. In natural-selection explanations, aetiological accounts typically appeal to (a) genetic transmission mechanisms by which features are passed from one generation to the next and (b) selection mechanisms (e.g. environmental pressures) because of which organisms with the feature to be explained have a better chance to reproduce than organisms which lack it. The justification of teleological explanations in sociobiology, anthropology, economics, and elsewhere typically assumes the possibility of finding accounts of transmission and selection mechanisms roughly analogous to (a) and (b).

Bibliography

- A. Ariew, R. Cummins, and M. Perlman (eds.), Functions (Oxford, 2002).
- Morton O. Beckner, Biological Ways of Thought (Berkeley, Calif., 1968), chs. 6–8.
- Larry Wright, ‘Functions’; Christopher Boorse, ‘Wright on Functions’; Robert Cummins, ‘Functional Analysis’ (along with further references to standard literature), in Elliott Sober (ed.), Conceptual Issues in Evolutionary Biology (Cambridge, Mass., 1984).

What Is a Brand?

Slavoj Zizek. Originally published in PLAYBOY, January 2014, page 121


Marketing Redefines Our Lives in Strange New Ways


Here is an old Polish anti-communist joke: "Socialism is the synthesis of the highest achievements of all previous historical epochs. From tribal society, it took barbarism. From antiquity, it took slavery. From feudalism, it took relations of domination. From capitalism, it took exploitation. And from socialism, it took the name."

Is it not similar with brand names? Imagine a totally outsourced company—a company like, say, Nike that outsources its material production to Asian or Central American contractors, the distribution of its products to retailers, its financial dealings to a consultant, its marketing strategy and publicity to an ad agency, the design of its products to a designer. And on top of that, it borrows money from a bank to finance its activity. Nike would be nothing "in itself"—nothing other than the pure brand mark "Nike," an empty sign that connotes experiences pertaining to a certain lifestyle, something like "the Nike touch." What unites a multitude of properties into a single object is ultimately its brand name—the brand name indicates the mysterious je ne sais quoi that makes Nike sneakers (or Starbucks coffee) into something special.

A couple of decades ago two new labels established themselves in the fruit juice (and also ice cream) market: "forest fruit" and "multivitamin." Both are associated with clearly identified flavors, but the connection between the label and what it designates is contingent. Any other combination of forest fruits would produce a different flavor, and it would be possible to generate the same flavor artificially (with the same, of course, being true for multivitamin juice). One can imagine a child who, after getting authentic homemade "forest fruit" juice, complains to his mother, "That's not what I want! I want true forest fruit juice!" Such examples distinguish the gap between what a word really means (in our case, the flavor recognized as multivitamin) and what would have been its meaning if it were to function literally (any juice that has a lot of vitamins). The autonomous "symbolic efficiency" is so strong it can occasionally generate effects that are almost uncannily mysterious.

Can we get rid of this excessive dimension and use only names that directly designate objects and processes? In 1986, Austrian writer Peter Handke wrote Repetition, a novel describing Slovenia in the drab 1960s. Handke compares an Austrian supermarket, with many brands of milk and yogurt, with a modest Slovene grocery store that has only one kind of milk, with no brand name and just the simple inscription milk. But the moment Handke mentions this brand-less packaging, its innocence is lost. Today such packaging doesn't just designate milk; it brings along a complex nostalgia for the old times when life was poor but (allegedly) more authentic, less alienated. The absence of a logo thus functions as a brand name for a lost way of life. In a living language, words never directly designate reality; they signal how we relate to that reality.

Another effort to get rid of brand names is grounded not in poverty but in extreme consumerist awareness. In August 2012 the media reported that tobacco companies in Australia would no longer be allowed to display distinctive colors, brand designs or logos on cigarette packs. In order to make smoking as unglamorous as possible, the packs would have to come in a uniformly drab shade of olive and feature graphic health warnings and images of cancer-riddled mouths, blinded eyeballs and sickly children. (A similar measure is under consideration in the European Union parliament.) This is a kind of self-cancellation of the commodity form. With no logo, no "commodity aesthetics," we are not seduced into buying the product. The package openly and graphically draws attention to the product's dangerous and harmful qualities. It provides reasons against buying it.

The anti-commodity presentation of a commodity is not a novelty. We find cultural products such as paintings and music worth buying only when we can maintain that they aren't commodities. Here the commodity-noncommodity antagonism functions in a way opposite to how it functions with logo-less cigarettes. The superego injunction is "You should be ready to pay an exorbitant price for this commodity precisely because it is much more than a mere commodity." In the case of logo-less cigarettes, we get the raw-use value deprived of its logo form. (In a similar way, we can buy logo-less sugar, coffee, etc. in discount stores.) In the case of a painting, the logo itself sublates use value.

But do such logo-less products really remove us from commodity fetishism? Perhaps they simply provide another example of the fetishist split signaled by the well-known phrase "Je sais très bien, mais quand même…." ("I know very well, but nevertheless….") A decade or so ago there was a German ad for Marlboros. The standard cowboy figure points with his finger toward the obligatory note that reads, "Smoking is dangerous for your health." But three words were added: Jetzt erst recht, which can be vaguely translated as "Now things are getting serious." The implication is clear: Now that you know how dangerous it is to smoke, you have a chance to prove you have the courage to continue smoking. In other words, the attitude solicited in the subject is "I know very well the dangers of smoking, but I am not a coward. I am a true man, and as such, I'm ready to take the risk and remain faithful to my smoking commitment." It is only in this way that smoking effectively becomes a form of consumerism: I am ready to consume cigarettes "beyond the pleasure principle," beyond petty utilitarian considerations about health.

This dimension of lethal excessive enjoyment is at work in all publicity and commodity appeals. All utilitarian considerations (this food is healthy, it was organically grown, it was produced and paid for under fair-trade conditions, etc.) are just a deceptive surface under which lies a deeper superego injunction: "Enjoy! Enjoy to the end, irrespective of consequences." Will a smoker, when he buys the "negatively" packaged Australian cigarettes, hear beneath the negative message the more present voice of the superego? This voice will answer his question: "If all these dangers of smoking are true—and I accept they are—why am I then still buying the package?"

To get an answer to this question, let us turn to Coke as the ultimate capitalist merchandise. It is no surprise that Coke was originally introduced as a medicine. Its taste doesn't seem to provide any particular satisfaction; it is not directly pleasing or endearing. But in transcending its immediate use value (unlike water and wine, which do quench our thirst or produce other desired effects), Coke embodies the surplus of enjoyment over standard satisfactions. It represents the mysterious factor all of us are after in our compulsive consumption of merchandise.

Since Coke doesn't satisfy any concrete need, do we drink it as a supplement after another drink has satisfied our substantial need? Or does Coke's superfluous character make our thirst for it more insatiable? Coke is paradoxical: The more you drink it, the thirstier you get, which in turn leads to a greater need to drink more of it. With Coke's strange bittersweet taste, our thirst is never effectively quenched. In the old publicity motto "Coke is it" we should discern the entire ambiguity: Coke is never effectively it. Every satisfaction opens up a desire for more. Coke is a commodity whose use value embodies an ineffable spiritual surplus. It's a commodity with material properties that are already those of a commodity.

This example makes palpable the inherent link between the Marxist concept of surplus value, the Lacanian concept of surplus enjoyment (which Lacan elaborated with direct reference to Marxian surplus value) and the paradox of the superego perceived by Freud: The more you drink Coke, the thirstier you are. The more profit you have, the more you want. The more you obey the superego, the guiltier you become. These paradoxes are the opposite of the paradox of love, which is, in Juliet's immortal words to Romeo, "The more I give, the more I have."

The predominance of brand names isn't new. It is a constant feature of marketing. What has been going on in the past decade is a shift in the accent of marketing. It's a new stage of commodification that Jeremy Rifkin has designated "cultural capitalism." We buy a product—say, an organic apple—because it represents a particular lifestyle. An ecological protest against the exploitation of natural resources is already caught in the commodification of experience. Although ecology is perceived as a protest against the virtualization of daily life and an argument for a return to the direct experience of material reality, ecology is simply branded as a new lifestyle. When we purchase organic food we are buying a cultural experience, one of a "healthy ecological lifestyle." The same goes for every return to "reality": In an ad widely broadcast on U.S. television a decade or so ago, a group of ordinary people was shown engaged in a barbecue, with country music and dancing, and the accompanying message: "Beef. Real food for real people." But the beef offered as a symbol of a certain lifestyle (that of "real" Americans) is much more chemically and genetically manipulated than the "organic" food consumed by "artificial" yuppies.

This is what design is truly about: Designers articulate the meaning above and beyond a product's function. When they try to design a purely functional product, the product displays functionality as its meaning, often at the expense of its real functionality. Prehistoric handaxes, for example, were made by males as sexual displays of power. The excessive and costly perfection of their form served no direct use.

Our experiences have become commodified. What we buy on the market is less a product we want to own and more a life experience—an experience of sex, eating, communicating, cultural consumption or participating in a lifestyle. Material objects serve as props for these experiences and are offered for free to seduce us into buying the true "experiential commodity," such as the free cell phones we get when we sign a one-year contract. To quote the succinct formula of Mark Slouka, "As more of the hours of our days are spent in synthetic environments, life itself is turned into a commodity. Someone makes it for us; we buy it from them. We become the consumers of our own lives." We ultimately buy (the time of) our own life. Michel Foucault's notion of turning one's self into a work of art thus gets an unexpected confirmation: I buy my physical fitness by joining a gym. I buy my spiritual enlightenment by enrolling in courses on Transcendental Meditation. I buy my public persona by going to restaurants patronized by people with whom I want to be associated.

Let's return to the example of ecology. There's something deceptively reassuring in our readiness to assume guilt for threats to the environment. We like to be guilty. If we're guilty, then it all depends on us. We can save ourselves by changing our lives. What is difficult to accept (at least for us in the West) is that we are reduced to a purely passive role. We are just impotent observers who can only sit and watch what our fate will be. To avoid such a situation, we engage in frantic and obsessive activity. We recycle paper and buy organic food so we can believe we're doing something. We are like a sports fan who supports his team by shouting and jumping from his seat in front of the TV screen in a superstitious belief that this will somehow influence the outcome of the game.

The typical form of fetishist disavowal apropos ecology is "I know very well (that we are all threatened), but I don't really believe it (so I'm not ready to do anything important like change my way of life)." But there is also the opposite form of disavowal: "I know very well I can't really influence processes that can lead to my ruin, but it is nonetheless too traumatic for me to accept. I cannot resist the urge to do something, even if I know it is ultimately meaningless." Isn't this why we buy organic food? Who really believes that half-rotten and expensive "organic" apples are healthier? The point is that, by buying them, we do not just buy and consume a product; we simultaneously do something meaningful, show our care and global awareness and participate in a large collective project.

Today we buy commodities neither for their utility nor as status symbols. We buy them to get the experience they provide; we consume them to make our lives meaningful. Consumption should sustain quality of life. Its time should be "quality time"—not a time of alienation, of imitating models imposed on us by society, of the fear of not keeping up with the Joneses. We seek authentic fulfillment of our true selves, of the sensuous play of experience, of caring for others.

An exemplary case of "cultural capitalism" can be found in the Starbucks ad campaign that says, "It's not just what you're buying. It's what you're buying into." After celebrating the quality of the coffee, the ad continues: "But when you buy Starbucks, whether you realize it or not, you're buying into something bigger than a cup of coffee. You're buying into a coffee ethic. Through our Starbucks Shared Planet program, we purchase more fair-trade coffee than any company in the world, ensuring that the farmers who grow the beans receive a fair price for their work. We invest in and improve coffee-growing practices and communities around the globe. It's good coffee karma. Oh, and a little bit of the price of a cup of Starbucks coffee helps furnish the place with comfy chairs, good music and the right atmosphere to dream, work and chat in. We all need places like that these days. When you choose Starbucks, you are buying a cup of coffee from a company that cares. No wonder it tastes so good."

The "cultural" surplus is here spelled out. The price is higher because you are really buying the "coffee ethic," which includes care for the environment, social responsibility toward producers and a place where you can participate in a communal life (from the beginning Starbucks presented its shops as ersatz community spaces). If this isn't enough, if your ethical needs are still unsatisfied, if you continue to worry about Third World misery, there are other products you can buy. Consider the description Starbucks offers for its Ethos Water program: "Ethos Water is a brand with a social mission—helping children around the world get clean water and raising awareness of the world water crisis. Every time you purchase a bottle of Ethos Water, Ethos Water will contribute five cents toward our goal of raising at least $10 million by 2010. Through the Starbucks Foundation, Ethos Water supports humanitarian water programs in Africa, Asia and Latin America. To date, Ethos Water grant commitments exceed $6.2 million. These programs will help an estimated 420,000 people gain access to safe water, sanitation and hygiene education."

Authentic experience matters. This is how capitalism, at the level of consumption, integrates the legacy of 1968. This is how it addresses the critique of alienated consumption. A recent Hilton ad consists of a simple claim: "Travel doesn't only get us from place A to place B. It should also make us a better person." Can we imagine such an ad a decade ago? The latest scientific expression of this new spirit is the rise of happiness studies. But how is it that, in this era of spiritualized hedonism, when the goal of life is defined as happiness, anxiety and depression are exploding? It is the enigma of this self-sabotage of happiness and pleasure that makes Freud's message more timely than ever.


Authenticity and brand names are not mutually exclusive—authenticity echoes beneath every brand name.

Aristotle's Account of the Subjection of Women

Dana Jalbert Stauffer
The University of Texas at Austin
Published by: The University of Chicago Press on behalf of the Southern Political Science Association 
The Journal of Politics, Vol. 70, No. 4, October 2008
In recent years, several studies have argued that Aristotle saw the associations of the household as voluntary, mutually beneficial, and directed toward lofty aims. These studies have brought out genuine complexities in Aristotle's understanding of the relationship between the public and private spheres. But, in their characterization of Aristotle's view of the household, they miss the mark. While Aristotle discusses marriage and family in other places, he examines the hierarchical aspect of the relationship between men and women most fully in Politics I. Close examination of Politics I reveals that Aristotle thought that the subjection of women in the household was rooted in force.

For Aristotle, the best and highest form of human community is the political community. Other types of community, such as the household, are subordinate and inferior to the polis. The household is subordinate to the political community because the aim of life in the household is the mere preservation of life, or the satisfaction of life's daily needs, whereas the aim of membership in the political community is to live well. It is in the political community that man fulfills his telos or end by exercising his nature as a political animal. The household is also inferior to the political community in the character of its rule. In the household, one man rules, by virtue of his age and his sex, monarchically at best and tyrannically at worst. In the political community, it is possible for citizens to choose their rulers on the basis of merit, to share collectively in deliberation, and to share in rule itself, and thus to experience a form of republican government. The importance of the household, for Aristotle, lies in the fact that it liberates free men from concern with daily needs and provides them with the leisure to devote their time and energy to politics.

This is how Aristotle seems, at least, to present the relationship between the city and the household, or between the public and private spheres, in the Politics. In recent decades, some political theorists have found Aristotle's exaltation of the political a refreshing alternative, and a helpful corrective, to the tendency of modern liberal democracies to undervalue the political. However, at the same time, a number of excellent studies have challenged the conventional understanding of Aristotle's view of the public and private spheres, charging that it is too simplistic. Arlene Saxonhouse (1985), Judith Swanson (1992), and Darrell Dobbs (1996) have argued that Aristotle's treatment of the household is both more positive and more complex than is generally appreciated. They assert that while Aristotle says that the political community is the natural end of all human association, he also indicates that the household is in some respects the superior form of community. While conflicts of interest often characterize the relationship between citizens, stronger and firmer bonds, such as the shared interest of parents in the welfare of their children, unite the members of the household. In the political community, citizens vie for supremacy regardless of the merit of their claims, whereas the hierarchy in the household is rooted in nature. Saxonhouse, for example, writes that Aristotle sees the household as “a cooperative adventure in which the friendship between the members comes from a common concern for the welfare of the unit” (1985, 87). The family “appears to order itself naturally” and “to be founded on a natural hierarchy that the city composed of supposed equals can only pretend to approximate” (85). Dobbs writes that, in Aristotle's view, “the complementarity of man and woman” provides the basis for their association in the household.

The man and woman who share unselfishly in the work of procreation—who do not misconstrue the spousal relationship as merely an alternative mode of seeking comfort and security—are naturally excepted from the structures of domination that haunt both partners in self-centered, security-seeking relationships. (1996, 77–78)

Not only did Aristotle see the household as more natural than the political community in these ways, they argue, he also saw an important role for the household in sustaining political health. Far from viewing the household as aimed solely at the satisfaction of daily needs, Dobbs (1996) and Swanson (1992) contend, Aristotle regarded the household as the primary vehicle of moral education, the political community's most serious task. Stephen Salkever goes so far as to deny that Aristotle sees any difference between the aims of the household and those of the city: “For Aristotle … both polis and oikia, when truly, rather than nominally, such, aim at that virtue or excellence that is distinctly human” (1991, 175).1

Studies such as those of Salkever and Saxonhouse have succeeded admirably in bringing out the complexity of Aristotle's view of the relationship between the public and private spheres—a complexity that is not always noted by interpreters of Aristotle, but clearly there. For example, when Aristotle asserts that the abilities to perceive and communicate about the good and bad and the just and unjust make us “political animals,” he adds that “association in these things makes a household and a city” (1253a18).2 Clearly, then, the distinction between the aims of the household and the political community is not as stark as he suggests elsewhere. Rather, the aims of household and city overlap. Just as concern with the satisfaction of life's basic necessities is hardly absent from political life, neither is reasoning about the good and bad and the just and unjust absent from the household.

These studies show persuasively, in my view, that the conventional understanding of Aristotle's view of the private sphere and its relationship to the public sphere is too simplistic. However, in maintaining that Aristotle saw the household as an institution in which men practice a mild, mutually beneficial rule over willing subordinates, these studies introduce a distortion of their own. Their arguments draw heavily on Aristotle's discussions of marriage and family in the Nicomachean Ethics. And although the Ethics contributes to our understanding of Aristotle's overall view of the household, it is first and foremost to the Politics that we must look for his understanding of the political dimension of the relationship between man and woman. For it is in the Politics that Aristotle deals centrally with questions of hierarchy and authority—of why some rule and others obey.

In the Politics, Aristotle appears to present the subjection of women as part of a wholly natural social and political order. But careful study of Book I yields a much more complicated picture. Several interpreters have argued that Aristotle's treatment of slavery, in particular, has been misunderstood (e.g., Ambler 1985, 1987; Davis 1996; Frank 2004; Lord 1987; Nichols 1983). They maintain that although Aristotle holds that slavery could be natural under certain conditions, careful examination of Book I reveals that, in his view, slavery as actually practiced in Greece is rooted in force rather than in nature. Those who have made this argument concerning Aristotle's treatment of slavery, however, have stopped short of drawing a parallel between Aristotle's view of slavery and his view of the status of women. If anything, these interpreters argue that Aristotle means to draw a contrast between slavery and the subjection of women (see, for example, Ambler 1987, 398–99).

Aristotle does not, it is true, equate the subjection of women with slavery. But he does indicate important similarities between the two. While he gives the general impression that the household came about through the voluntary cooperation of all of its members, he quietly indicates that force played a considerable role in the origins of marriage. Moreover, Aristotle indicates that, in his own day, the household had not entirely transcended its brutal beginnings; the threat of physical force that helped bring about the rule of men over women continued to underlie and to shape the relations between the sexes.

To be sure, these are not the conclusions to which one is led by a cursory reading of Book I. To see the complexity in Aristotle's argument concerning the status of women requires a willingness to approach Book I with fresh eyes. Moreover, coming to appreciate that complexity, far from giving one a comprehensive interpretation of Book I, opens up a new and difficult question: why does Aristotle give the superficial impression that he regards the subjection of women—and, indeed, the household order in general—as much less problematic, and much more natural, than he indicates it is in the fine print, so to speak? Before attempting to address that question, however, let us first turn to the arguments of Book I with a view to uncovering Aristotle's true account of the subjection of women.

The Household's Beginnings in Politics I.2
Women, Slaves, and the Judgment of Euripides

Aristotle's description of the development of social and political life in the second chapter of Book I is one of the most famous parts of the work. It is the closest parallel in Aristotle's corpus to the accounts of man's emergence from the state of nature offered by modern political philosophers such as Hobbes, Locke, and Rousseau. Aristotle's account appears to be diametrically opposed to those of the modern philosophers, who depict free and equal beings living independently and apolitically, and forming political communities only after rational calculation suggests that self-preservation requires it. Aristotle gives the impression that human beings entered into association with one another in the household spontaneously and voluntarily, and that the growth of households led to the development of villages, which led, in a smooth progression, to the rise of cities. He appears to trace the household back to the natural human impulses to procreate and to cooperate with other human beings in the satisfaction of daily needs; and he seems to say that the roles that men, women, and slaves play in the household are in full harmony with their natures.

Underlying these surface impressions, however, are indications that the development of domestic and political life was not altogether smooth or peaceful.3 Aristotle's account of the relationship between men and women begins with an identification, at the beginning of Chapter Two, of the two basic associations from which the household develops.

Necessarily there must first be a union of those who cannot exist without one another, female and male, for the sake of reproduction—and this not out of choice, but, as in the other animals and plants, out of a natural impulse to leave behind something that is the same as oneself—and the natural ruler and subject, on account of security. For the one who can see, by means of the mind, is by nature ruler and master, and the one who can work, by means of the body, is by nature a slave. On this account, the master and slave have a common interest. (1252a26–34)

Aristotle thus locates the origins of the ruler-ruled relationship in the benefit, common to both ruler and ruled, derived from the rule of intelligence over the physically able. He presents the association between male and female as distinct from the association between ruler and ruled. The latter might be described as the joining together of “brains” and “brawn,” while the former is rooted in the impulse to procreate. As Wayne Ambler points out, even Aristotle's characterization of the male-female association refers to the sexes in the abstract; it does not address the relationship between men and women in its complexity (1985, 167). In particular, it does not explain why men rule over women, in addition to procreating with them (cf. Davis 1996, 19; Dobbs 1996, 77).

How and why does the association between man and woman take on a hierarchical character? Aristotle begins to answer this question by commenting on male rule among “barbarians,” or non-Greeks.

By nature the female has been distinguished from the slave. For nature makes nothing in the manner that the coppersmiths make the Delphic knife—that is, frugally—but, rather, it makes each thing for one purpose. For each thing would do its work most nobly if it had one task rather than many. Among the barbarians the female and the slave have the same status. This is because there are no natural rulers among them but, rather, the association among them is between male and female slave. On account of this, the poets say that “it is fitting that Greeks rule barbarians,” as the barbarian and the slave are by nature the same. (1252a34–b9)

Here, Aristotle introduces the teleological view of nature for which he is known. According to this view, a purposive force has arranged the world in the best possible way. Since the division of labor allows each worker to do his or her work “most nobly,” nature must have created each thing with a view to one task. Now, one might well use this reasoning to justify the place of women in the household. One might conclude that women are born to a role and a purpose different from that of men. And, given the importance he has just assigned to the procreative impulse in bringing men and women together, one might expect Aristotle to identify procreation as the task, or purpose, to which women are naturally directed. But Aristotle brings in his teleological view of nature here not to support the claim that nature has distinguished the female from the male, but rather, to support the claim that nature has distinguished the female from the slave. If each type of human being has been created with a view to one purpose, he reasons, then the common practice of using women as slaves is unnatural. In this way, Aristotle directs our focus not to the naturalness of the subjection of women, but rather to the fact that, among non-Greeks, the status of women is unnaturally low.

It is noteworthy that the aspect of the life of non-Greeks that bespeaks their incivility and justifies their subjection, in Aristotle's view, is their treatment of women.4 But why exactly, in Aristotle's analysis, do non-Greeks ignore the natural distinction between woman and slave? In what, precisely, does the barbarism of the barbarian consist? According to Aristotle, there are no natural rulers among the barbarians. But only barbarian women hold the rank or position (taxis) of slave. Among barbarians, then, naturally slavish men are nevertheless masters in rank. The principle of rule is clear enough: in the absence of “brains” to merit rule over “brawn,” “brawn” prevails; men rule by virtue of their superior strength. Outside of Greece, then, men rule women because they are stronger than women, and they use that strength to assert their authority.

This passage seems to indicate that the rule of Greek men over their women, by contrast, is not a matter of brute strength. Aristotle seems to say that this very fact—that, in Greece, relations between the sexes are determined by a higher principle than “might makes right”—establishes the Greeks’ greater civility. Hence the judgment of Euripides: “it is fitting that Greeks rule over barbarians.” This line comes from Euripides’ play Iphigenia at Aulis. The play takes place as the Greek army impatiently awaits a favorable wind to take them from Aulis to Troy. A prophet has declared that the gods will not send a favorable wind until the general Agamemnon makes a sacrifice of his daughter, Iphigenia. After initially begging her father for mercy, Iphigenia suddenly declares that she will martyr herself for the sake of Greece:

Sacrifice me, I say to Greece, and win Troy. This is my memorial, my marriage, my children, my duty, all you could wish for me. It is fitting that Greeks rule barbarians. They are born to be slaves as we are to be free. (1629–35)5

As Michael Davis (1996, 17) and Harvey Mansfield (2006, 205, 209) note, there is irony in citing, as proof that Euripides believed that the Greeks are especially civilized in their treatment of women and therefore deserve to rule, the words of a girl who is about to be sacrificed by her father. It is true that Iphigenia is not forced to sacrifice herself; she goes willingly. But what considerations lead her to that choice? Iphigenia “decides” to offer herself up to the army only once it has become clear that the Greek army is going to kill her one way or another, and the only question is whether Achilles is going to die defending her—and with him, any chance of Greek victory. Faced with this choice, Iphigenia chooses to comfort herself with the thought that her death will benefit Greece. Far from making a decision free from the pressure of force, then, Iphigenia acquiesces in the face of overwhelming force.6

In explaining her decision, Iphigenia argues that, in dying, she contributes to the noble aim of the war, which is to protect the women of Greece from the barbarians. A few moments earlier, however, Agamemnon points out that the army clamors for Iphigenia's blood, and that if they do not get it, they are likely to turn on Argos and slay him and his family in their beds. The great cause on behalf of which Iphigenia believes herself to be dying, the cause of “Greece,” is in reality a conglomeration of city-states just as ready to fight one another as they are to struggle in common against Troy. He says that the alleged concern to protect the women of Greece from the barbarians is not a genuine concern but a pretext offered by the Greek army for a war they want to fight for the sake of vengeance. Helen herself was not kidnapped, but ran off willingly with another man; she is not an innocent victim, but a “whore” (71–72; 435).

This is hardly a story that bespeaks the civility of the Greeks toward women, or the Greek transcendence of the role of brute force in male-female relations. It is hardly the play of a poet who believes in “Greece.” Could all of this have been lost on Aristotle when he approvingly cites Iphigenia's assertion that “it is fitting that Greeks rule barbarians” as the judgment of the poets on Greece? At the very least, Aristotle's use of this quote weaves into his account a thread of doubt as to the genuine superiority of the Greeks (cf. Ambler 1987, 393; Frank 2004, 101). He leaves us wondering whether the early Greek treatment of women was really so different from that of the barbarians, or whether it, too, did not fall short of nature's dictate that women ought to be distinguished from slaves.

The Formation of the Household: Wives, Oxen, and the Case of Perses

Continuing his account of the origins of the household, Aristotle says that the household “first arose from these two associations,” male-female and ruler-ruled. Once again, he cites a poet as evidence.
Thus rightly Hesiod spoke the line, “A house first, then a wife, and then an ox for plowing,” for an ox stands in for a servant among the poor. This association, that has come about by nature with a view to the daily things, is a household, which is why Charondas calls the members of a household “peers of the mess” and Epimenides of Crete calls them “peers of the manger.” (1252b10–15)

Earlier, Aristotle said that male and female were drawn together by the natural impulse to procreate. Now we learn that the union of men and women in the household exists to satisfy daily needs, especially the need for food. The role of women in the household, then, is multifaceted; they are mothers, maids, and cooks. But if “nature makes each thing for one purpose,” then the question arises: What is the relationship of women's multifaceted role in the household to nature? And if the subordinate, multifaceted role of women is natural, what are the grounds of its naturalness? If “brains” and “brawn” are brought together by the mutual benefit each derives from the rule of the former, what brings men and women into a hierarchical association with one another, with a view to the daily needs of life?

To answer this question, several interpreters look to the Nicomachean Ethics (Dobbs 1996, 75, 78–79; Salkever 1991, 181; Saxonhouse 1985, 84; Swanson 1992, 52–55). There Aristotle suggests that marriage is rooted, like the union of “brains” and “brawn,” in complementary abilities.

The love between man and wife seems to be in accord with nature. For the human being is by nature more a coupling being than a political one, insofar as the household is older and more necessary than the city, and the human being has procreation more in common with the other animals. Among other animals the association goes just this far, whereas human beings live together not only for the sake of procreation but also for the things of life. For from the beginning the tasks are divided, the husband and wife each having their own; they help one another by each contributing his or her own part to their common life. (1162a16–24)

As Aristotle presents marriage in this passage, husband and wife each contribute to the needs of the household in accord with their respective abilities. Not only tasks, but authority, too, are divided and distributed on the grounds of natural suitability. "For the husband rules on account of merit, and in the realm that requires a man. Whatever realms are suited to a woman, he gives to her" (1160b33–35).

In the Ethics, then, Aristotle roots marriage in a natural complementarity between man and woman. In the Politics, however, Aristotle points to a different account of the origins of marriage. To illustrate how the household grows out of the two basic associations of male and female and master and slave, as we noted, he quotes Hesiod: “A house first, then a wife, and then an ox for plowing.” This line is from Hesiod's Works and Days, in which Hesiod advises his brother, Perses, about how to put a life of degeneracy behind him. Hesiod urges Perses to a life of honest work as the only reliable protection against destitution. Contrary to what we might expect given Aristotle's argument, Hesiod does not counsel Perses to get a woman with a view to procreation. (Indeed, far from encouraging Perses to fulfill this natural impulse, Hesiod cautions against such entanglements: “Do not let any sweet-talking woman beguile your good sense with the fascinations of her shape. It's your barn she's after,” 372–74).7 Rather, he counsels Perses to get a woman to work for him, to drive his plow.
First of all, get yourself an ox for plowing, and a woman—for work, not to marry—one who can plow with the oxen, and get all necessary gear in your house in good order, lest you have to ask someone else, and he deny you, and you go short, and the seasons pass you by, and your work be undone. (405–409)

If Perses follows his brother's advice, then, he will not “take” a woman with a view to procreation. Rather, Hesiod advises Perses to get a woman because, as Aristotle helpfully points out, male slaves are expensive. Like an ox, a female slave is cheap help. There is no suggestion that Perses will acquire a female servant with a view to her interests, or even with a view to a common good that might arise between the two of them. Moreover, there is no suggestion that he will allow her a sphere of her own authority, or that he will assign her tasks on the basis of natural suitability; even if women are naturally suited to “getting household gear in order,” are they naturally suited to ox driving?

The account of the origins of marriage pointed to by this reference to Hesiod is, thus, quite different from the account offered in the Ethics. In both the Ethics and the Politics, Aristotle begins his account of marriage by observing that males and females are drawn together by a natural impulse to procreate. But men and women have been procreating for as long as human beings have existed. His reference to the Hesiod quote in the Politics suggests that the household formed—and women came under the rule of men—not because such an arrangement was mutually beneficial, but rather, because men began to enlist women forcibly in the satisfaction of their own daily needs (cf. Mansfield 2006, 208–209; Nagle 2006, 85–86).

Why might Aristotle present marriage differently in the two works? In the Ethics, Aristotle considers marriage in the context of a discussion of love and friendship. His primary concern is not the basis of men's rule over women, but the character and basis of the friendship between husbands and wives. Thus, it makes sense that he would focus on the common goods that are potentially present in marriage, for such goods are foundations of marital affection. But such common goods are not necessarily present in marriage, nor is it likely that marriage began with a view to such goods. This is not so important in the Ethics, and it may even be essential to an account of friendship in marriage to refrain from looking too hard into the precise reasons that men rule. But in the Politics, one of Aristotle's main aims is to illuminate the nature of the hierarchies that exist in the political community and its subordinate communities. Thus, it makes sense that he would indicate in this work, albeit quietly, the true origins of male rule (cf. Saxonhouse 1982, 206).

Polygamy and Savagery: The Character of Early Household Rule

If Hesiod gives us insight into how the early household formed, Homer gives us insight into how it functioned. Moving forward in his account of the development of political community, Aristotle argues that households gradually joined together to form villages.

Just as all households were ruled monarchically by the oldest, so too were the villages, on account of kinship. This is what Homer means in saying “Each ruled over his children and wives,” for they lived dispersed from one another. Thus did ancient men live. (1252b19–24)

This line comes from Homer's account in the Odyssey of the Cyclopes. These one-eyed creatures appear as the epitome of barbarism; they eat their guests. Homer's description of the way of life of the Cyclopes is unequivocal: uninterested in the affairs of their neighbors, each of these brutes exercised a lawless rule over his family (Odyssey IX.112–15). In addition to indicating the despotic character of early patriarchal rule, Aristotle's reference to Homer's description of the Cyclopean household introduces an interesting wrinkle into the argument, for each of the Cyclopes ruled over his children and wives or bedfellows (alochon), in the plural, suggesting that these early patriarchs were polygamous. This is significant. It underscores the brutality of the conditions in the early household, and the abysmally low status of women. Michael Davis goes so far as to conclude that "prior to the polis, there are no husbands and wives. By itself, the household cannot preserve the distinction between women and slaves" (1996, 24).

If the households of early patriarchs resembled those of the Cyclopes, then, at some point, the household underwent a major change from polygamy to monogamy. How and why might this have happened? If the early patriarchs were a law unto themselves, it is not likely that a constriction of their power resulted from a revolution from within. Perhaps, as populations grew, the men who found themselves without women objected to the hoarding of women by the patriarchs; perhaps this coincided, as Davis suggests, with the rise of political authorities who could establish laws regulating the behavior of individual patriarchs (1996, 24–27). In support of this, Aristotle concludes Chapter Two by remarking that, although everyone has in himself an impulse toward political community, the first founder of a city should be regarded as a great benefactor because it is in the city that virtue and justice develop. Without virtue and justice, man is the most savage of all animals, especially with respect to food and sex (1253a29–39). The emergence of political life, then, allows the household to become more than a means for savage men to gratify their desires.

If we have any lingering doubt about whether the early Greeks treated women as property, confirmation comes in Book II of the Politics. Having moved on to other matters, Aristotle momentarily drops the façade that he constructed in Book I of the superior civility of the early Greeks. Considering the possibility that one should not necessarily regard changes in laws as bad, he remarks, “One might say that the facts themselves are the proof, for the ancient laws were overly simplistic and barbaric. The Greeks used to carry weapons and buy their wives from one another” (1268b38–42). It is telling that the two practices went together; when men are constantly armed, it is a sign that their society relies heavily on the threat of force to sustain law and order.

In sum, Aristotle's aim in Chapter Two of Book I is to show that the city arose naturally, out of subordinate associations that are themselves natural. But the details of his account of the formation of the household indicate otherwise. After arguing that women's position in the household should be completely distinct from slavery, having a different aim and basis, Aristotle indicates that, in the early household, the man-woman relationship was not completely distinct from the master-slave relationship, either in its origins or in the character of the rule to which women were subject. The association between men and women in the early household aimed at the satisfaction of daily needs, and it was directed primarily to the needs of the ruler rather than to those of the ruled.

The Rule of Men, Understood in Light of Its Origins

If the manner in which men acquired wives and governed them in the earliest times did not accord with nature, perhaps this should not be a surprise. For Aristotle says in Chapter Two that "nature is an end (telos), and we say that a thing's nature is what it is when its generation has reached its end, whether it be a man or a horse or a household" (1252b32–34). If the household began barbarically, it also became more civilized as political life developed. The domination of women by men gradually became less despotic and less extreme (Dobbs 1996, 86; Nagle 2006, 30). The crucial question, though, is this: After the emergence of political life brought with it "virtue and justice," how much more civilized did patriarchal rule become? Did the household of the polis transcend its barbaric beginnings?

In the rest of Book I, Aristotle speaks to the character of household rule in the life of the developed polis. He continues to characterize the rule of men in ways that suggest that superior physical strength lies behind their rule. The first relevant remark comes in Chapter Five, in Aristotle's discussion of slavery. The question Aristotle considers in this chapter is whether any human beings can be rightfully described as natural slaves. Aristotle first has recourse to the general concepts of ruler and ruled; rule and obedience, he says, are necessary and advantageous. Whatever is constituted by a number of things and yet becomes a single thing has a ruling and ruled element, he argues, such as musical harmony (1254a17–32). The difficult question, of course, is whether this sort of union ever exists between human beings. Aristotle notes that in animals, at least in the best animals, the soul rules over the body. In the well-ordered human being, the soul rules over the body, and reason rules over the other parts of the soul. That this is natural and good is shown, he says, by the fact that it is good for the body to be ruled by the soul, and harmful to both if the order is reversed. The same is true, he notes, of human beings’ rule over animals: being ruled by men ensures preservation for tame animals. Next, he says, “further, the relation of male to female is one of superior to inferior, and ruler to ruled. And it must be the same way for all human beings” (1254a32–b16).

Aristotle appears here to confirm the naturalness of slavery and the subjection of women. But on what grounds? There is an important difference between what Aristotle says about the rule of male over female and what he says about the other natural hierarchies: in the rule of the soul over the body and of human beings over animals, a common good derives from the rule of the superior element. Aristotle says that men are “superior” and women are “inferior,” but he does not say that the rule of men results in a good common to both sexes. Most important, the primary meaning of the word he uses for superior (kreitton) is not wiser or more virtuous, but stronger, mightier, and more powerful (see also Davis 1996, 24). Now, if Aristotle had indicated clearly in Chapter Two that the subjection of women originated in a common good between men and women, we might be inclined not to place much weight on Aristotle's choice of this word. But, in light of what we have seen, we have to wonder: Is Aristotle saying that the rule of men over women is natural in the same way that the rule of a soul over a body is natural? Or is he saying that it is natural in a different sense—perhaps in the sense that the rule of the stronger is natural? By using the word kreitton, and by neglecting to affirm that a common good derives from the rule of males over females, Aristotle leaves the precise reason that men “naturally” rule over women ambiguous (cf. Ambler 1987, 398; Matthews 1986, 18–19).

After discussing slavery and acquisition in the middle chapters of Book I, Aristotle returns to the topic of women in Chapter Twelve. He asserts that slaves, children, and wives are each ruled differently: a slave is ruled despotically, a child monarchically, and a wife politically. “For the male,” Aristotle writes, “unless, I suppose, he is constituted contrary to nature, is fitter to command than the female, and the elder and mature is fitter to command than the younger and immature” (1259b1–4). As Saxonhouse hastens to point out, although these lines provide a rationale for the rule of men over women, Aristotle admits here that reality does not always correspond with nature's intention. At least in some cases there is a departure from nature—that is, a husband is less fit to rule than his wife, but he rules anyway. “We cannot be assured that nature is in control at all times” (Saxonhouse 1985, 71; see also 1986, 413; Nichols 1992, 30). Aristotle's assertion about the naturalness of male rule, like his doctrine of natural slavery, does not justify the status quo; it sets up a standard for judging it. Beyond this, though, if the male is by nature “fitter to command” (hegemonikoteron), the key question is, of course, fitter in what way? In light of Aristotle's earlier statement that the relation of male to female is that of “stronger to weaker,” we have to wonder: Are men fitter to command in the sense that they are smarter and better? Or are they fitter to command in the sense that their superior strength gives them the ability to enforce their commands? Once again, Aristotle leaves the precise character of the natural basis of the subjection of women unclear.

Aristotle pairs this ambiguous explanation of the naturalness of male rule with the statement that rule of husbands over wives is political. With this statement, the problematic character of the status of women comes most clearly to the fore. Earlier, Aristotle said that men rule their households as kings (1252b20–21). His new statement that husbands rule their wives politically seems to revise that account. By characterizing the rule of men over women as political, Aristotle acknowledges that women are not children any more than they are slaves; they are, in some important sense, the equals of men. For a thinker who appears to advocate unreservedly the subjection of women, such an acknowledgment is striking. And if not for the complexities and nuances that we have observed in his treatment of the subjection of women thus far, this acknowledgment would come as an abrupt and rather drastic shift. When it is read, however, in light of the complexities and nuances that we have observed, Aristotle's acknowledgement is no surprise at all; rather, it reads as a first step in the full and final surfacing of a problem that Aristotle has been quietly indicating, but struggling to avoid confronting directly, all along.

Aristotle gives only indirect indications of why the rule of husbands over wives should be understood as political. Of kingly rule, he says: “It is necessary that a king differ from his subjects by nature, but be of the same stock. This is the case of the elder and younger and parent and child” (1259b14–17). If it is not appropriate for husbands to rule their wives monarchically, it could be because husband and wife are not “of the same stock.” Perhaps the fact that the bond between husband and wife is conventional, and weaker, than that between parents and children makes men less likely to use unbridled monarchical authority benevolently over wives than over children. But kingly rule also requires that the ruler differ from his subjects “by nature”; perhaps husband and wife are not different enough in their natures to justify such rule.

As soon as Aristotle indicates that marital rule is political, he acknowledges a difficulty in understanding it in this way. Aristotle explains that although the rule of a husband is political, it lacks the main characteristic of political rule—namely, that it is temporary (cf. Bradshaw 1991, 563–64). Free citizens take turns ruling and being ruled, Aristotle says, “since the members of a political association wish by their very nature to be equal and to differ in nothing” (1259b5–6). And yet, Aristotle continues, “when one rules and the other is ruled, he [the ruler] seeks to differentiate himself in external appearances and speeches and honors, just as Amasis said in the story of his footpan. The male always stands thus in relation to the female” (1259b6–10). Aristotle's reference to Amasis, punctuated by his remark that the male “always” stands thus in relation to the female, helps us to see why marital rule cannot be characterized simply as political. Amasis was a man of low birth who became king of Egypt. He had a footbath made of gold, and when he became king he had it melted down and reshaped into a statue of a god. When his subjects worshipped the statue he told them, “If you can worship one day what you urinated into the day before, you can defer to me as your ruler” (Herodotus ii.172). Amasis seeks deference from his subjects, then, despite the fact that he is not necessarily superior to them. By directing us to this story as a way of understanding the relationship between husband and wife, Aristotle seems to be suggesting that, even though men rule their wives as equals, nevertheless, as rulers, men seek the marks of inequality—“distinctions in external appearances and speeches and honors.”

Now, if the members of a political association "wish by their very nature to be equal" and "to differ in nothing," the first question that arises is why those who rule such an association would seek to create distinctions between themselves and their subjects. The answer would seem to be that, without such distinctions, it is impossible to rule. The members of a political association merely "wish" to be equal; rule, even political rule, requires a degree of inequality. But a second question also arises that is much harder to answer: why would the ruling member of an association of equals be entitled to distinctions of any sort? Amasis comes to power by chance, and he seeks deference on the grounds of his insight that the distinction between the high and the low, or between the ruler and the ruled, is a matter of form rather than of substance. But if Amasis’ insight applies to men and women—if men are not intrinsically superior to women—then how is the permanent rule of men over women justified (cf. Dobbs 1996, 78; Mulgan 1994, 188; Nagle 2006, 167–70; Nichols 1992, 29–31; Saxonhouse 1985, 72; Swanson 1999, 237–38)?

This question becomes the central focus of Chapter Thirteen, the final chapter of Book I. In this chapter, Aristotle finally confronts squarely the question: why should the head of the household rule over his wife, children, and slaves—especially his wife? He approaches this question by way of the questions of whether and how subordinate members of the household can possess virtue. First, he asks whether it is possible for slaves to possess virtues such as moderation, courage, and justice. “For if it is [possible for slaves to possess these virtues], then how are they different from free persons? But if it is not possible, it is strange, since they are human beings and share in reason” (1259b26–28). After beginning in this way, Aristotle wonders if the same question might not be raised with respect to women and children, adding:

And, more generally, we must investigate about the natural subject and the ruler, whether virtue is the same or different. For if it is necessary for both to have gentlemanliness, on what account could we say that one must rule and the other be ruled, once and for all? (1259b32–36)

Aristotle's use of the word meaning “once and for all” (kathapax) suggests that he is thinking especially of women, for only in the case of women has he explicitly raised the permanence of their subjection as a problem. He stresses that the difference in the virtue of ruler and ruled cannot be simply a matter of degree: “being ruled and ruling differ in kind, not by greater and less” (1259b36–38).

The answer at which Aristotle seems to arrive in Chapter Thirteen is that men and women have different kinds of virtue: “It is clear that it is necessary for both to have virtue, but also that their virtues must differ, just as those who are natural subjects differ [from those who rule by nature]” (1260a2–4). But this conclusion is beset with difficulties. The reasoning that leads Aristotle to it begins from “the nature of the soul.”

For in the soul there is naturally a ruling and ruled part, and we say of both reason and the irrational part that there is virtue in each. It is clear that the same thing holds in other things as well, just as by nature most things are ruling and ruled. The free person rules the slave, the male the female, the man the child, but they do so differently. All have the parts of the soul, but they have them differently: the slave is wholly lacking in the capacity to deliberate; the female has it, but it lacks authority; the child has it, but it is incomplete. (1260a5–14)

Once again, Aristotle offers a rationale for the subjection of women. But its meaning, like that of similar statements that have preceded it, is not entirely clear. As Saxonhouse points out, the phrase "the female has reason, but it lacks authority" may mean that a woman's reason lacks authority in her own soul, or it may mean that her reason lacks authority in the world, i.e., with men (Saxonhouse 1985, 74; see also Dobbs 1996, 85; Levy 1990, 404–405; Nichols 1992, 31; Smith 1983, 475–77; and Zuckert 1983, 194; cf. Achtenberg 1996; Homiak 1996). In support of the latter reading, Saxonhouse points to Aristotle's final literary reference in Politics I. To illustrate that certain virtues are specific to women, he cites a line from Sophocles’ Ajax: "To woman, silence is an adornment" (1260a30). This line seems to mean that women should submit silently to the commanding reason of their husbands. And yet Ajax speaks this line to tell his wife Tecmessa to keep quiet when she is attempting to give him life-saving advice, advice that he does not take, to his great detriment. The quotation expresses quite aptly, then, that women's reason may be sound, but nonetheless lack authority with men (Saxonhouse 1985, 74–75; see also Davis 1996, 26; Nichols 1987, 132–33; cf. Kraut 2002, 214–15; Modrak 1994). This interpretation of Aristotle's remark would seem less plausible if it required us to conclude that, after arguing throughout Book I that men are morally and intellectually superior to women, suddenly, in the last chapter, Aristotle calls the basis of male rule into question. But our examination of Book I has revealed the continuity in Aristotle's account of the rule of men. By saying that women "have reason, but it lacks authority," Aristotle once again allows himself to be interpreted in different ways. He could mean that women are intellectually inferior to men, or he could mean that men's superior strength lies behind their rule.

From his assertion that men, women, children, and slaves possess reason in different ways, Aristotle extrapolates that they must also possess moral virtue differently.

So then we must suppose that it is necessarily similar in the case of the moral virtues: it is necessary for all to have them, but not in the same way, and each must have as much as is enough for his own work. Thus it is necessary for the ruler to have complete moral virtue … while the others must have as much as falls to them. So it is clear that there is a moral virtue of all of those we have spoken of, but that the moderation of the man and the woman is not the same, nor is their courage or justice, as Socrates suggested. Rather, there is a ruling and a serving courage, and the same is true with respect to the other virtues. (1260a14–24)

The fact that free men, women, children, and slaves have different “works,” or tasks, seems to provide the grounds for asserting that their virtues differ. And yet, in explaining how the differences in the tasks of each of these groups bear on their possession of moral virtue, Aristotle falls back into the language of degree: Each of these groups must have “enough virtue for his own work,” and each of the subject members of the household must have “as much virtue as falls to them.” Aristotle thus leaves us to wonder whether the difference between the ruling and the serving forms of courage, for example, is primarily one of substance or of mere degree (see also Salkever 1990, 186). Moreover, Aristotle here speaks of which virtues are necessary in men, women, and children. The original question was whether it is possible for women and slaves to possess the moral virtues in their full-fledged forms. The conclusion Aristotle draws, then, does not answer this original question. Finally, Aristotle says only that “we must suppose” (upolepteon) that it is necessary that men, women, children, and slaves possess the moral virtues in the same ways in which they possess reason. The question is, what is the necessity dictating what “we must suppose?” Must we suppose that men, women, and slaves possess the moral virtues differently because they do, in fact, possess them differently? Or must we suppose that they possess the moral virtues differently because it is only on that basis that the household order will be vindicated as natural? By neglecting to clarify these aspects of his argument, Aristotle stops short of affirming decisively that the household order has a solid basis in nature.

Conclusion: The Household, the City, and Nature

Aristotle's ostensible intention in Book I of the Politics is to establish the naturalness of the political community and of its constituent parts, the village and the household. But, as we have seen, the details of his account of how the most basic element of the social order came into being in Chapter Two tell another story: the rule of men over women in the household began in force. In the rest of Book I, Aristotle continues to speak in ways consistent with the view that the basis of male rule is superior physical strength. He offers a number of rationales for the naturalness of the subjection of women. But those rationales are both ambiguous in their meaning and conspicuously limited. In particular, Aristotle never affirms that the strongest rationale for the naturalness of an association—namely, that it serves a common good—applies to the rule of men over women in the household. Finally, in the concluding chapter of Book I, he raises the question of the justice of the household order, and the answer he offers to that question is incomplete at best.

Aristotle seems to have thought that, within the context of developed political life, some reform of the household was possible. He tries to bring his readers to see that treating women as slaves violates nature, and he encourages them to rule their wives as equals. He seems to have thought that the household had the potential, then, to become more like the community that Saxonhouse, Dobbs, and Salkever envision, full of mutual affection and aimed at a common good. Still, Aristotle's efforts to improve the status of women indicate that he did not think women were typically accorded sufficient respect. On the contrary, his efforts on this front suggest that the tendency of household rule is toward despotism and exploitation rather than toward republicanism and benevolence.8

While Aristotle thought that the household might be improved, he gives no indication that he thought that the association of men and women in the household could ever become one of genuine equality. If this is true, and if it is also true that Aristotle doubted the justice of the hierarchy within the household, one might well wonder why Aristotle did not favor abolishing the household, as proposed, for example, by Socrates in Plato's Republic. Aristotle takes up this Socratic proposal directly in Book II. Against Socrates’ claim that abolishing private families would allow all of the citizens to feel as though the city was one big, united family, Aristotle argues that the real consequence of abolishing the private family would be that no one would feel strong connections of kinship with anyone else. Just as wine becomes weaker when it is diluted with water, he argues, so, too, feelings of love or friendship are weakened when they are spread out among an entire city or class of citizens (1262b17–22). Rather than experiencing all of their fellow citizens as their own kin, people living under such a system would experience nothing and no one as their own. Aristotle argues that this is objectionable on two grounds. First, to subject citizens to such an arrangement would be to deprive them of the pleasure that human beings naturally take in what is their own. He says that the difference between the pleasure that human beings take in what is common and the pleasure that they take in their own is “inexpressible” (amutheton). This is true by nature; nature makes us love ourselves (1263a40-1263b1). Nature further directs us toward loving our family members by pointing them out to us. Aristotle remarks that Socrates’ scheme would not work because the guardians would be able to identify their children through family resemblance (1262a14-24). Not only does nature instill in us a preference for our own, then, but it also obstructs attempts to prevent such a preference from developing.

Aristotle seems to have judged that, even if the household is not natural in all respects, it is natural in this respect, that it expresses the powerful tendency to love and to take pleasure in one's own. On this reasoning, the household is rooted not only in the nature of men but in human nature, for the tendency to love and to take pleasure in one's own is certainly not limited to men.9 In addition, Aristotle seems to have thought that the household was essential to the health of the political community. He argues that abolishing the household as Socrates proposes would have grave political consequences. First, it would make the city weaker, for friendship is what prevents a city from splitting into factions (1262b7–9). Second, it would lead to neglect. People give the least care to what is common, he observes; they love and care for what is their own (1261b33–38, 1262b22–23). If wives and children were held in common, crimes against family members and incest would increase (1262a25–32, 1262b29–35). Fathers, too, would cease to concern themselves with education. In a city in which each man has 1,000 sons, Aristotle says, no one is the son of any one man, but each is the son of all equally; the result will be that all sons will be neglected (1261b38–40; see also Zuckert 1983, 193; cf. Saxonhouse 1985, 80–84).

Finally, Aristotle notes that an experience of private ownership, both of goods and people, is necessary to the experience of moral virtue. One needs private property in order to be generous by using one's property to help friends and family members, and one needs the existence of the private family to be moderate by abstaining from other men's wives (1263b7–14). Without the division of interests among men created by private possessions, it seems, there can be no possibility of self-overcoming or of self-restraint. Abolishing the private household, then, would undermine one of the greatest benefits of political community—namely, that it allows moral virtue to develop and flourish.

These points can help us to understand why Aristotle does not voice his criticisms of the household order more loudly. As problematic as the household may be, it is a crucial support to political life. And yet this is not to say, as has been argued by many interpreters of Aristotle, that in accepting and endorsing the private household, Aristotle simply sacrifices women and slaves so that free men can reap the rewards of political life (Arendt 1958, 31, 37; Coole 1988; Elshtain 1981; Nussbaum 2001, 370; Okin 1979; Spelman 1994; Zuckert 1983, 195). It is true that the household provides free men with at least a partial liberation from concern with practical necessities, and Aristotle does argue that such liberation is necessary for human beings to devote themselves fully to political life and to the pursuit of virtue (1328b33–a2, 1278a8–11). But one of the things that my examination of Book I of the Politics has shown is that the development of virtue and justice that takes place in the political community benefits the weak at least as much as it does the strong. In giving expression to man's political nature, the political community opens up the prospect of more civilized relations among all of those who live within it. The development of virtue and justice restrains and moderates men, and thus acts as a check on their authority. The growth and flourishing of political life is, for this reason, a good common to both women and men, even if they partake of that good in different ways.

Aristotle may well have judged, then, that the natural impulses leading human beings into households are so strong, and that attempts to abolish the household are so impracticable, that a radical transformation of the traditional social structure would not be possible. On this view, blatantly exposing the defects of the household order would not bring about radical reform. But it would weaken and undermine the strength and health of the very thing that most improved the household, the political community. If such was his reasoning, then Aristotle's task in discussing the household in Book I of the Politics was exceedingly delicate. He had to present the household in such a way as to indicate its inferiority to the political community, and to bolster the supremacy of political authority over domestic authority. But he also had to present the household as a fundamentally good thing; he had to tread lightly, in other words, on its flaws. Still, Aristotle aimed to do more in the Politics than foster politically and socially salutary views. He also sought to convey the truth. And so, while he shined a brighter light on the more positive, attractive aspects of the household than he did on its uglier ones, he shined at least a dim light on all of them. Our understanding of Aristotle's account of the household in the Politics will remain defective and incomplete unless we see that, within that account, Aristotle indicates that the hierarchy in the household rests in no small part on superior physical strength.

References

  • Achtenberg, Deborah. 1996. “Aristotelian Resources for Feminist Thinking.” In Feminism and Ancient Philosophy, ed. Julie K. Ward. New York: Routledge, 95–117.
  • Ambler, Wayne. 1985. “Aristotle's Understanding of the Naturalness of the City.” Review of Politics 47 (2): 163–85.
  • Ambler, Wayne. 1987. “Aristotle on Nature and Politics: The Case of Slavery.” Political Theory 15 (3): 390–410.
  • Arendt, Hannah. 1958. The Human Condition. Chicago: University of Chicago Press.
  • Bradshaw, Leah. 1991. “Political Rule, Prudence and the ‘Woman Question’ in Aristotle.” Canadian Journal of Political Science 24 (3): 557–73.
  • Coole, Diana H. 1988. Women in Political Theory: From Ancient Misogyny to Contemporary Feminism. Boulder, CO: Lynne Rienner.
  • Davis, Michael. 1996. The Politics of Philosophy: A Commentary on Aristotle's Politics. Lanham, MD: Rowman and Littlefield.
  • Dobbs, Darrell. 1996. “Family Matters: Aristotle's Appreciation of Women and the Plural Structure of Society.” American Political Science Review 90 (1): 74–87.
  • Elshtain, Jean Bethke. 1981. Public Man, Private Woman: Women in Social and Political Thought. Princeton, NJ: Princeton University Press.
  • Euripides. 1998. “Iphigenia at Aulis.” In Euripides 3, ed. David R. Slavitt and Palmer Bovie, trans. Elaine Terranova. Philadelphia: University of Pennsylvania Press.
  • Frank, Jill. 2004. “Citizens, Slaves, and Foreigners: Aristotle on Human Nature.” American Political Science Review 98 (1): 91–104.
  • Hesiod. 1991. “Works and Days.” In Hesiod, trans. Richmond Lattimore. Ann Arbor: University of Michigan Press.
  • Homiak, Marcia. 1996. “Feminism and Aristotle's Rational Ideal.” In Feminism and Ancient Philosophy, ed. Julie K. Ward. New York: Routledge, 118–40.
  • Kraut, Richard. 2002. Aristotle: Political Philosophy. Oxford: Oxford University Press.
  • Levy, Harold. 1990. “Does Aristotle Exclude Women from Politics?” Review of Politics 52 (3): 397–406.
  • Lindsay, Thomas K. 1994. “Was Aristotle Racist, Sexist, and Anti-Democratic? A Review Essay.” Review of Politics 56 (1): 127–51.
  • Lord, Carnes. 1987. “Aristotle.” In The History of Political Philosophy, 3rd ed., ed. Leo Strauss and Joseph Cropsey. Chicago: University of Chicago Press.
  • Mansfield, Harvey C. 2006. Manliness. New Haven, CT: Yale University Press.
  • Matthews, Gareth B. 1986. “Gender and Essence in Aristotle.” Australasian Journal of Philosophy 64 (supplement): 16–25.
  • Mill, John Stuart. 1988. The Subjection of Women, ed. Susan Moller Okin. Indianapolis, IN: Hackett.
  • Modrak, Deborah K. W. 1994. “Aristotle: Women, Deliberation, and Nature.” In Engendering Origins: Critical Feminist Readings in Plato and Aristotle, ed. Bat-Ami Bar On. Albany: State University of New York Press, 207–22.
  • Mulgan, Richard. 1994. “Aristotle and the Political Role of Women.” History of Political Thought 15 (2): 179–202.
  • Nagle, D. Brendan. 2006. The Household as the Foundation of Aristotle's Polis. Cambridge: Cambridge University Press.
  • Nichols, Mary P. 1983. “The Good Life, Slavery, and Acquisition: Aristotle's Introduction to Politics.” Interpretation 11 (2): 171–84.
  • Nichols, Mary P. 1987. Review of Saxonhouse (1985). Review of Politics 49 (1): 130–33.
  • Nichols, Mary P. 1992. Citizens and Statesmen: A Study of Aristotle's Politics. Savage, MD: Rowman & Littlefield.
  • Nussbaum, Martha. 2001. The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy. Cambridge: Cambridge University Press.
  • Okin, Susan Moller. 1979. Women in Western Political Thought. Princeton, NJ: Princeton University Press.
  • Salkever, Stephen. 1990. Finding the Mean: Theory and Practice in Aristotelian Political Philosophy. Princeton, NJ: Princeton University Press.
  • Salkever, Stephen. 1991. “Women, Soldiers, Citizens: Plato and Aristotle on the Politics of Virility.” In Essays on the Foundations of Aristotelian Political Science, ed. Carnes Lord and David K. O'Connor. Berkeley: University of California Press, 165–90.
  • Salkever, Stephen. 1993. Review of Nichols (1992) and Swanson (1992). American Political Science Review 87 (4): 1004–1006.
  • Saxonhouse, Arlene. 1982. “Family, Polity, and Unity: Aristotle on Socrates’ Community of Wives.” Polity 15 (2): 202–19.
  • Saxonhouse, Arlene. 1985. Women in the History of Political Thought. Westport, CT: Praeger.
  • Saxonhouse, Arlene. 1986. “From Tragedy to Hierarchy and Back Again: Women in Greek Political Thought.” American Political Science Review 80 (2): 403–18.
  • Smith, Nicholas D. 1983. “Plato and Aristotle on the Nature of Women.” Journal of the History of Philosophy 21 (4): 467–78.
  • Spelman, Elizabeth V. 1994. “Who's Who in the Polis.” In Engendering Origins: Critical Feminist Readings in Plato and Aristotle, ed. Bat-Ami Bar On. Albany: State University of New York Press, 99–125.
  • Swanson, Judith A. 1992. The Public and the Private in Aristotle's Political Philosophy. Ithaca, NY: Cornell University Press.
  • Swanson, Judith A. 1999. “Aristotle on Nature, Human Nature, and Justice.” In Action and Contemplation: Studies in the Moral and Political Thought of Aristotle, ed. Robert C. Bartlett and Susan D. Collins. Albany: State University of New York Press, 225–47.
  • Zuckert, Catherine. 1983. “Aristotle on the Limits and Satisfactions of Political Life.” Interpretation 11 (2): 185–206.

Notes

  • Dana Jalbert Stauffer is a lecturer of government, The University of Texas at Austin, Austin, TX 78712.
  • 1 See also Salkever (1993, 1006); Saxonhouse (1985, 85; 1982, 203); cf. Nichols (1992, 15–16); Zuckert (1983). For a helpful review of the arguments of Swanson, Salkever, and Nichols, see Lindsay (1994). Mulgan (1994) provides a broad review of the schools of thought concerning Aristotle's view of women.
  • 2 All references to Aristotle's works are to the Oxford Classical Text editions. Translations are my own.
  • 3 For an excellent general discussion of Aristotle's treatment of the naturalness of the city, and of why Aristotle seeks to defend the naturalness of the city despite his awareness that it is not natural in all respects, see Ambler (1985).
  • 4 “Every step in improvement has been so invariably accompanied by a step made in raising the social position of women, that historians and philosophers have been led to adopt their elevation or debasement as on the whole the surest test and most correct measure of the civilization of a people or an age” (Mill 1988, 21–22).
  • 5 References to Iphigenia at Aulis are to the edition of Slavitt and Bovie (1998), with minor modifications of the translation.
  • 6 Lest we take this as an isolated incident, Euripides provides additional insight into the Greek treatment of women through the explanation given by Clytemnestra, Iphigenia's mother, of how their household was formed: Agamemnon not only killed Clytemnestra's first husband and took her by force, he tore her infant from her breast and smashed its head on the stones beneath his feet. Clytemnestra's brothers came to her defense, but her father decided, on reflection, to give her to Agamemnon as a wife (1342–52).
  • 7 References to Works and Days are to the translation of Lattimore (1991).
  • 8 At the same time, the prospect of women enjoying a higher status in well-developed political communities opens up dangers of its own. In Book II, Aristotle argues that the women of Sparta dominated the men, owing to the tendency of warlike societies to be obsessed with sex (1269b23–31). While Sparta's lawgiver imposed strict military training and rigorous moral discipline on Spartan men, he failed to assign any education to women, leaving them idle, undisciplined, and extravagant (1269b19–23, b39–a9). The influence of Sparta's corrupt women was so great, according to Aristotle, that it led to the downfall of that regime. “What difference does it make whether women rule, or whether the rulers are ruled by women?” he asks. “The results are the same” (1269b32–34). If it is not desirable for women to be slaves, it is also not desirable that they be tyrants. And yet, even in his characterization of the situation in Sparta, Aristotle is careful to distinguish the status of Spartan women from that of actual rule; as much as Spartan women may have “ruled” Spartan men, exerting influence over them as objects of erotic attraction, the fact remains that they were not themselves rulers—they did not share in actual political power.
  • 9 For an ancient expression of this point, see Oeconomicus IX.18–19.

Light pollution blots out Milky Way


Nicola Davis
The Guardian Weekly (print version), 17.06.2016

Natural wonder of the galaxy shrouded from view across EU and US

It has inspired astronomers, artists, musicians and poets but the Milky Way could become a distant memory for much of humanity, a new global atlas of light pollution suggests.

The study reveals that 60% of Europeans and almost 80% of North Americans cannot see the glowing band of our galaxy because of the effects of artificial lighting, while it is imperceptible to the entire populations of Singapore, Kuwait and Malta.

Overall, the Milky Way is no longer visible to more than one third of the world’s population. 

Lead author Fabio Falchi from the Light Pollution Science and Technology Institute in Italy said the situation was a “cultural loss of unprecedented magnitude.”

Chris Elvidge of the US National Oceanic and Atmospheric Administration and a co-author of the study, added that the times he has seen the Milky Way have been magical experiences. 

“Through our technology we’ve cut off that possibility for large numbers of people for multiple generations now,” he said. “We’ve lost something - but how do we place value on it?”

Described by the poet John Milton as “a broad and ample road whose dust is gold, and pavement stars,” the Milky Way is so obscured by the effects of modern lighting that areas around Hong Kong, Beijing and a large stretch of the east coast of America are among those where a glimpse of the galactic band is out of the question – a situation also found across much of Qatar, the Netherlands and Israel. In Belgium, it cannot be seen in 51% of the country.

“Humanity has enveloped our planet in a luminous fog that prevents most of Earth’s population from having the opportunity to observe our galaxy,” the authors write.

Published in the journal Science Advances by an international team of scientists, the research is based on data collected from space by Nasa’s Suomi National Polar-orbiting Partnership satellite, together with computer models of sky luminescence and professional and citizen science measurements of sky brightness taken from the ground.

The resulting global atlas reveals that large swaths of humanity experience light pollution, including more than 99% of people living in the US and the European Union. People living near Paris would have to travel 900km to areas such as central Scotland, Corsica or central Spain to find a region with night skies almost unpolluted by light, the authors say.


By contrast, the Central African Republic and Madagascar are among the countries least affected by light pollution, with nearly the entirety of Greenland boasting pristine skies.

“Until the advent of night-time lighting became really prominent in the 19th and 20th centuries, everybody would have been familiar with the Milky Way,” said Marek Kukula, public astronomer at the Royal Observatory in Greenwich, who was not involved in the study. “We see it in mythology about the sky, in all cultures around the world. It is one of the obvious components of the sky along with the stars, the planets and the moon.”

When light from our streetlamps, homes and other sources of illumination is thrown up into the sky it bounces off particles and moisture droplets in the atmosphere and is scattered, resulting in artificial “sky glow” - one of the key factors contributing to light pollution. The upshot is that spectacles like the Milky Way can become obscured.

“The night sky is part of our natural heritage. It is beautiful, it is awe-inspiring and being able to see it is a way for us to connect to the wider universe and understand our place in the natural world,” said Kukula. “If we lose that we have lost that direct connection with something much bigger than us.”

The situation could become worse. According to the new study, if all sodium lights are replaced with cool white LED lighting, artificial sky brightness seen across Europe could more than double as a result of the increase in blue-light emission.

“There are also biological consequences, not only on birds and insects and mammals, but also even on humans,” said Elvidge, pointing out that light pollution can disrupt the natural behaviour of animals.

Undeniable climate change facts

CNN's Jennifer Gray gives five reasons why 97% of scientists agree climate change is happening.




CNN's John Sutter was in Shishmaref, Alaska, to explain how climate change has affected the world this year.




The Corporation


The Corporation is a 2003 Canadian documentary film written by University of British Columbia law professor Joel Bakan, and directed by Mark Achbar and Jennifer Abbott. The documentary examines the modern-day corporation. Bakan wrote the book, The Corporation: The Pathological Pursuit of Profit and Power, during the filming of the documentary. (Wikipedia)



Related material:
The World According to Monsanto

Practical Reason, Phronēsis


Practical Reason

Prof. R. W. Hepburn
The Oxford Companion to Philosophy (2 ed.)

Argument, intelligence, insight, directed to a practical and especially a moral outcome. Historically, a contrast has often been made between theoretical and practical employments of reason. Aristotle's ‘practical syllogism’ concludes in an action rather than in a proposition or a new belief: and phronēsis (see book vi of Nicomachean Ethics) is the ability to use intellect practically. In discussions of motivation, furthermore, appeals to practical reason may seek to counter claims that only desire or inclination can ultimately prompt to action. A measure of disengagement from personal wish and want, a readiness to appraise one's acts by criteria which (rising above individual contingent desire) can be every rational moral agent's criteria, marks a crucial point of insertion of reason into practice. To Kant, the bare notion of being subject to a moral law suffices to indicate how practical reason can operate. Considering any moral policy, ask: Could it consistently function as universal law? The scope of practical reason, however, is much wider than this: practical reasoning must (for example) include the critical comparison and sifting of alleged human goods and ends, and the reflective establishing of their ranking and place in a life plan.
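
As a schematic illustration only (a standard textbook reconstruction, not part of Hepburn's entry), a practical syllogism can be set out so that what issues from the premises is the action itself rather than a further proposition:

\[
\begin{array}{ll}
\text{Major premise (the end):} & \text{Health is to be pursued.}\\
\text{Minor premise (the means):} & \text{Walking after dinner conduces to health.}\\
\text{Conclusion:} & \text{[the agent goes for a walk], an action rather than a new belief.}
\end{array}
\]
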
Bibliography
E. Millgram (ed.), Varieties of Practical Reasoning (Cambridge, Mass., 2001).
O. O'Neill, Constructions of Reason (Cambridge, 1989).

Phronēsis

Prof. C. C. W. Taylor
The Oxford Companion to Philosophy (2 ed.)

Practical wisdom. In ancient Greek the term (frequently interchangeable with sophia) has connotations of intelligence and soundness of judgement, especially in practical contexts. In Aristotle's ethics it is the complete excellence of the practical intellect, the counterpart of sophia in the theoretical sphere, comprising a true conception of the good life and the deliberative excellence necessary to realize that conception in practice via choice (prohairesis).
Bibliography
R. Sorabji, ‘Aristotle on the Rôle of the Intellect in Virtue’, Proceedings of the Aristotelian Society (1973–4); repr. in A. Rorty (ed.), Essays on Aristotle's Ethics (Berkeley, Calif., 1980).

Phronēsis


Simon Blackburn
The Oxford Dictionary of Philosophy (2 rev. ed.)

Practical wisdom, or knowledge of the proper ends of life, distinguished by Aristotle from theoretical knowledge and mere means-end reasoning, or craft, and itself a necessary and sufficient condition of virtue.

The Psychology of Climate Change Denial

The Australian Psychological Society
Whilst the vast majority of people claim to be concerned about the climate, it is also the case that large numbers of people avoid, minimise, switch off, or distance themselves from effectively engaging with the problems. A small but noisy minority actively deny that there even is a problem. How do we understand this, and how do we solve the “It’s Not My Problem” problem?
One of the ways of dealing with denial is to raise awareness of the scientific consensus on climate change.  The importance of this cannot be overstated. Typically, the general public think around 50% of climate scientists agree that humans are causing global warming. The reality is that 97% of scientists agree.
Psychologists who have specialised in understanding science denial have found that the best way to respond to this is to use a branch of psychology dating back to the 1960s known as “inoculation theory” (See Cook, 2015). The way to neutralise misinformation is to expose people to a weak form of the misinformation. The way to achieve this is to explain the fallacy employed by the myth. Once people understand the techniques used to distort the science, they can reconcile the myth with the fact.
With respect to climate change, science denial can be stopped by first explaining the psychological research into why and how people deny climate science.
Having laid the framework, you then show people how to examine the fallacies behind the most common climate myths. There are five common techniques that are used to create myths about climate change.

  • Fake experts
  • Logical fallacies
  • Impossible expectations
  • Cherry picking
  • Conspiracy theories



Big data, Google and the end of free will

Yuval Noah Harari, tenured professor at the Department of History of The Hebrew University of Jerusalem
Forget about listening to ourselves. In the age of data, algorithms have the answer, writes the historian Yuval Noah Harari

For thousands of years humans believed that authority came from the gods. Then, during the modern era, humanism gradually shifted authority from deities to people. Jean-Jacques Rousseau summed up this revolution in Emile, his 1762 treatise on education. When looking for the rules of conduct in life, Rousseau found them “in the depths of my heart, traced by nature in characters which nothing can efface. I need only consult myself with regard to what I wish to do; what I feel to be good is good, what I feel to be bad is bad.” Humanist thinkers such as Rousseau convinced us that our own feelings and desires were the ultimate source of meaning, and that our free will was, therefore, the highest authority of all.

Now, a fresh shift is taking place. Just as divine authority was legitimised by religious mythologies, and human authority was legitimised by humanist ideologies, so high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimises the authority of algorithms and Big Data. This novel creed may be called “Dataism”. In its extreme form, proponents of the Dataist worldview perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system — and then merge into it.

We are already becoming tiny chips inside a giant system that nobody really understands. Every day I absorb countless data bits through emails, phone calls and articles; process the data; and transmit back new bits through more emails, phone calls and articles. I don’t really know where I fit into the great scheme of things, and how my bits of data connect with the bits produced by billions of other humans and computers. I don’t have time to find out, because I am too busy answering emails. This relentless dataflow sparks new inventions and disruptions that nobody plans, controls or comprehends.

But no one needs to understand. All you need to do is answer your emails faster. Just as free-market capitalists believe in the invisible hand of the market, so Dataists believe in the invisible hand of the dataflow. As the global data-processing system becomes all-knowing and all-powerful, so connecting to the system becomes the source of all meaning. The new motto says: “If you experience something — record it. If you record something — upload it. If you upload something — share it.”

Dataists further believe that given enough biometric data and computing power, this all-encompassing system could understand humans much better than we understand ourselves. Once that happens, humans will lose their authority, and humanist practices such as democratic elections will become as obsolete as rain dances and flint knives.

When Michael Gove announced his shortlived candidacy to become Britain’s prime minister in the wake of June’s Brexit vote, he explained: “In every step in my political life I have asked myself one question, ‘What is the right thing to do? What does your heart tell you?’” That’s why, according to Gove, he had fought so hard for Brexit, and that’s why he felt compelled to backstab his erstwhile ally Boris Johnson and bid for the alpha-dog position himself — because his heart told him to do it.

Gove is not alone in listening to his heart in critical moments. For the past few centuries humanism has seen the human heart as the supreme source of authority not merely in politics but in every other field of activity. From infancy we are bombarded with a barrage of humanist slogans counselling us: “Listen to yourself, be true to yourself, trust yourself, follow your heart, do what feels good.”

In politics, we believe that authority depends on the free choices of ordinary voters. In market economics, we maintain that the customer is always right. Humanist art thinks that beauty is in the eye of the beholder; humanist education teaches us to think for ourselves; and humanist ethics advise us that if it feels good, we should go ahead and do it.

Of course, humanist ethics often run into difficulties in situations when something that makes me feel good makes you feel bad. For example, every year for the past decade the Israeli LGBT community has held a gay parade in the streets of Jerusalem. It is a unique day of harmony in this conflict-riven city, because it is the one occasion when religious Jews, Muslims and Christians suddenly find a common cause — they all fume in accord against the gay parade. What’s really interesting, though, is the argument the religious fanatics use. They don’t say: “You shouldn’t hold a gay parade because God forbids homosexuality.” Rather, they explain to every available microphone and TV camera that “seeing a gay parade passing through the holy city of Jerusalem hurts our feelings. Just as gay people want us to respect their feelings, they should respect ours.” It doesn’t matter what you think about this particular conundrum; it is far more important to understand that in a humanist society, ethical and political debates are conducted in the name of conflicting human feelings, rather than in the name of divine commandments.

Yet humanism is now facing an existential challenge and the idea of “free will” is under threat. Scientific insights into the way our brains and bodies work suggest that our feelings are not some uniquely human spiritual quality. Rather, they are biochemical mechanisms that all mammals and birds use in order to make decisions by quickly calculating probabilities of survival and reproduction.

Contrary to popular opinion, feelings aren’t the opposite of rationality; they are evolutionary rationality made flesh. When a baboon, giraffe or human sees a lion, fear arises because a biochemical algorithm calculates the relevant data and concludes that the probability of death is high. Similarly, feelings of sexual attraction arise when other biochemical algorithms calculate that a nearby individual offers a high probability for successful mating. These biochemical algorithms have evolved and improved through millions of years of evolution. If the feelings of some ancient ancestor made a mistake, the genes shaping these feelings did not pass on to the next generation.

Even though humanists were wrong to think that our feelings reflected some mysterious “free will”, up until now humanism still made very good practical sense. For although there was nothing magical about our feelings, they were nevertheless the best method in the universe for making decisions — and no outside system could hope to understand my feelings better than me. Even if the Catholic Church or the Soviet KGB spied on me every minute of every day, they lacked the biological knowledge and the computing power necessary to calculate the biochemical processes shaping my desires and choices. Hence, humanism was correct in telling people to follow their own heart. If you had to choose between listening to the Bible and listening to your feelings, it was much better to listen to your feelings. The Bible represented the opinions and biases of a few priests in ancient Jerusalem. Your feelings, in contrast, represented the accumulated wisdom of millions of years of evolution that have passed the most rigorous quality-control tests of natural selection.

However, as the Church and the KGB give way to Google and Facebook, humanism loses its practical advantages. For we are now at the confluence of two scientific tidal waves. On the one hand, biologists are deciphering the mysteries of the human body and, in particular, of the brain and of human feelings. At the same time, computer scientists are giving us unprecedented data-processing power. When you put the two together, you get external systems that can monitor and understand my feelings much better than I can. Once Big Data systems know me better than I know myself, authority will shift from humans to algorithms. Big Data could then empower Big Brother.

This has already happened in the field of medicine. The most important medical decisions in your life are increasingly based not on your feelings of illness or wellness, or even on the informed predictions of your doctor — but on the calculations of computers who know you better than you know yourself. A recent example of this process is the case of the actress Angelina Jolie. In 2013, Jolie took a genetic test that proved she was carrying a dangerous mutation of the BRCA1 gene. According to statistical databases, women carrying this mutation have an 87 per cent probability of developing breast cancer. Although at the time Jolie did not have cancer, she decided to pre-empt the disease and undergo a double mastectomy. She didn’t feel ill but she wisely decided to listen to the computer algorithms. “You may not feel anything is wrong,” said the algorithms, “but there is a time bomb ticking in your DNA. Do something about it — now!”

What is already happening in medicine is likely to take place in more and more fields. It starts with simple things, like which book to buy and read. How do humanists choose a book? They go to a bookstore, wander between the aisles, flip through one book and read the first few sentences of another, until some gut feeling connects them to a particular tome. Dataists use Amazon. As I enter the Amazon virtual store, a message pops up and tells me: “I know which books you liked in the past. People with similar tastes also tend to love this or that new book.”
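
The “people with similar tastes” message Harari paraphrases is, in essence, collaborative filtering. The sketch below (in Python, with invented names and ratings; Amazon's actual systems are proprietary and far more elaborate) shows the kind of calculation involved: books the reader has not yet rated are scored using other readers' ratings, weighted by how similar those readers' past ratings are to the reader's own.

from math import sqrt

# Toy ratings: reader -> {book title: rating out of 5}. All data invented for illustration.
ratings = {
    "me":    {"Nicomachean Ethics": 5, "Silent Spring": 4, "Emile": 2},
    "user2": {"Nicomachean Ethics": 5, "Silent Spring": 5, "Emile": 1, "Walden": 5},
    "user3": {"Nicomachean Ethics": 1, "Silent Spring": 2, "Emile": 5, "The Prince": 4},
}

def cosine_similarity(a, b):
    """Similarity of two readers, computed over the books both have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = sqrt(sum(a[t] ** 2 for t in shared))
    norm_b = sqrt(sum(b[t] ** 2 for t in shared))
    return dot / (norm_a * norm_b)

def recommend(target, ratings):
    """Rank books the target has not read by similarity-weighted ratings of other readers."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine_similarity(ratings[target], their_ratings)
        for title, rating in their_ratings.items():
            if title not in ratings[target]:
                scores[title] = scores.get(title, 0.0) + sim * rating
                weights[title] = weights.get(title, 0.0) + sim
    return sorted(
        ((scores[t] / weights[t], t) for t in scores if weights[t] > 0),
        reverse=True,
    )

print(recommend("me", ratings))  # -> [(5.0, 'Walden'), (4.0, 'The Prince')]

The point of the sketch is only that the recommendation falls out of other people's recorded behaviour, not out of any introspection on the reader's part.
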

This is just the beginning. Devices such as Amazon’s Kindle are able constantly to collect data on their users while they are reading books. Your Kindle can monitor which parts of a book you read quickly, and which slowly; on which page you took a break, and on which sentence you abandoned the book, never to pick it up again. If Kindle was to be upgraded with face recognition software and biometric sensors, it would know how each sentence influenced your heart rate and blood pressure. It would know what made you laugh, what made you sad, what made you angry. Soon, books will read you while you are reading them. And whereas you quickly forget most of what you read, computer programs need never forget. Such data should eventually enable Amazon to choose books for you with uncanny precision. It will also allow Amazon to know exactly who you are, and how to press your emotional buttons.

Take this to its logical conclusion, and eventually people may give algorithms the authority to make the most important decisions in their lives, such as who to marry. In medieval Europe, priests and parents had the authority to choose your mate for you. In humanist societies we give this authority to our feelings. In a Dataist society I will ask Google to choose. “Listen, Google,” I will say, “both John and Paul are courting me. I like both of them, but in a different way, and it’s so hard to make up my mind. Given everything you know, what do you advise me to do?”

And Google will answer: “Well, I know you from the day you were born. I have read all your emails, recorded all your phone calls, and know your favourite films, your DNA and the entire biometric history of your heart. I have exact data about each date you went on, and I can show you second-by-second graphs of your heart rate, blood pressure and sugar levels whenever you went on a date with John or Paul. And, naturally enough, I know them as well as I know you. Based on all this information, on my superb algorithms and on decades’ worth of statistics about millions of relationships — I advise you to go with John, with an 87 per cent probability of being more satisfied with him in the long run.

“Indeed, I know you so well that I even know you don’t like this answer. Paul is much more handsome than John and, because you give external appearances too much weight, you secretly wanted me to say ‘Paul’. Looks matter, of course, but not as much as you think. Your biochemical algorithms — which evolved tens of thousands of years ago in the African savannah — give external beauty a weight of 35 per cent in their overall rating of potential mates. My algorithms — which are based on the most up-to-date studies and statistics — say that looks have only a 14 per cent impact on the long-term success of romantic relationships. So, even though I took Paul’s beauty into account, I still tell you that you would be better off with John.”
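
Stripped of the rhetoric, the advice Google is imagined to give is a weighted scoring model. A toy sketch (all scores and weights invented, loosely echoing the essay's 35 per cent versus 14 per cent contrast) shows how changing the weight placed on looks alone can flip which suitor comes out on top:

# Hypothetical ratings on two crude dimensions (0-10); every number here is invented for illustration.
suitors = {
    "John": {"looks": 4, "long_term_compatibility": 9},
    "Paul": {"looks": 10, "long_term_compatibility": 7},
}

def rank(suitors, looks_weight):
    """Weighted linear score; whatever weight is not given to looks goes to compatibility."""
    other_weight = 1.0 - looks_weight
    return sorted(
        suitors,
        key=lambda name: looks_weight * suitors[name]["looks"]
        + other_weight * suitors[name]["long_term_compatibility"],
        reverse=True,
    )

print(rank(suitors, looks_weight=0.35))  # the evolved weighting: ['Paul', 'John']
print(rank(suitors, looks_weight=0.14))  # the algorithm's weighting: ['John', 'Paul']

Google, in Harari's telling, would of course be weighing thousands of such signals rather than two, but the logic of overriding one weighting with another is the same.
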



Google won’t have to be perfect. It won’t have to be correct all the time. It will just have to be better on average than me. And that is not so difficult, because most people don’t know themselves very well, and most people often make terrible mistakes in the most important decisions of their lives.

The Dataist worldview is very attractive to politicians, business people and ordinary consumers because it offers groundbreaking technologies and immense new powers. For all the fear of losing our privacy and our free choice, when consumers have to choose between keeping their privacy and having access to far superior healthcare — most will choose health.

For scholars and intellectuals, Dataism promises to provide the scientific Holy Grail that has eluded us for centuries: a single overarching theory that unifies all the scientific disciplines from musicology through economics, all the way to biology. According to Dataism, Beethoven’s Fifth Symphony, a stock-exchange bubble and the flu virus are just three patterns of dataflow that can be analysed using the same basic concepts and tools. This idea is extremely attractive. It gives all scientists a common language, builds bridges over academic rifts and easily exports insights across disciplinary borders.

Of course, like previous all-encompassing dogmas, Dataism, too, may be founded on a misunderstanding of life. In particular, Dataism has no answer to the notorious “hard problem of consciousness”. At present we are very far from explaining consciousness in terms of data-processing. Why is it that when billions of neurons in the brain fire particular signals to one another, a subjective feeling of love or fear or anger appears? We don’t have a clue.

But even if Dataism is wrong about life, it may still conquer the world. Many previous creeds gained enormous popularity and power despite their factual mistakes. If Christianity and communism could do it, why not Dataism? Dataism has especially good prospects, because it is currently spreading across all scientific disciplines. A unified scientific paradigm may easily become an unassailable dogma.

If you don’t like this, and you want to stay beyond the reach of the algorithms, there is probably just one piece of advice to give you, the oldest in the book: know thyself. In the end, it’s a simple empirical question. As long as you have greater insight and self-knowledge than the algorithms, your choices will still be superior and you will keep at least some authority in your hands. If the algorithms nevertheless seem poised to take over, it is mainly because most human beings hardly know themselves at all.

Originally published in the Financial Times, August 26, 2016