
Virtue Ethics: A Survey




Nafsika Athanassoulis
Email: n.athanassoulis@keele.ac.uk
Keele University, United Kingdom
Originally published: August 28, 2004

Virtue ethics is a broad term for theories that emphasize the role of character and virtue in moral philosophy rather than either doing one’s duty or acting in order to bring about good consequences. A virtue ethicist is likely to give you this kind of moral advice: “Act as a virtuous person would act in your situation.”

Most virtue ethics theories take their inspiration from Aristotle, who declared that a virtuous person is someone who has ideal character traits. These traits derive from natural internal tendencies, but need to be nurtured; however, once established, they will become stable. For example, a virtuous person is someone who is kind across many situations over a lifetime because that is her character and not because she wants to maximize utility or gain favors or simply do her duty. Unlike deontological and consequentialist theories, theories of virtue ethics do not aim primarily to identify universal principles that can be applied in any moral situation. Virtue ethics theories also deal with wider questions—“How should I live?”, “What is the good life?”, and “What are proper family and social values?”

Since its revival in the twentieth century, virtue ethics has been developed in three main directions: Eudaimonism, agent-based theories, and the ethics of care. Eudaimonism bases virtues in human flourishing, where flourishing is equated with performing one’s distinctive function well. In the case of humans, Aristotle argued that our distinctive function is reasoning, and so the life “worth living” is one in which we reason well. An agent-based theory emphasizes that virtues are determined by common-sense intuitions that we as observers judge to be admirable traits in other people. The third branch of virtue ethics, the ethics of care, was proposed predominantly by feminist thinkers. It challenges the idea that ethics should focus solely on justice and autonomy; it argues that more feminine traits, such as caring and nurturing, should also be considered.

Here are some common objections to virtue ethics. Its theories provide a self-centered conception of ethics because human flourishing is seen as an end in itself and does not sufficiently consider the extent to which our actions affect other people. Virtue ethics also does not provide guidance on how we should act, as there are no clear principles for guiding action other than “act as a virtuous person would act given the situation.” Lastly, the ability to cultivate the right virtues will be affected by a number of factors beyond a person’s control, such as one’s education, society, friends, and family. If moral character is so reliant on luck, what role does this leave for appropriate praise and blame of the person?

This article looks at how virtue ethics originally defined itself by calling for a change from the dominant normative theories of deontology and consequentialism. It goes on to examine some common objections raised against virtue ethics and then looks at a sample of fully developed accounts of virtue ethics and responses.

Table of Contents

1. Changing Modern Moral Philosophy

a. Anscombe
b. Williams
c. MacIntyre

2. A Rival for Deontology and Utilitarianism

a. How Should One Live?
b. Character and Virtue
c. Anti-Theory and the Uncodifiability of Ethics
d. Conclusion

3. Virtue Ethical Theories

a. Eudaimonism
b. Agent-Based Accounts of Virtue Ethics
c. The Ethics of Care
d. Conclusion

4. Objections to Virtue Ethics

a. Self-Centeredness
b. Action-Guiding
c. Moral Luck

5. Virtue in Deontology and Consequentialism

6. References and Further Reading

a. Changing Modern Moral Philosophy
b. Overviews of Virtue Ethics
c. Varieties of Virtue Ethics
d. Collections on Virtue Ethics
e. Virtue and Moral Luck
f. Virtue in Deontology and Consequentialism

1. Changing Modern Moral Philosophy

a. Anscombe

In 1958 Elizabeth Anscombe published a paper titled “Modern Moral Philosophy” that changed the way we think about normative theories. She criticized modern moral philosophy’s preoccupation with a law conception of ethics. A law conception of ethics deals exclusively with obligation and duty. Among the theories she criticized for their reliance on universally applicable principles were J. S. Mill’s utilitarianism and Kant’s deontology. These theories rely on rules of morality claimed to be applicable to any moral situation (that is, Mill’s Greatest Happiness Principle and Kant’s Categorical Imperative). This approach to ethics relies on universal principles and results in a rigid moral code. Further, these rigid rules are based on a notion of obligation that is meaningless in modern, secular society because they make no sense without assuming the existence of a lawgiver—an assumption we no longer make.

In its place, Anscombe called for a return to a different way of doing philosophy. Taking her inspiration from Aristotle, she called for a return to concepts such as character, virtue and flourishing. She also emphasized the importance of the emotions and understanding moral psychology. With the exception of this emphasis on moral psychology, Anscombe’s recommendations that we place virtue more centrally in our understanding of morality were taken up by a number of philosophers. The resulting body of theories and ideas has come to be known as virtue ethics.

Anscombe’s critical and confrontational approach set the scene for how virtue ethics was to develop in its first few years. The philosophers who took up Anscombe’s call for a return to virtue saw their task as being to define virtue ethics in terms of what it is not—that is, how it differs from and avoids the mistakes made by the other normative theories. Before we go on to consider this in detail, we need to take a brief look at two other philosophers, Bernard Williams and Alasdair MacIntyre, whose call for theories of virtue was also instrumental in changing our understanding of moral philosophy.

b. Williams

Bernard Williams’ philosophical work has always been characterized by its ability to draw our attention to a previously unnoticed but now impressively fruitful area for philosophical discussion. Williams criticized how moral philosophy had developed. He drew a distinction between morality and ethics. Morality is characterized mainly by the work of Kant and notions such as duty and obligation. Crucially associated with the notion of obligation is the notion of blame. Blame is appropriate because we are obliged to behave in a certain way and if we are capable of conforming our conduct and fail to, we have violated our duty.

Williams was also concerned that such a conception of morality rejects the possibility of luck. If morality is about what we are obliged to do, then there is no room for what is outside of our control. But sometimes attainment of the good life is dependent on things outside of our control.

In response, Williams takes a wider concept, ethics, and rejects the narrow and restricting concept of morality. Ethics encompasses many emotions that are rejected by morality as irrelevant. Ethical concerns are wider, encompassing friends, family and society and make room for ideals such as social justice. This view of ethics is compatible with the Ancient Greek interpretation of the good life as found in Aristotle and Plato.

c. MacIntyre

Finally, the ideas of Alasdair MacIntyre acted as a stimulus for the increased interest in virtue. MacIntyre’s project is as deeply critical of many of the same notions, like ought, as Anscombe’s and Williams’. However, he also attempts to give an account of virtue. MacIntyre looks at a large number of historical accounts of virtue that differ in their lists of the virtues and have incompatible theories of the virtues. He concludes that these differences are attributable to different practices that generate different conceptions of the virtues. Each account of virtue requires a prior account of social and moral features in order to be understood. Thus, in order to understand Homeric virtue you need to look at its social role in Greek society. Virtues, then, are exercised within practices that are coherent, social forms of activity and seek to realize goods internal to the activity. The virtues enable us to achieve these goods. There is an end (or telos) that transcends all particular practices and it constitutes the good of a whole human life. That end is the virtue of integrity or constancy.

These three writers have all, in their own way, argued for a radical change in the way we think about morality. Whether they call for a change of emphasis away from obligation, a return to a broad understanding of ethics, or a unifying tradition of practices that generate virtues, their dissatisfaction with the state of modern moral philosophy laid the foundation for change.

2. A Rival for Deontology and Utilitarianism

There are a number of different accounts of virtue ethics. It is an emerging concept and was initially defined by what it is not rather than what it is. The next section examines the claims virtue ethicists initially made that set the theory up as a rival to deontology and consequentialism.

a. How Should One Live?

Moral theories are concerned with right and wrong behavior. This subject area of philosophy is unavoidably tied up with practical concerns about how we should act. However, virtue ethics changes the kind of question we ask about ethics. Where deontology and consequentialism concern themselves with the right action, virtue ethics is concerned with the good life and what kinds of persons we should be. “What is the right action?” is a significantly different question from “How should I live? What kind of person should I be?” Where the first type of question deals with specific dilemmas, the second is a question about an entire life. Instead of asking what the right action is here and now, virtue ethics asks what kind of person one should be in order to get it right all the time.

Whereas deontology and consequentialism are based on rules that try to give us the right action, virtue ethics makes central use of the concept of character. The answer to “How should one live?” is that one should live virtuously, that is, have a virtuous character.

b. Character and Virtue

Modern virtue ethics takes its inspiration from the Aristotelian understanding of character and virtue. Aristotelian character is, importantly, about a state of being. It’s about having the appropriate inner states. For example, the virtue of kindness involves the right sort of emotions and inner states with respect to our feelings towards others. Character is also about doing. Aristotelian theory is a theory of action, since having the virtuous inner dispositions will also involve being moved to act in accordance with them. Realizing that kindness is the appropriate response to a situation and feeling appropriately kindly disposed will also lead to a corresponding attempt to act kindly.

Another distinguishing feature of virtue ethics is that character traits are stable, fixed, and reliable dispositions. If an agent possesses the character trait of kindness, we would expect him or her to act kindly in all sorts of situations, towards all kinds of people, and over a long period of time, even when it is difficult to do so. A person with a certain character can be relied upon to act consistently over time.

It is important to recognize that moral character develops over a long period of time. People are born with all sorts of natural tendencies. Some of these natural tendencies will be positive, such as a placid and friendly nature, and some will be negative, such as an irascible and jealous nature. These natural tendencies can be encouraged and developed or discouraged and thwarted by the influences one is exposed to when growing up. There are a number of factors that may affect one’s character development, such as one’s parents, teachers, peer group, role-models, the degree of encouragement and attention one receives, and exposure to different situations. Our natural tendencies, the raw material we are born with, are shaped and developed through a long and gradual process of education and habituation.

Moral education and development is a major part of virtue ethics. Moral development, at least in its early stages, relies on the availability of good role models. The virtuous agent acts as a role model and the student of virtue emulates his or her example. Initially this is a process of habituating oneself in right action. Aristotle advises us to perform just acts because this way we become just. The student of virtue must develop the right habits, so that he tends to perform virtuous acts. Virtue is not itself a habit. Habituation is merely an aid to the development of virtue, but true virtue requires choice, understanding, and knowledge. The virtuous agent doesn't act justly merely out of an unreflective response, but has come to recognize the value of virtue and why it is the appropriate response. Virtue is chosen knowingly for its own sake.

The development of moral character may take a whole lifetime. But once it is firmly established, one will act consistently, predictably and appropriately in a variety of situations.

Aristotelian virtue is defined in Book II of the Nicomachean Ethics as a purposive disposition, lying in a mean and being determined by the right reason. As discussed above, virtue is a settled disposition. It is also a purposive disposition: a virtuous agent chooses virtuous action knowingly and for its own sake. It is not enough to act kindly by accident, unthinkingly, or because everyone else is doing so; you must act kindly because you recognize that this is the right way to behave. Note here that although habituation is a tool for character development, it is not equivalent to virtue; virtue requires conscious choice and affirmation.

Virtue “lies in a mean” because the right response to each situation is neither too much nor too little. Virtue is the appropriate response to different situations and different agents. The virtues are associated with feelings. For example: courage is associated with fear, modesty with the feeling of shame, and friendliness with feelings about social conduct. The virtue lies in a mean because it involves displaying the mean amount of emotion, where “mean” stands for appropriate. (This does not imply that the right amount is a modest amount. Sometimes quite a lot may be the appropriate amount of emotion to display, as in the case of righteous indignation.) The mean amount is neither too much nor too little and is sensitive to the requirements of the person and the situation.

Finally, virtue is determined by the right reason. Virtue requires the right desire and the right reason. To act from the wrong reason is to act viciously. On the other hand, the agent can try to act from the right reason, but fail because he or she has the wrong desire. The virtuous agent acts effortlessly, perceives the right reason, has the harmonious right desire, and has an inner state of virtue that flows smoothly into action. The virtuous agent can act as an exemplar of virtue to others.

It is important to recognize that this is a perfunctory account of ideas that are developed in great detail in Aristotle. They are related briefly here as they have been central to virtue ethics’ claim to put forward a unique and rival account to other normative theories. Modern virtue ethicists have developed their theories around a central role for character and virtue and claim that this gives them a unique understanding of morality. The emphasis on character development and the role of the emotions allows virtue ethics to have a plausible account of moral psychology—which is lacking in deontology and consequentialism. Virtue ethics can avoid the problematic concepts of duty and obligation in favor of the rich concept of virtue. Judgments of virtue are judgments of a whole life rather than of one isolated action.

c. Anti-Theory and the Uncodifiability of Ethics

In the first book of the Nicomachean Ethics, Aristotle warns us that the study of ethics is imprecise. Virtue ethicists have challenged consequentialist and deontological theories for failing to accommodate this insight. Both types of theory rely on one rule or principle that is expected to apply to all situations. Because their principles are inflexible, they cannot accommodate the complexity of all the moral situations that we are likely to encounter.

We are constantly faced with moral problems. For example: Should I tell my friend the truth about her lying boyfriend? Should I cheat in my exams? Should I have an abortion? Should I save the drowning baby? Should we separate the Siamese twins? Should I join the fuel protests? All these problems are different and it seems unlikely that we will find the solution to all of them by applying the same rule. If the problems are varied, we should not expect to find their solution in one rigid and inflexible rule that does not admit exception. If the nature of the thing we are studying is diverse and changing, then the answer cannot be any good if it is inflexible and unyielding. The answer to “how should I live?” cannot be found in one rule. At best, for virtue ethics, there can be rules of thumb—rules that are true for the most part, but may not always be the appropriate response.

The doctrine of the mean captures exactly this idea. The virtuous response cannot be captured in a rule or principle that an agent can simply learn and then apply in order to act virtuously. Knowing virtue is a matter of experience, sensitivity, the ability to perceive, the ability to reason practically, and so on, and it takes a long time to develop. The idea that ethics cannot be captured in one rule or principle is the “uncodifiability of ethics thesis.” Ethics is too diverse and imprecise to be captured in a rigid code, so we must approach morality with a theory that is as flexible and as situation-responsive as the subject matter itself. As a result, some virtue ethicists see themselves as anti-theorists, rejecting theories that systematically attempt to capture and organize all matters of practical or ethical importance.

d. Conclusion

Virtue ethics initially emerged as a rival account to deontology and consequentialism. It developed from dissatisfaction with the notions of duty and obligation and their central roles in understanding morality. It also grew out of an objection to the use of rigid moral rules and principles and their application to diverse and different moral situations. Characteristically, virtue ethics makes a claim about the central role of virtue and character in its understanding of moral life and uses it to answer the questions “How should I live? What kind of person should I be?” Consequentialist theories are outcome-based and Kantian theories are duty-based. Virtue ethics is character-based.

3. Virtue Ethical Theories

Raising objections to other normative theories and defining itself in opposition to the claims of others was the first stage in the development of virtue ethics. Virtue ethicists then took up the challenge of developing full-fledged accounts of virtue that could stand on their own merits rather than simply criticize consequentialism and deontology. These accounts have been predominantly influenced by the Aristotelian understanding of virtue. While some versions of virtue ethics take inspiration from Plato’s, the Stoics’, Aquinas’, Hume’s and Nietzsche’s accounts of virtue and ethics, Aristotelian conceptions still dominate the field. There are three main strands of development for virtue ethics: Eudaimonism, agent-based theories, and the ethics of care.

a. Eudaimonism

“Eudaimonia” is an Aristotelian term loosely (and inadequately) translated as happiness. To understand its role in virtue ethics we look to Aristotle’s function argument. Aristotle recognizes that actions are not pointless: every action aims at some good. For example, the doctor’s vaccination of the baby aims at the baby’s health, the English tennis player Tim Henman works on his serve so that he can win Wimbledon, and so on. Furthermore, some things are done for their own sake (ends in themselves) and some things are done for the sake of other things (means to other ends). Aristotle claims that all the things that are ends in themselves also contribute to a wider end, an end that is the greatest good of all. That good is eudaimonia. Eudaimonia is happiness, contentment, and fulfillment; it is the name of the best kind of life, which is an end in itself and a means to live and fare well.

Aristotle then observes that where a thing has a function, the good of the thing consists in performing its function well. For example, the knife has a function, to cut, and it performs its function well when it cuts well. This argument is applied to man: man has a function, and the good man is the man who performs his function well. Man’s function is what is peculiar to him and sets him apart from other beings—reason. Therefore, the function of man is reason, and the life that is distinctive of humans is the life in accordance with reason. If the function of man is reason, then the good man is the man who reasons well. This is the life of excellence or of eudaimonia. Eudaimonia is the life of virtue—activity in accordance with reason, man’s highest function.

The importance of this point for eudaimonist virtue ethics is that it reverses the relationship between virtue and rightness. A utilitarian could accept the value of the virtue of kindness, but only because someone with a kind disposition is likely to bring about consequences that will maximize utility. So the virtue is only justified because of the consequences it brings about. In eudaimonist virtue ethics the virtues are justified because they are constitutive elements of eudaimonia (that is, human flourishing and wellbeing), which is good in itself.

Rosalind Hursthouse developed one detailed account of eudaimonist virtue ethics. Hursthouse argues that the virtues make their possessor a good human being. All living things can be evaluated qua specimens of their natural kind. Like Aristotle, Hursthouse argues that the characteristic way of human beings is the rational way: by their very nature human beings act rationally, a characteristic that allows us to make decisions and to change our character and allows others to hold us responsible for those decisions. Acting virtuously—that is, acting in accordance with reason—is acting in the way characteristic of the nature of human beings and this will lead to eudaimonia. This means that the virtues benefit their possessor. One might think that the demands of morality conflict with our self-interest, as morality is other-regarding, but eudaimonist virtue ethics presents a different picture. Human nature is such that virtue is not exercised in opposition to self-interest, but rather is the quintessential component of human flourishing. The good life for humans is the life of virtue and therefore it is in our interest to be virtuous. It is not just that the virtues lead to the good life (e.g. if you are good, you will be rewarded), but rather a virtuous life is the good life because the exercise of our rational capacities and virtue is its own reward.

It is important to note, however, that there have been many different ways of developing this idea of the good life and virtue within virtue ethics. Philippa Foot, for example, grounds the virtues in what is good for human beings. The virtues are beneficial to their possessor or to the community (note that this is similar to MacIntyre’s argument that the virtues enable us to achieve goods within human practices). Rather than being constitutive of the good life, the virtues are valuable because they contribute to it.

Another account is given by perfectionists such as Thomas Hurka, who derive the virtues from the characteristics that most fully develop our essential properties as human beings. Individuals are judged against a standard of perfection that reflects very rare or ideal levels of human achievement. The virtues realize our capacity for rationality and therefore contribute to our well-being and perfection in that sense.

b. Agent-Based Accounts of Virtue Ethics

Not all accounts of virtue ethics are eudaimonist. Michael Slote has developed an account of virtue based on our common-sense intuitions about which character traits are admirable. Slote makes a distinction between agent-focused and agent-based theories. Agent-focused theories understand the moral life in terms of what it is to be a virtuous individual, where the virtues are inner dispositions. Aristotelian theory is an example of an agent-focused theory. By contrast, agent-based theories are more radical in that their evaluation of actions is dependent on ethical judgments about the inner life of the agents who perform those actions. There are a variety of human traits that we find admirable, such as benevolence, kindness, and compassion, and we can identify these by looking at the people we admire, our moral exemplars.

c. The Ethics of Care

Finally, the Ethics of Care is another influential version of virtue ethics. Developed mainly by feminist writers, such as Annette Baier, this account of virtue ethics is motivated by the thought that men think in masculine terms such as justice and autonomy, whereas women think in feminine terms such as caring. These theorists call for a change in how we view morality and the virtues, shifting towards virtues exemplified by women, such as taking care of others, patience, the ability to nurture, self-sacrifice, etc. These virtues have been marginalized because society has not adequately valued the contributions of women. Writings in this area do not always explicitly make a connection with virtue ethics. Much of their discussion of specific virtues and their relation to social practices and moral education, however, is central to virtue ethics.

d. Conclusion

There are many different accounts of virtue ethics. The three types discussed above are representative of the field. There is a large field, however, of diverse writers developing other theories of virtue. For example, Christine Swanton has developed a pluralist account of virtue ethics with connections to Nietzsche. Nietzsche’s theory emphasizes the inner self and provides a possible response to the call for a better understanding of moral psychology. Swanton develops an account of self-love that allows her to distinguish true virtue from closely related vices, e.g. self-confidence from vanity or ostentation, virtuous and vicious forms of perfectionism, etc. She also makes use of the Nietzschean ideas of creativity and expression to show how different modes of acknowledgement are appropriate to the virtues.

Historically, accounts of virtue have varied widely. Homeric virtue should be understood within the society in which it occurred. The standard of excellence was determined from within the particular society, and accountability was determined by one’s role within that society. Also, one’s worth was relative to that of others, and competition was crucial in determining it.

Other accounts of virtue ethics are inspired by Christian writers such as Aquinas and Augustine (see the work of David Oderberg). Aquinas’ account of the virtues is distinctive because it allows a role for the will. One’s will can be directed by the virtues, and we are subject to the natural law because we have the potential to grasp the truth of practical judgments. To possess a virtue is to have the will to apply it and the knowledge of how to do so. Humans are susceptible to evil, and acknowledging this allows us to be receptive to the virtues of faith, hope and charity—virtues of love that are significantly different from Aristotle’s virtues.

The three types of theories covered above developed over long periods, answering many questions and often changing in response to criticisms. For example, Michael Slote has moved away from agent-based virtue ethics to a more Humean-inspired sentimentalist account of virtue ethics. Humean accounts of virtue ethics rely on the motive of benevolence and the idea that actions should be evaluated by the sentiments they express. Admirable sentiments are those that express a concern for humanity. The interested reader must seek out the work of these writers in the original to get a full appreciation of the depth and detail of their theories.

4. Objections to Virtue Ethics

Much of what has been written on virtue ethics has been in response to criticisms of the theory. The following section presents three objections and possible responses, based on broad ideas held in common by most accounts of virtue ethics.

a. Self-Centeredness

Morality is supposed to be about other people. It deals with our actions to the extent that they affect other people. Moral praise and blame are attributed on the grounds of an evaluation of our behavior towards others and the ways in which we exhibit, or fail to exhibit, a concern for the well-being of others. Virtue ethics, according to this objection, is self-centered because its primary concern is with the agent’s own character. Virtue ethics seems to be essentially interested in the acquisition of the virtues as part of the agent’s own well-being and flourishing. Morality requires us to consider others for their own sake and not because they may benefit us. There seems to be something wrong with aiming to behave compassionately, kindly, and honestly merely because this will make oneself happier.

Related to this objection is a more general objection against the idea that well-being is a master value and that all other things are valuable only to the extent that they contribute to it. This line of attack, exemplified in the writings of Tim Scanlon, objects to the understanding of well-being as a moral notion and sees it more like self-interest. Furthermore, well-being does not admit of comparisons between individuals. Thus, well-being cannot play the role that eudaimonists would have it play.

This objection fails to appreciate the role of the virtues within the theory. The virtues are other-regarding. Kindness, for example, is about how we respond to the needs of others. The virtuous agent’s concern is with developing the right sort of character that will respond to the needs of others in an appropriate way. The virtue of kindness is about being able to perceive situations where one is required to be kind, have the disposition to respond kindly in a reliable and stable manner, and be able to express one’s kind character in accordance with one’s kind desires. The eudaimonist account of virtue ethics claims that the good of the agent and the good of others are not two separate aims. Both rather result from the exercise of virtue. Rather than being too self-centered, virtue ethics unifies what is required by morality and what is required by self-interest.

b. Action-Guiding

Moral philosophy is concerned with practical issues. Fundamentally it is about how we should act. Virtue ethics has criticized consequentialist and deontological theories for being too rigid and inflexible because they rely on one rule or principle. One reply to this is that these theories are action-guiding. The existence of “rigid” rules is a strength, not a weakness, because they offer clear direction on what to do. As long as we know the principles, we can apply them to practical situations and be guided by them. Virtue ethics, it is objected, with its emphasis on the imprecise nature of ethics, fails to give us any help with the practicalities of how we should behave. A theory that fails to be action-guiding is no good as a moral theory.

The main response to this criticism is to stress the role of the virtuous agent as an exemplar. Virtue ethics reflects the imprecise nature of ethics by being flexible and situation-sensitive, but it can also be action-guiding through the example of the virtuous agent. The virtuous agent is the agent who has a fully developed moral character, who possesses the virtues and acts in accordance with them, and whose example shows us what to do. Further, virtue ethics places considerable emphasis on the development of moral judgment. Knowing what to do is not a matter of internalizing a principle, but a life-long process of moral learning that will only provide clear answers when one reaches moral maturity. Virtue ethics cannot give us an easy, instant answer. This is because these answers do not exist. Nonetheless, it can be action-guiding if we understand the role of the virtuous agent and the importance of moral education and development. If virtue consists of the right reason and the right desire, virtue ethics will be action-guiding when we can perceive the right reason and have successfully habituated our desires to affirm its commands.

c. Moral Luck

Finally, there is a concern that virtue ethics leaves us hostage to luck. Morality is about responsibility and the appropriateness of praise and blame. However, we only praise and blame agents for actions taken under conscious choice. The road to virtue is arduous and many things outside our control can go wrong. Just as the right education, habits, influences, examples, etc. can promote the development of virtue, the wrong influencing factors can promote vice. Some people will be lucky and receive the help and encouragement they need to attain moral maturity, but others will not. If the development of virtue (and vice) is subject to luck, is it fair to praise the virtuous (and blame the vicious) for something that was outside of their control? Further, some accounts of virtue are dependent on the availability of external goods. Friendship with other virtuous agents is so central to Aristotelian virtue that a life devoid of virtuous friendship will be lacking in eudaimonia. However, we have no control over the availability of the right friends. How can we then praise the virtuous and blame the vicious if their development and respective virtue and vice were not under their control?

Some moral theories try to eliminate the influence of luck on morality (primarily deontology). Virtue ethics, however, answers this objection by embracing moral luck. Rather than try to make morality immune to matters that are outside of our control, virtue ethics recognizes the fragility of the good life and makes it a feature of morality. It is only because the good life is so vulnerable and fragile that it is so precious. Many things can go wrong on the road to virtue, so much so that virtue may be lost altogether, but this vulnerability is an essential feature of the human condition, and it makes the attainment of the good life all the more valuable.

5. Virtue in Deontology and Consequentialism

Virtue ethics offers a radically different account from deontology and consequentialism. Virtue ethics, however, has influenced modern moral philosophy not only by developing a full-fledged account of virtue, but also by prompting consequentialists and deontologists to re-examine their own theories with a view to taking advantage of the insights of virtue.

For years deontologists relied mainly on the Groundwork of the Metaphysics of Morals for discussions of Kant’s moral theory. The emergence of virtue ethics caused many writers to re-examine Kant’s other works. The Metaphysics of Morals, Anthropology From a Pragmatic Point of View and, to a lesser extent, Religion Within the Limits of Reason Alone have become sources of inspiration for the role of virtue in deontology. Kantian virtue is in some respects similar to Aristotelian virtue. In the Metaphysics of Morals, Kant stresses the importance of education, habituation, and gradual development—all ideas that have been used by modern deontologists to illustrate the common-sense plausibility of the theory. For Kantians, the main role of virtue and appropriate character development is that a virtuous character will help one formulate appropriate maxims for testing. In other respects, Kantian virtue remains rather dissimilar from other conceptions of virtue. The differences rest on at least three ideas. First, Kantian virtue is a struggle against the emotions: whether one thinks the emotions should be subjugated or eliminated, for Kant moral worth comes only from the motive of duty, a motive that struggles against inclination. This is quite different from the Aristotelian picture of harmony between reason and desire. Second, for Kant there is no such thing as weakness of will, understood in the Aristotelian sense of the distinction between continence and incontinence; Kant concentrates on fortitude of will, and a failure of such fortitude is self-deception. Finally, Kantians need to give an account of the relationship between virtue as it occurs in the empirical world and Kant’s remarks about moral worth in the noumenal world (remarks that can be interpreted as creating a contradiction between ideas in the Groundwork and in other works).

Consequentialists have found a role for virtue as a disposition that tends to promote good consequences. Virtue is not valuable in itself, but rather valuable for the good consequences it tends to bring about. We should cultivate virtuous dispositions because such dispositions will tend to maximize utility. This is a radical departure from the Aristotelian account of virtue for its own sake. Some consequentialists, such as Driver, go even further and argue that knowledge is not necessary for virtue.

Rival accounts have tried to incorporate the benefits of virtue ethics and develop in ways that will allow them to respond to the challenges raised by virtue ethics. This has led to very fruitful and exciting work being done within this area of philosophy.

6. References and Further Reading

a. Changing Modern Moral Philosophy

Anscombe, G.E.M., “Modern Moral Philosophy”, Philosophy, 33 (1958).
The original call for a return to Aristotelian ethics.
MacIntyre, A., After Virtue (London: Duckworth, 1985).
His first outline of his account of the virtues.
Murdoch, I., The Sovereignty of Good (London: Ark, 1985)
Williams, B., Ethics and the Limits of Philosophy (London: Fontana, 1985).
Especially Chapter 10 for the thoughts discussed in this paper.

b. Overviews of Virtue Ethics

Oakley, J., “Varieties of Virtue Ethics”, Ratio, vol. 9 (1996)
Trianosky, G.V., “What is Virtue Ethics All About?”, in Statman, D., Virtue Ethics (Edinburgh: Edinburgh University Press, 1997)

c. Varieties of Virtue Ethics

Adkins, A.W.H., Moral Values and Political Behaviour in Ancient Greece from Homer to the End of the Fifth Century (London: Chatto and Windus, 1972).
An account of Homeric virtue.
Baier, A., Postures of the Mind (Minneapolis: University of Minnesota Press, 1985)
Blum, L.W., Friendship, Altruism and Morality (London: 1980)
Cottingham, J., “Partiality and the Virtues”, in Crisp R. and Slote M., How Should One Live? (Oxford: Clarendon Press, 1996)
Cottingham, J., “Religion, Virtue and Ethical Culture”, Philosophy, 69 (1994)
Cullity, G., “Aretaic Cognitivism”, American Philosophical Quarterly, vol. 32, no. 4 (1995a).
Particularly good on the distinction between aretaic and deontic.
Cullity, G., “Moral Character and the Iteration Problem”, Utilitas, vol. 7, no. 2 (1995b)
Dent, N.J.H., “The Value of Courage”, Philosophy, vol. 56 (1981)
Dent, N.J.H., “Virtues and Actions”, The Philosophical Quarterly, vol. 25 (1975)
Dent, N.J.H., The Psychology of the Virtues (GB: Cambridge University Press, 1984)
Driver, J., “Monkeying with Motives: Agent-based Virtue Ethics”, Utilitas, vol. 7, no. 2 (1995).
A critique of Slote’s agent-based virtue ethics.
Foot, P., Natural Goodness (Oxford: Clarendon Press, 2001).
Her more recent work, developing new themes in her account of virtue ethics.
Foot, P., Virtues and Vices (Oxford: Blackwell, 1978).
Her original work, setting out her version of virtue ethics.
Hursthouse, R., “Virtue Theory and Abortion”, Philosophy and Public Affairs, 20, (1991)
Hursthouse, R., On Virtue Ethics (Oxford: OUP, 1999).
A book length account of eudaimonist virtue ethics, incorporating many of the ideas from her previous work and fully developing new ideas and responses to criticisms.
McDowell, J., “Incontinence and Practical Wisdom in Aristotle”, in Lovibond S. and Williams S.G., Essays for David Wiggins, Aristotelian Society Series, Vol. 16 (Oxford: Blackwell, 1996)
McDowell, J., “Virtue and Reason”, The Monist, 62 (1979)
Roberts, R.C., “Virtues and Rules”, Philosophy and Phenomenological Research, vol. LI, no. 2 (1991)
Scanlon, T.M., What We Owe to Each Other (Cambridge: Harvard University Press, 1998).
A comprehensive criticism of well-being as the foundation of moral theories.
Slote, M., From Morality to Virtue (New York: OUP, 1992).
His original account of agent-based virtue ethics.
Slote, M., Morals from Motives, (Oxford: OUP, 2001).
A new version of sentimentalist virtue ethics.
Swanton, C., Virtue Ethics (New York: OUP, 2003).
A pluralist account of virtue ethics, inspired from Nietzschean ideas.
Walker, A.D.M., “Virtue and Character”, Philosophy, 64 (1989)

d. Collections on Virtue Ethics

Crisp, R. and M. Slote, How Should One Live? (Oxford: Clarendon Press, 1996).
A collection of more recent as well as critical work on virtue ethics, including works by Kantian critics such as O’Neill, consequentialist critics such as Hooker and Driver, an account of Humean virtue by Wiggins, and others.
Crisp, R. and M. Slote, Virtue Ethics (New York: OUP, 1997).
A collection of classic papers on virtue ethics, including Anscombe, MacIntyre, Williams, etc.
Engstrom, S., and J. Whiting, Aristotle, Kant and the Stoics (USA: Cambridge University Press, 1996).
A collection bringing together elements from Aristotle, Kant and the Stoics on topics such as the emotions, character, moral development, etc.
Hursthouse, R., G. Lawrence and W. Quinn, Virtues and Reasons (Oxford: Clarendon Press, 1995).
A collection of essays in honour of Philippa Foot, including contributions by Blackburn, McDowell, Kenny, Quinn, and others.
Rorty, A.O., Essays on Aristotle’s Ethics (USA: University of California Press, 1980).
A seminal collection of papers interpreting the ethics of Aristotle, including contributions by Ackrill, McDowell and Nagel on eudaimonia, Burnyeat on moral development, Urmson on the doctrine of the mean, Wiggins and Rorty on weakness of will, and others.
Statman, D., Virtue Ethics (Edinburgh: Edinburgh University Press, 1997).
A collection of contemporary work on virtue ethics, including a comprehensive introduction by Statman, an overview by Trianosky, Louden and Solomon on objections to virtue ethics, Hursthouse on abortion and virtue ethics, Swanton on value, and others.

e. Virtue and Moral Luck

Andre, J., “Nagel, Williams and Moral Luck”, Analysis, 43 (1983).
An Aristotelian response to the problem of moral luck.
Nussbaum, M., Love’s Knowledge (Oxford: Oxford University Press, 1990)
Nussbaum, M., The Fragility of Goodness (Cambridge: Cambridge University Press, 1986).
Includes her original response to the problem of luck as well as thoughts on rules as rules of thumb, the role of the emotions, etc.
Statman, D., Moral Luck (USA: State University of New York Press, 1993).
An excellent introduction by Statman as well as almost every article written on moral luck, including Williams’ and Nagel’s original discussions (and a postscript by Williams).

f. Virtue in Deontology and Consequentialism

Baron, M.W., Kantian Ethics Almost Without Apology (USA: Cornell University Press, 1995).
A book length account of a neo-Kantian theory that takes virtue and character into account.
Baron, M.W., P. Pettit and M. Slote, Three Methods of Ethics (GB: Blackwell, 1997).
Written by three authors adopting three perspectives, deontology, consequentialism and virtue ethics, this is an excellent account of how the three normative theories relate to each other.
Driver, J., Uneasy Virtue (Cambridge: Cambridge University Press, 2001).
A book length account of a consequentialist version of virtue ethics, incorporating many of her ideas from previous pieces of work.
Herman, B., The Practice of Moral Judgement (Cambridge: Harvard University Press, 1993).
Another neo-Kantian who has a lot to say on virtue and character.
Hooker, B., Ideal Code, Real World (Oxford: Clarendon Press, 2000).
A modern version of rule-consequentialism, which is in many respects sensitive to the insights of virtue.
O’Neill, O., “Kant’s Virtues”, in Crisp R. and Slote M., How Should One Live? (Oxford: Clarendon Press, 1996).
One of the first Kantian responses to virtue ethics.
Sherman, N., The Fabric of Character (GB: Clarendon Press, 1989).
An extremely sympathetic account of Aristotelian and Kantian ideas on the emotions, virtue and character.
Sherman, N., Making a Necessity of Virtue (USA: Cambridge University Press, 1997).



Source: Internet Encyclopedia of Philosophy (IEP)


Buddhist Economics


E. F. Schumacher (1911-1977)

From E. F. Schumacher, Small Is Beautiful. Copyright (C) 1973 by E. F. Schumacher.


"Right Livelihood" is one of the requirements of the Buddha's Noble Eightfold Path. It is clear, therefore, that there must be such a thing as Buddhist economics.


Buddhist countries have often stated that they wish to remain faithful to their heritage. So Burma: "The New Burma sees no conflict between religious values and economic progress. Spiritual health and material well-being are not enemies: they are natural allies."[1] Or: "We can blend successfully the religious and spiritual values of our heritage with the benefits of modern technology."[2] Or: "We Burmans have a sacred duty to conform both our dreams and our acts to our faith. This we shall ever do."[3]


All the same, such countries invariably assume that they can model their
economic development plans in accordance with modern economics, and they
call upon modern economists from so-called advanced countries to advise
them, to formulate the policies to be pursued, and to construct the grand
design for development, the Five-Year Plan or whatever it may be called. No
one seems to think that a Buddhist way of life would call for Buddhist
economics, just as the modern materialist way of life has brought forth
modern economics.

Economists themselves, like most specialists, normally suffer from a kind
of metaphysical blindness, assuming that theirs is a science of absolute
and invariable truths, without any presuppositions. Some go as far as to
claim that economic laws are as free from "metaphysics" or "values" as the
law of gravitation. We need not, however, get involved in arguments of
methodology. Instead, let us take some fundamentals and see what they look
like when viewed by a modern economist and a Buddhist economist.

There is universal agreement that a fundamental source of wealth is human
labor. Now, the modern economist has been brought up to consider "labor" or
work as little more than a necessary evil. From the point of view of the
employer, it is in any case simply an item of cost, to be reduced to a
minimum if it cannot be eliminated altogether, say, by automation. From the
point of view of the workman, it is a "disutility"; to work is to make a
sacrifice of one's leisure and comfort, and wages are a kind of
compensation for the sacrifice. Hence the ideal from the point of view of
the employer is to have output without employees, and the ideal from the
point of view of the employee is to have income without employment.

The consequences of these attitudes both in theory and in practice are, of
course, extremely far-reaching. If the ideal with regard to work is to get
rid of it, every method that "reduces the work load" is a good thing.

The most potent method, short of automation, is the so-called "division of
labor" and the classical example is the pin factory eulogized in Adam
Smith's Wealth of Nations. Here it is not a matter of ordinary
specialization, which mankind has practiced from time immemorial, but of
dividing up every complete process of production into minute parts, so that
the final product can be produced at great speed without anyone having had
to contribute more than a totally insignificant and, in most cases,
unskilled movement of his limbs.

The Buddhist point of view takes the function of work to be at least
threefold: to give man a chance to utilize and develop his faculties; to
enable him to overcome his ego-centeredness by joining with other people in a common task; and to bring forth the goods and services needed for a
becoming existence. Again, the consequences that flow from this view are
endless. To organize work in such a manner that it becomes meaningless,
boring, stultifying, or nerve-racking for the worker would be little short
of criminal; it would indicate a greater concern with goods than with
people, an evil lack of compassion and a soul-destroying degree of
attachment to the most primitive side of this worldly existence. Equally,
to strive for leisure as an alternative to work would be considered a complete misunderstanding of one of the basic truths of human existence, namely that work and leisure are complementary parts of the same living process and cannot be separated without destroying the joy of work and the bliss of leisure.

From the Buddhist point of view, there are therefore two types of
mechanization which must be clearly distinguished: one that enhances a
man's skill and power and one that turns the work of man over to a
mechanical slave, leaving man in a position of having to serve the slave.
How to tell the one from the other? "The craftsman himself," says Ananda
Coomaraswamy, a man equally competent to talk about the modern West as the
ancient East, "can always, if allowed to, draw the delicate distinction
between the machine and the tool. The carpet loom is a tool, a contrivance
for holding warp threads at a stretch for the pile to be woven round them
by the craftsman's fingers; but the power loom is a machine, and its
significance as a destroyer of culture lies in the fact that it does the
essentially human part of the work."[4] It is clear, therefore, that
Buddhist economics must be very different from the economics of modern
materialism, since the Buddhist sees the essence of civilization not in a
multiplication of wants but in the purification of human character.
Character, at the same time, is formed primarily by a man's work. And work,
properly conducted in conditions of human dignity and freedom, blesses
those who do it and equally their products. The Indian philosopher and
economist J. C. Kumarappa sums the matter up as follows:

If the nature of the work is properly appreciated and applied, it will
stand in the same relation to the higher faculties as food is to the
physical body. It nourishes and enlivens the higher man and urges him to
produce the best he is capable of. It directs his free will along the
proper course and disciplines the animal in him into progressive channels.
It furnishes an excellent background for man to display his scale of values
and develop his personality.[5]

If a man has no chance of obtaining work he is in a desperate position, not
simply because he lacks an income but because he lacks this nourishing and
enlivening factor of disciplined work which nothing can replace. A modern
economist may engage in highly sophisticated calculations on whether full
employment "pays" or whether it might be more "economic" to run an economy
at less than full employment so as to ensure a greater mobility of labor, a
better stability of wages, and so forth. His fundamental criterion of
success is simply the total quantity of goods produced during a given
period of time. "If the marginal urgency of goods is low," says Professor
Galbraith in The Affluent Society, "then so is the urgency of employing the
last man or the last million men in the labor force."[6] And again:

If . . . we can afford some unemployment in the interest of stability--a
proposition, incidentally, of impeccably conservative antecedents--then we
can afford to give those who are unemployed the goods that enable them to
sustain their accustomed standard of living.

From a Buddhist point of view, this is standing the truth on its head by
considering goods as more important than people and consumption as more
important than creative activity. It means shifting the emphasis from the
worker to the product of work, that is, from the human to the subhuman, a
surrender to the forces of evil. The very start of Buddhist economic
planning would be a planning for full employment, and the primary purpose
of this would in fact be employment for everyone who needs an "outside"
job: it would not be the maximization of employment nor the maximization of
production. Women, on the whole, do not need an "outside" job, and the
large-scale employment of women in offices or factories would be considered
a sign of serious economic failure. In particular, to let mothers of young
children work in factories while the children run wild would be as
uneconomic in the eyes of a Buddhist economist as the employment of a
skilled worker as a soldier in the eyes of a modern economist.

While the materialist is mainly interested in goods, the Buddhist is mainly
interested in liberation. But Buddhism is "The Middle Way" and therefore in
no way antagonistic to physical well-being. It is not wealth that stands in
the way of liberation but the attachment to wealth; not the enjoyment of
pleasurable things but the craving for them. The keynote of Buddhist
economics, therefore, is simplicity and non-violence. From an economist's
point of view, the marvel of the Buddhist way of life is the utter
rationality of its pattern--amazingly small means leading to
extraordinarily satisfactory results.

For the modern economist this is very difficult to understand. He is used
to measuring the "standard of living" by the amount of annual consumption,
assuming all the time that a man who consumes more is "better off" than a
man who consumes less. A Buddhist economist would consider this approach
excessively irrational: since consumption is merely a means to human
well-being, the aim should be to obtain the maximum of well-being with the
minimum of consumption. Thus, if the purpose of clothing is a certain
amount of temperature comfort and an attractive appearance, the task is to
attain this purpose with the smallest possible effort, that is, with the
smallest annual destruction of cloth and with the help of designs that
involve the smallest possible input of toil. The less toil there is, the
more time and strength is left for artistic creativity. It would be highly
uneconomic, for instance, to go in for complicated tailoring, like the
modern West, when a much more beautiful effect can be achieved by the
skillful draping of uncut material. It would be the height of folly to make
material so that it should wear out quickly and the height of barbarity to
make anything ugly, shabby, or mean. What has just been said about clothing
applies equally to all other human requirements. The ownership and the
consumption of goods is a means to an end, and Buddhist economics is the
systematic study of how to attain given ends with the minimum means.

Modern economics, on the other hand, considers consumption to be the sole
end and purpose of all economic activity, taking the factors of
production--land, labor, and capital--as the means. The former, in short,
tries to maximize human satisfactions by the optimal pattern of
consumption, while the latter tries to maximize consumption by the optimal
pattern of productive effort. It is easy to see that the effort needed to
sustain a way of life which seeks to attain the optimal pattern of
consumption is likely to be much smaller than the effort needed to sustain
a drive for maximum consumption. We need not be surprised, therefore, that
the pressure and strain of living is very much less in, say, Burma than it
is in the United States, in spite of the fact that the amount of
labor-saving machinery used in the former country is only a minute fraction
of the amount used in the latter.

Simplicity and non-violence are obviously closely related. The optimal
pattern of consumption, producing a high degree of human satisfaction by
means of a relatively low rate of consumption, allows people to live
without great pressure and strain and to fulfill the primary injunction of
Buddhist teaching: "Cease to do evil; try to do good." As physical
resources are everywhere limited, people satisfying their needs by means of
a modest use of resources are obviously less likely to be at each other's
throats than people depending upon a high rate of use. Equally, people who
live in highly self-sufficient local communities are less likely to get
involved in large-scale violence than people whose existence depends on
world-wide systems of trade.

From the point of view of Buddhist economics, therefore, production from
local resources for local needs is the most rational way of economic life,
while dependence on imports from afar and the consequent need to produce
for export to unknown and distant peoples is highly uneconomic and
justifiable only in exceptional cases and on a small scale. Just as the
modern economist would admit that a high rate of consumption of transport
services between a man's home and his place of work signifies a misfortune
and not a high standard of life, so the Buddhist economist would hold that
to satisfy human wants from faraway sources rather than from sources nearby
signifies failure rather than success. The former tends to take statistics
showing an increase in the number of ton/miles per head of the population
carried by a country's transport system as proof of economic progress,
while to the latter--the Buddhist economist--the same statistics would
indicate a highly undesirable deterioration in the pattern of consumption.

Another striking difference between modern economics and Buddhist economics
arises over the use of natural resources. Bertrand de Jouvenel, the eminent
French political philosopher, has characterized "Western man" in words
which may be taken as a fair description of the modern economist:

He tends to count nothing as an expenditure, other than human effort; he
does not seem to mind how much mineral matter he wastes and, far worse, how
much living matter he destroys. He does not seem to realize at all that
human life is a dependent part of an ecosystem of many different forms of
life. As the world is ruled from towns where men are cut off from any form
of life other than human, the feeling of belonging to an ecosystem is not
revived. This results in a harsh and improvident treatment of things upon
which we ultimately depend, such as water and trees. [7]

The teaching of the Buddha, on the other hand, enjoins a reverent and
non-violent attitude not only to all sentient beings but also, with great
emphasis, to trees. Every follower of the Buddha ought to plant a tree
every few years and look after it until it is safely established, and the
Buddhist economist can demonstrate without difficulty that the universal
observation of this rule would result in a high rate of genuine economic
development independent of any foreign aid. Much of the economic decay of
Southeast Asia (as of many other parts of the world) is undoubtedly due to
a heedless and shameful neglect of trees.

Modern economics does not distinguish between renewable and non-renewable
materials, as its very method is to equalize and quantify everything by
means of a money price. Thus, taking various alternative fuels, like coal,
oil, wood, or water-power: the only difference between them recognized by
modern economics is relative cost per equivalent unit. The cheapest is
automatically the one to be preferred, as to do otherwise would be
irrational and "uneconomic." From a Buddhist point of view, of course, this
will not do; the essential difference between non-renewable fuels like coal
and oil on the one hand and renewable fuels like wood and water-power on
the other cannot be simply overlooked. Non-renewable goods must be used
only if they are indispensable, and then only with the greatest care and
the most meticulous concern for conservation. To use them heedlessly or
extravagantly is an act of violence, and while complete non-violence may
not be attainable on this earth, there is nonetheless an ineluctable duty
on man to aim at the ideal of non-violence in all he does.

Just as a modern European economist would not consider it a great economic
achievement if all European art treasures were sold to America at
attractive prices, so the Buddhist economist would insist that a population
basing its economic life on non-renewable fuels is living parasitically, on
capital instead of income. Such a way of life could have no permanence and
could therefore be justified only as a purely temporary expedient. As the
world's resources of non-renewable fuels--coal, oil and natural gas--are
exceedingly unevenly distributed over the globe and undoubtedly limited in
quantity, it is clear that their exploitation at an ever-increasing rate is
an act of violence against nature which must almost inevitably lead to
violence between men.

This fact alone might give food for thought even to those people in
Buddhist countries who care nothing for the religious and spiritual values
of their heritage and ardently desire to embrace the materialism of modern
economics at the fastest possible speed. Before they dismiss Buddhist
economics as nothing better than a nostalgic dream, they might wish to
consider whether the path of economic development outlined by modern
economics is likely to lead them to places where they really want to be.
Towards the end of his courageous book The Challenge of Man's Future,
Professor Harrison Brown of the California Institute of Technology gives
the following appraisal:

Thus we see that, just as industrial society is fundamentally unstable and
subject to reversion to agrarian existence, so within it the conditions
which offer individual freedom are unstable in their ability to avoid the
conditions which impose rigid organization and totalitarian control.
Indeed, when we examine all of the foreseeable difficulties which threaten
the survival of industrial civilization, it is difficult to see how the
achievement of stability and the maintenance of individual liberty can be
made compatible. [8]

Even if this were dismissed as a long-term view there is the immediate
question of whether "modernization," as currently practiced without regard
to religious and spiritual values, is actually producing agreeable results.
As far as the masses are concerned, the results appear to be disastrous--a
collapse of the rural economy, a rising tide of unemployment in town and
country, and the growth of a city proletariat without nourishment for
either body or soul.

It is in the light of both immediate experience and long-term prospects
that the study of Buddhist economics could be recommended even to those who
believe that economic growth is more important than any spiritual or
religious values. For it is not a question of choosing between "modern
growth" and "traditional stagnation." It is a question of finding the right
path of development, the Middle Way between materialist heedlessness and
traditionalist immobility, in short, of finding "Right Livelihood."

NOTES

1. The New Burma (Economic and Social Board, Government of the Union of
Burma, 1954).

2. Ibid.

3. Ibid.

4. Art and Swadeshi by Ananda K. Coomaraswamy (Ganesh & Co., Madras).

5. Economy of Permanence by J.C. Kumarappa (Sarva-Seva Sangh Publication,
Rajghat, Kashi, 4th edn., 1958).

6. The Affluent Society by John Kenneth Galbraith (Penguin Books Ltd.,
1962).

7. A Philosophy of Indian Economic Development by Richard B. Gregg
(Navajivan Publishing House, Ahmedabad, India, 1958).

8. The Challenge of Man's Future by Harrison Brown (The Viking Press, New
York, 1954).


It takes a whole village to raise a child


Rev. Joseph G. Healey, M.M.
Dar Es Salaam, Tanzania

It takes a whole village to raise a child

This Igbo and Yoruba (Nigeria) proverb exists in different forms in many African languages. The basic meaning is that child upbringing is a communal effort. The responsibility for raising a child is shared with the larger family (sometimes called the extended family). Everyone in the family participates, especially the older children, aunts and uncles, grandparents, and even cousins. It is not unusual for African children to stay for long periods with their grandparents or aunts or uncles. Even the wider community, such as neighbors and friends, gets involved. Children are considered a blessing from God for the whole community. This communal responsibility in raising children is also seen in the Sukuma (Tanzania) proverb "One knee does not bring up a child" and in the Swahili (East and Central Africa) proverb "One hand does not nurse a child."

In general this Nigerian proverb conveys the African worldview that emphasizes the values of family relationships, parental care, self-sacrificing concern for others, sharing, and even hospitality. This is very close to the Biblical worldview as seen in scripture texts related to unity and cooperation (Ecclesiastes 4:9,12) and a mother's self-sacrificing love (Isaiah 49:15-16).


The multiple uses of this Nigerian proverb show the timeliness and relevance of African proverbs in today's world. In 1996 Hillary Clinton, the wife of the President of the United States, published a book on children and family values entitled "It Takes a Village" based on this proverb. That same year Maryknoll Father Don Sybertz and I published the first edition of our book "Towards An African Narrative Theology" (now available from Paulines Publications Africa, Nairobi, Kenya and Orbis Books, Maryknoll, New York, USA). In Chapter Three on "Community" we used this Nigerian proverb and many other African proverbs and sayings on the values of community, unity, cooperation and sharing. In Dallas, Texas, there was a controversy over four security guards who whipped some kids who broke into a mall. The parents of the kids said that the guards had no right to discipline their kids, but the guards said that they did what they did because "the village raises the children."

The Anglican Archbishop John Sentamu of York, England, at a consultation in Swanwick, England, in September 2005 stated: "As it takes a whole village to raise a child, so it takes the whole global village to eradicate poverty. It starts with each of us personally. [For example] do we buy Fairtrade goods?"


www.afriprov.org

Modern Moral Philosophy


By G. E. M. Anscombe 

Originally published in Philosophy 33, No. 124 (January 1958).

I will begin by stating three theses which I present in this paper. The first is that it is not profitable for us at present to do moral philosophy; that should be laid aside at any rate until we have an adequate philosophy of psychology, in which we are conspicuously lacking. The second is that the concepts of obligation, and duty--moral obligation and moral duty, that is to say--and of what is morally right and wrong, and of the moral sense of "ought," ought to be jettisoned if this is psychologically possible; because they are survivals, or derivatives from survivals, from an earlier conception of ethics which no longer generally survives, and are only harmful without it. My third thesis is that the differences between the well-known English writers on moral philosophy from Sidgwick to the present day are of little importance.

Anyone who has read Aristotle's Ethics and has also read modern moral philosophy must have been struck by the great contrasts between them. The concepts which are prominent among the moderns seem to be lacking, or at any rate buried or far in the background, in Aristotle. Most noticeably, the term "moral" itself, which we have by direct inheritance from Aristotle, just doesn't seem to fit, in its modern sense, into an account of Aristotelian ethics. Aristotle distinguishes virtues as moral and intellectual. Have some of what he calls "intellectual" virtues what we should call a "moral" aspect? It would seem so; the criterion is presumably that a failure in an "intellectual" virtue--like that of having good judgment in calculating how to bring about something useful, say in municipal government--may be blameworthy. But--it may reasonably be asked--cannot any failure be made a matter of blame or reproach? Any derogatory criticism, say of the workmanship of a product or the design of a machine, can be called blame or reproach. So we want to put in the word "morally" again: sometimes such a failure may be morally blameworthy, sometimes not. Now has Aristotle got this idea of moral blame, as opposed to any other? If he has, why isn't it more central? There are some mistakes, he says, which are causes, not of involuntariness in actions but of scoundrelism, and for which a man is blamed. Does this mean that there is a moral obligation not to make certain intellectual mistakes? Why doesn't he discuss obligation in general, and this obligation in particular? If someone professes to be expounding Aristotle and talks in a modern fashion about "moral" such-and-such, he must be very imperceptive if he does not constantly feel like someone whose jaws have somehow got out of alignment: the teeth don't come together in a proper bite.

We cannot, then, look to Aristotle for any elucidation of the modern way of talking about "moral" goodness, obligation, etc. And all the best-known writers on ethics in modern times, from Butler to Mill, appear to me to have faults as thinkers on the subject which make it impossible to hope for any direct light on it from them. I will state these objections with the brevity which their character makes possible.

Butler exalts conscience, but appears ignorant that a man’s conscience may tell him to do the vilest things.

Hume defines “truth” in such a way as to exclude ethical judgments from it, and professes that he has proved that they are so excluded.  He also implicitly defines “passion” in such a way that aiming at anything is having a passion.  His objection to passing from “is” to “ought” would apply equally to passing from “is” to “owes” or from “is” to “needs.”  (However, because of the historical situation, he has a point here, which I shall return to.)

Kant introduces the idea of “legislating for oneself,” which is as absurd as if in these days, when majority votes command great respect, one were to call each reflective decision a man made a vote resulting in a majority, which as a matter of proportion is overwhelming, for it is always 1-0.  The concept of legislation requires superior power in the legislator.  His own rigoristic convictions on the subject of lying were so intense that it never occurred to him that a lie could be relevantly described as anything but just a lie (e.g. as “a lie in such-and-such circumstances”).  His rule about universalizable maxims is useless without stipulations as to what shall count as a relevant description of an action with a view to constructing a maxim about it.

Bentham and Mill do not notice the difficulty of the concept “pleasure.”  They are often said to have gone wrong through committing the “naturalistic fallacy”; but this charge does not impress me, because I do not find accounts of it coherent.  But the other point—about pleasure—seems to me a fatal objection from the very outset.  The ancients found this concept pretty baffling.  It reduced Aristotle to sheer babble about “the bloom on the cheek of youth” because, for good reasons, he wanted to make it out both identical with and different from the pleasurable activity.  Generations of modern philosophers found this concept quite unperplexing, and it reappeared in the literature as a problematic one only a year or two ago when Ryle wrote about it.  The reason is simple:  since Locke, pleasure was taken to be some sort of internal impression.  But it was superficial, if that was the right account of it, to make it the point of actions.  One might adapt something Wittgenstein said about “meaning” and say “Pleasure cannot be an internal impression, for no internal impression could have the consequences of pleasure.”

Mill also, like Kant, fails to realize the necessity for stipulation as to relevant descriptions, if his theory is to have content.  It did not occur to him that acts of murder and theft could be otherwise described.  He holds that where a proposed action is of such a kind as to fall under some one principle established on grounds of utility, one must go by that; where it falls under none or several, the several suggesting contrary views of the action, the thing to do is to calculate particular consequences.  But pretty well any action can be so described as to make it fall under a variety of principles of utility (as I shall say for short) if it falls under any.

I will now return to Hume. The features of Hume's philosophy which I have mentioned, like many other features of it, would incline me to think that Hume was a mere--brilliant--sophist; and his procedures are certainly sophistical. But I am forced, not to reverse, but to add to, this judgment by a peculiarity of Hume's philosophizing: namely that although he reaches his conclusions--with which he is in love--by sophistical methods, his considerations constantly open up very deep and important problems. It is often the case that in the act of exhibiting the sophistry one finds oneself noticing matters which deserve a lot of exploring: the obvious stands in need of investigation as a result of the points that Hume pretends to have made. In this, he is unlike, say, Butler. It was already well known that conscience could dictate vile actions; for Butler to have written disregarding this does not open up any new topics for us. But with Hume it is otherwise: hence he is a very profound and great philosopher, in spite of his sophistry. For example:

Suppose that I say to my grocer “Truth consists in either relations of ideas, as that 20s=£1, or matters of fact, as that I ordered potatoes, you supplied them, and you sent me a bill.  So it doesn’t apply to such a proposition as that I owe you such-and-such a sum.”

Now if one makes this comparison, it comes to light that the relation of the facts mentioned to the description "X owes Y so much money" is an interesting one, which I will call that of being "brute relative to" that description. Further, the "brute" facts mentioned here themselves have descriptions relatively to which other facts are "brute"--as, e.g., he had potatoes carted to my house and they were left there are brute facts relative to "he supplied me with potatoes." And the fact X owes Y money is in turn "brute" relative to other descriptions--e.g. "X is solvent." Now the relation of "relative bruteness" is a complicated one. To mention a few points: if xyz is a set of facts brute relative to a description A, then xyz is a set out of a range some set among which holds if A holds; but the holding of some set among these does not necessarily entail A, because exceptional circumstances can always make a difference; and what are exceptional circumstances relatively to A can generally only be explained by giving a few diverse examples, and no theoretically adequate provision can be made for exceptional circumstances, since a further special context can theoretically always be imagined that would reinterpret any special context. Further, though in normal circumstances xyz would be a justification for A, that is not to say that A just comes to the same as "xyz"; and also, there is apt to be an institutional context which gives its point to the description A, of which institution A is of course not itself a description. (E.g. the statement that I give someone a shilling is not a description of the institution of money or of the currency of the country.) Thus, though it would be ludicrous to pretend that there can be no such thing as a transition from, e.g., "is" to "owes," the character of the transition is in fact rather interesting and comes to light as a result of reflecting on Hume's arguments.[1]

That I owe the grocer such-and-such a sum would be one of a set of facts which would be "brute" in relation to the description "I am a bilker." "Bilking" is of course a species of "dishonesty" or "injustice." (Naturally the consideration will not have any effect on my actions unless I want to commit or avoid acts of injustice.)

So far, in spite of their strong associations, I conceive "bilking," "injustice" and "dishonesty" in a merely factual way. That I can do this for "bilking" is obvious enough; "justice" I have no idea how to define, except that its sphere is that of actions which relate to someone else, but "injustice," its defect, can for the moment be offered as a generic name covering various species. E.g.: "bilking," "theft" (which is relative to whatever property institutions exist), "slander," "adultery," "punishment of the innocent."

In present-day philosophy an explanation is required how an unjust man is a bad man, or an unjust action a bad one; to give such an explanation belongs to ethics; but it cannot even be begun until we are equipped with a sound philosophy of psychology. For the proof that an unjust man is a bad man would require a positive account of justice as a "virtue." This part of the subject-matter of ethics is, however, completely closed to us until we have an account of what type of characteristic a virtue is--a problem, not of ethics, but of conceptual analysis--and how it relates to the actions in which it is instanced: a matter which I think Aristotle did not succeed in really making clear. For this we certainly need an account at least of what a human action is at all, and how its description as "doing such-and-such" is affected by its motive and by the intention or intentions in it; and for this an account of such concepts is required.

The terms "should" or "ought" or "needs" relate to good and bad: e.g. machinery needs oil, or should or ought to be oiled, in that running without oil is bad for it, or it runs badly without oil. According to this conception, of course, "should" and "ought" are not used in a special "moral" sense when one says that a man should not bilk. (In Aristotle's sense of the term "moral" [ἠθικός], they are being used in connection with a moral subject matter: namely that of human passions and [non-technical] actions.) But they have now acquired a special so-called "moral" sense--i.e. a sense in which they imply some absolute verdict (like one of guilty/not guilty on a man) on what is described in the "ought" sentences used in certain types of context: not merely the contexts that Aristotle would call "moral"--passions and actions--but also some of the contexts that he would call "intellectual."

The ordinary (and quite indispensable) terms "should," "needs," "ought," "must"--acquired this special sense by being equated in the relevant contexts with "is obliged," or "is bound," or "is required to," in the sense in which one can be obliged or bound by law, or something can be required by law.

How did this come about? The answer is in history: between Aristotle and us came Christianity, with its law conception of ethics. For Christianity derived its ethical notions from the Torah. (One might be inclined to think that a law conception of ethics could arise only among people who accepted an allegedly divine positive law; that this is not so is shown by the example of the Stoics, who also thought that whatever was involved in conformity to human virtues was required by divine law.)

In consequence of the dominance of Christianity for many centuries, the concepts of being bound, permitted, or excused became deeply embedded in our language and thought. The Greek word "ἁμαρτάνειν," the aptest to be turned to that use, acquired the sense "sin," from having meant "mistake," "missing the mark," "going wrong." The Latin peccatum, which roughly corresponded to ἁμάρτημα, was even apter for the sense "sin," because it was already associated with "culpa"--"guilt"--a juridical notion. The blanket term "illicit," "unlawful," meaning much the same as our blanket term "wrong," explains itself. It is interesting that Aristotle did not have such a blanket term. He has blanket terms for wickedness--"villain," "scoundrel"; but of course a man is not a villain or a scoundrel by the performance of one bad action, or a few bad actions. And he has terms like "disgraceful," "impious"; and specific terms signifying defect of the relevant virtue, like "unjust"; but no term corresponding to "illicit." The extension of this term (i.e. the range of its application) could be indicated in his terminology only by a quite lengthy sentence: that is "illicit" which, whether it is a thought or a consented-to passion or an action or an omission in thought or action, is something contrary to one of the virtues the lack of which shows a man to be bad qua man. That formulation would yield a concept coextensive with the concept "illicit."

To have a law conception of ethics is to hold that what is needed for conformity with the virtues failure in which is the mark of being bad qua man (and not merely, say, qua craftsman or logician)--that what is needed for this, is required by divine law. Naturally it is not possible to have such a conception unless you believe in God as a lawgiver; like Jews, Stoics, and Christians. But if such a conception is dominant for many centuries, and then is given up, it is a natural result that the concepts of "obligation," of being bound or required as by a law, should remain though they had lost their root; and if the word "ought" has become invested in certain contexts with the sense of "obligation," it too will remain to be spoken with a special emphasis and special feeling in these contexts.

It is as if the notion "criminal" were to remain when criminal law and criminal courts had been abolished and forgotten. A Hume discovering this situation might conclude that there was a special sentiment, expressed by "criminal," which alone gave the word its sense. So Hume discovered the situation in which the notion "obligation" survived, and the notion "ought" was invested with that peculiar force having which it is said to be used in a "moral" sense, but in which the belief in divine law had long since been abandoned: for it was substantially given up among Protestants at the time of the Reformation.[2] The situation, if I am right, was the interesting one of the survival of a concept outside the framework of thought that made it a really intelligible one.

When Hume produced his famous remarks about the transition from "is" to "ought," he was, then, bringing together several quite different points. One I have tried to bring out by my remarks on the transition from "is" to "owes" and on the relative "bruteness" of facts. It would be possible to bring out a different point by enquiring about the transition from "is" to "needs"; from the characteristics of an organism to the environment that it needs, for example. To say that it needs that environment is not to say, e.g., that you want it to have that environment, but that it won't flourish unless it has it. Certainly, it all depends whether you want it to flourish! as Hume would say. But what "all depends" on whether you want it to flourish is whether the fact that it needs that environment, or won't flourish without it, has the slightest influence on your actions. Now that such-and-such "ought" to be or "is needed" is supposed to have an influence on your actions: from which it seemed natural to infer that to judge that it "ought to be" was in fact to grant what you judged "ought to be" influence on your actions. And no amount of truth as to what is the case could possibly have a logical claim to have influence on your actions. (It is not judgment as such that sets us in motion; but our judgment on how to get or do something we want.) Hence it must be impossible to infer "needs" or "ought to be" from "is." But in the case of a plant, let us say, the inference from "is" to "needs" is certainly not in the least dubious. It is interesting and worth examining; but not at all fishy. Its interest is similar to the interest of the relation between brute and less brute facts: these relations have been very little considered. And while you can contrast "what it needs" with "what it's got"--like contrasting de facto and de iure--that does not make its needing this environment less of a "truth."

Certainly in the case of what the plant needs, the thought of a need will only affect action if you want the plant to flourish. Here, then, there is no necessary connection between what you can judge the plant "needs" and what you want. But there is some sort of necessary connection between what you think you need, and what you want. The connection is a complicated one; it is possible not to want something that you judge you need. But, e.g., it is not possible never to want anything that you judge you need. This, however, is not a fact about the meaning of the word "to need," but about the phenomenon of wanting. Hume's reasoning, we might say, in effect, leads one to think it must be about the word "to need," or "to be good for."

Thus we find two problems already wrapped up in the remark about a transition from "is" to "ought"; now supposing that we had clarified the "relative bruteness" of facts on the one hand, and the notions involved in "needing" and "flourishing" on the other--there would still remain a third point. For, following Hume, someone might say: Perhaps you have made out your point about a transition from "is" to "owes" and from "is" to "needs": but only at the cost of showing "owes" and "needs" sentences to express a kind of truths, a kind of facts. And it remains impossible to infer "morally ought" from "is" sentences.

This comment, it seems to me, would be correct. This word "ought," having become a word of mere mesmeric force, could not, in the character of having that force, be inferred from anything whatever. It may be objected that it could be inferred from other "morally ought" sentences: but that cannot be true. The appearance that this is so is produced by the fact that we say "All men are Φ" and "Socrates is a man" implies "Socrates is Φ." But here "Φ" is a dummy predicate. We mean that if you substitute a real predicate for "Φ" the implication is valid. A real predicate is required; not just a word containing no intelligible thought: a word retaining the suggestion of force, and apt to have a strong psychological effect, but which no longer signifies a real concept at all.

For its suggestion is one of a verdict on my action, according as it agrees or disagrees with the description in the "ought" sentence. And where one does not think there is a judge or a law, the notion of a verdict may retain its psychological effect, but not its meaning. Now imagine that just this word "verdict" were so used--with a characteristically solemn emphasis--as to retain its atmosphere but not its meaning, and someone were to say: "For a verdict, after all, you need a law and a judge." The reply might be made: "Not at all, for if there were a law and a judge who gave a verdict, the question for us would be whether accepting that verdict is something that there is a Verdict on." This is an analogue of an argument which is so frequently referred to as decisive: If someone does have a divine law conception of ethics, all the same, he has to agree that he has to have a judgment that he ought (morally ought) to obey the divine law; so his ethic is in exactly the same position as any other: he merely has a "practical major premise"[3]; "Divine law ought to be obeyed" where someone else has, e.g., "The greatest happiness principle ought to be employed in all decisions."

I should judge that Hume and our present-day ethicists had done a considerable service by showing that no content could be found in the notion "morally ought"; if it were not that the latter philosophers try to find an alternative (very fishy) content and to retain the psychological force of the term. It would be most reasonable to drop it. It has no reasonable sense outside a law conception of ethics; they are not going to maintain such a conception; and you can do ethics without it, as is shown by the example of Aristotle. It would be a great improvement if, instead of "morally wrong," one always named a genus such as "untruthful," "unchaste," "unjust." We should no longer ask whether doing something was "wrong," passing directly from some description of an action to this notion; we should ask whether, e.g., it was unjust; and the answer would sometimes be clear at once.

I now come to the epoch in modern English moral philosophy marked by Sidgwick. There is a startling change that seems to have taken place between Mill and Moore. Mill assumes, as we saw, that there is no question of calculating the particular consequences of an action such as murder or theft; and we saw too that his position was stupid, because it is not at all clear how an action can fall under just one principle of utility. In Moore and in subsequent academic moralists of England we find it taken to be pretty obvious that "the right action" is the action which produces the best possible consequences (reckoning among consequences the intrinsic values ascribed to certain kinds of act by some "Objectivists"[4]). Now it follows from this that a man does well, subjectively speaking, if he acts for the best in the particular circumstances according to his judgment of the total consequences of this particular action. I say that this follows, not that any philosopher has said precisely that. For discussion of these questions can of course get extremely complicated: e.g. it can be doubted whether "such-and-such is the right action" is a satisfactory formulation, on the grounds that things have to exist to have predicates--so perhaps the best formulation is "I am obliged"; or again, a philosopher may deny that "right" is a "descriptive" term, and then take a roundabout route through linguistic analysis to reach a view which comes to the same thing as "the right action is the one productive of the best consequences" (e.g. the view that you frame your "principles" to effect the end you choose to pursue, the connection between "choice" and "best" being supposedly such that choosing reflectively means that you choose how to act so as to produce the best consequences); further, the roles of what are called "moral principles" and of the "motive of duty" have to be described; the differences between "good" and "morally good" and "right" need to be explored, the special characteristics of "ought" sentences investigated. Such discussions generate an appearance of significant diversity of views where what is really significant is an overall similarity. The overall similarity is made clear if you consider that every one of the best known English academic moral philosophers has put out a philosophy according to which, e.g., it is not possible to hold that it cannot be right to kill the innocent as a means to any end whatsoever and that someone who thinks otherwise is in error. (I have to mention both points; because Mr. Hare, for example, while teaching a philosophy which would encourage a person to judge that killing the innocent would be what he "ought" to choose for overriding purposes, would also teach, I think, that if a man chooses to make avoiding killing the innocent for any purpose his "supreme practical principle," he cannot be impugned for error: that just is his "principle." But with that qualification, I think it can be seen that the point I have mentioned holds good of every single English academic moral philosopher since Sidgwick.) Now this is a significant thing: for it means that all these philosophies are quite incompatible with the Hebrew-Christian ethic.
For it has been characteristic of that ethic to teach that there are certain things forbidden whatever consequences threaten, such as choosing to kill the innocent for any purpose, however good; vicarious punishment; treachery (by which I mean obtaining a man's confidence in a grave matter by promises of trustworthy friendship and then betraying him to his enemies); idolatry; sodomy; adultery; making a false profession of faith. The prohibition of certain things simply in virtue of their description as such-and-such identifiable kinds of action, regardless of any further consequences, is certainly not the whole of the Hebrew-Christian ethic; but it is a noteworthy feature of it; and if every academic philosopher since Sidgwick has written in such a way as to exclude this ethic, it would argue a certain provinciality of mind not to see this incompatibility as the most important fact about these philosophers, and the differences between them as somewhat trifling by comparison.

It is noticeable that none of these philosophers displays any consciousness that there is such an ethic, which he is contradicting: it is pretty well taken for obvious among them all that a prohibition such as that on murder does not operate in face of some consequences. But of course the strictness of the prohibition has as its point that you are not to be tempted by fear or hope of consequences.

If you notice the transition from Mill to Moore, you will suspect that it was made somewhere by someone; Sidgwick will come to mind as a likely name; and you will in fact find it going on, almost casually, in him. He is rather a dull author; and the important things in him occur in asides and footnotes and small bits of argument which are not concerned with his grand classification of the "methods of ethics." A divine law theory of ethics is reduced to an insignificant variety by a footnote telling us that "the best theologians" (God knows whom he meant) tell us that God is to be obeyed in his capacity of a moral being. ἢ φορτικὸς ὁ ἔπαινος one seems to hear Aristotle saying: "Isn't the praise vulgar?"[5] But Sidgwick is vulgar in that kind of way: he thinks, for example, that humility consists in underestimating your own merits--i.e. in a species of untruthfulness; and that the ground for having laws against blasphemy was that it was offensive to believers; and that to go accurately into the virtue of purity is to offend against its canons, a thing he reproves "medieval theologians" for not realizing.

From the point of view of the present enquiry, the most important thing about Sidgwick was his definition of intention. He defines intention in such a way that one must be said to intend any foreseen consequences of one's voluntary action. This definition is obviously incorrect, and I dare say that no one would be found to defend it now. He uses it to put forward an ethical thesis which would now be accepted by many people: the thesis that it does not make any difference to a man's responsibility for something that he foresaw, that he felt no desire for it, either as an end or as a means to an end. Using the language of intention more correctly, and avoiding Sidgwick's faulty conception, we may state the thesis thus: it does not make any difference to a man's responsibility for an effect of his action which he can foresee, that he does not intend it. Now this sounds rather edifying; it is I think quite characteristic of very bad degenerations of thought on such questions that they sound edifying. We can see what it amounts to by considering an example. Let us suppose that a man has a responsibility for the maintenance of some child. Therefore deliberately to withdraw support from it is a bad sort of thing for him to do. It would be bad for him to withdraw its maintenance because he didn't want to maintain it any longer; and also bad for him to withdraw it because by doing so he would, let us say, compel someone else to do something. (We may suppose for the sake of argument that compelling that person to do that thing is in itself quite admirable.) But now he has to choose between doing something disgraceful and going to prison; if he goes to prison, it will follow that he withdraws support from the child. By Sidgwick's doctrine, there is no difference in his responsibility for ceasing to maintain the child, between the case where he does it for its own sake or as a means to some other purpose, and when it happens as a foreseen and unavoidable consequence of his going to prison rather than do something disgraceful. It follows that he must weigh up the relative badness of withdrawing support from the child and of doing the disgraceful thing; and it may easily be that the disgraceful thing is in fact a less vicious action than intentionally withdrawing support from the child would be; if then the fact that withdrawing support from the child is a side effect of his going to prison does not make any difference to his responsibility, this consideration will incline him to do the disgraceful thing; which can still be pretty bad. And of course, once he has started to look at the matter in this light, the only reasonable thing for him to consider will be the consequences and not the intrinsic badness of this or that action. So that, given that he judges reasonably that no great harm will come of it, he can do a much more disgraceful thing than deliberately withdrawing support from the child. And if his calculations turn out in fact wrong, it will appear that he was not responsible for the consequences, because he did not foresee them. For in fact Sidgwick's thesis leads to its being quite impossible to estimate the badness of an action except in the light of expected consequences. But if so, then you must estimate the badness in the light of the consequences you expect; and so it will follow that you can exculpate yourself from the actual consequences of the most disgraceful actions, so long as you can make out a case for not having foreseen them.
Whereas I should contend that a man is responsible for the bad consequences of his bad actions, but gets no credit for the good ones; and contrariwise is not responsible for the bad consequences of good actions.

The denial of any distinction between foreseen and intended consequences, as far as responsibility is concerned, was not made by Sidgwick in developing any one "method of ethics"; he made this important move on behalf of everybody and just on its own account; and I think it plausible to suggest that this move on the part of Sidgwick explains the difference between old-fashioned Utilitarianism and that consequentialism, as I name it, which marks him and every English academic moral philosopher since him. By it, the kind of consideration which would formerly have been regarded as a temptation, the kind of consideration urged upon men by wives and flattering friends, was given a status by moral philosophers in their theories.

It is a necessary feature of consequentialism that it is a shallow philosophy. For there are always borderline cases in ethics. Now if you are either an Aristotelian, or a believer in divine law, you will deal with a borderline case by considering whether doing such-and-such in such-and-such circumstances is, say, murder, or is an act of injustice; and according as you decide it is or it isn't, you judge it to be a thing to do or not. This would be the method of casuistry; and while it may lead you to stretch a point on the circumference, it will not permit you to destroy the center. But if you are a consequentialist, the question "What is it right to do in such-and-such circumstances?" is a stupid one to raise. The casuist raises such a question only to ask "Would it be permissible to do so-and-so?" or "Would it be permissible not to do so-and-so?" Only if it would not be permissible not to do so-and-so could he say "This would be the thing to do."[6] Otherwise, though he may speak against some action, he cannot prescribe any--for in an actual case, the circumstances (beyond the ones imagined) might suggest all sorts of possibilities, and you can't know in advance what the possibilities are going to be. Now the consequentialist has no footing on which to say "This would be permissible, this not"; because by his own hypothesis, it is the consequences that are to decide, and he has no business to pretend that he can lay it down what possible twists a man could give doing this or that; the most he can say is: a man must not bring about this or that; he has no right to say he will, in an actual case, bring about such-and-such unless he does so-and-so. Further, the consequentialist, in order to be imagining borderline cases at all, has of course to assume some sort of law or standard according to which this is a borderline case. Where, then, does he get the standard from? In practice the answer invariably is: from the standards current in his society or his circle. And it has in fact been the mark of all these philosophers that they have been extremely conventional; they have nothing in them by which to revolt against the conventional standards of their sort of people; it is impossible that they should be profound. But the chance that a whole range of conventional standards will be decent is small.--Finally, the point of considering hypothetical situations, perhaps very improbable ones, seems to be to elicit from yourself or someone else a hypothetical decision to do something of a bad kind. I don't doubt this has the effect of predisposing people--who will never get into the situations for which they have made hypothetical choices--to consent to similar bad actions, or to praise and flatter those who do them, so long as their crowd does so too, when the desperate circumstances imagined don't hold at all.

Those who recognize the origins of the notions of "obligation" and of the emphatic, "moral," ought, in the divine law conception of ethics, but who reject the notion of a divine legislator, sometimes look about for the possibility of retaining a law conception without a divine legislator. This search, I think, has some interest in it. Perhaps the first thing that suggests itself is the "norms" of a society. But just as one cannot be impressed by Butler when one reflects what conscience can tell people to do, so, I think, one cannot be impressed by this idea if one reflects what the "norms" of a society can be like. That legislation can be "for oneself" I reject as absurd; whatever you do "for yourself" may be admirable; but is not legislating. Once one sees this, one may say: I have to frame my own rules, and these are the best I can frame, and I shall go by them until I know something better: as a man might say "I shall go by the customs of my ancestors." Whether this leads to good or evil will depend on the content of the rules or of the customs of one's ancestors. If one is lucky it will lead to good. Such an attitude would be hopeful in this at any rate: it seems to have in it some Socratic doubt where, from having to fall back on such expedients, it should be clear that Socratic doubt is good; in fact rather generally it must be good for anyone to think "Perhaps in some way I can't see, I may be on a bad path, perhaps I am hopelessly wrong in some essential way."--The search for "norms" might lead someone to look for laws of nature, as if the universe were a legislator; but in the present day this is not likely to lead to good results; it might lead one to eat the weaker according to the laws of nature, but would hardly lead anyone nowadays to notions of justice; the pre-Socratic feeling about justice as comparable to the balance or harmony which kept things going is very remote to us.

There is another possibility here: "obligation" may be contractual. Just as we look at the law to find out what a man subject to it is required by it to do, so we look at a contract to find out what the man who has made it is required by it to do. Thinkers, admittedly remote from us, might have the idea of a foedus rerum, of the universe not as a legislator but as the embodiment of a contract. Then if you could find out what the contract was, you would learn your obligations under it. Now, you cannot be under a law unless it has been promulgated to you; and the thinkers who believed in "natural divine law" held that it was promulgated to every grown man in his knowledge of good and evil. Similarly you cannot be in a contract without having contracted, i.e. given signs of entering upon the contract. Just possibly, it might be argued that the use of language which one makes in the ordinary conduct of life amounts in some sense to giving the signs of entering into various contracts. If anyone had this theory, we should want to see it worked out. I suspect that it would be largely formal; it might be possible to construct a system embodying the law (whose status might be compared to that of "laws" of logic): "what's sauce for the goose is sauce for the gander," but hardly one descending to such particularities as the prohibition on murder or sodomy. Also, while it is clear that you can be subject to a law that you do not acknowledge and have not thought of as law, it does not seem reasonable to say that you can enter upon a contract without knowing that you are doing so; such ignorance is usually held to be destructive of the nature of a contract.

It might remain to look for "norms" in human virtues: just as man has so many teeth, which is certainly not the average number of teeth men have, but is the number of teeth for the species, so perhaps the species man, regarded not just biologically, but from the point of view of the activity of thought and choice in regard to the various departments of life--powers and faculties and use of things needed--"has" such-and-such virtues: and this "man" with the complete set of virtues is the "norm," as "man" with, e.g., a complete set of teeth is a norm. But in this sense "norm" has ceased to be roughly equivalent to "law." In this sense the notion of a "norm" brings us nearer to an Aristotelian than a law conception of ethics. There is, I think, no harm in that; but if someone looked in this direction to give "norm" a sense, then he ought to recognize what has happened to the notion "norm," which he wanted to mean "law--without bringing God in"--it has ceased to mean "law" at all; and so the notions of "moral obligation," "the moral ought," and "duty" are best put on the Index, if he can manage it.

But meanwhile--is it not clear that there are several concepts that need investigating simply as part of the philosophy of psychology and--as I should recommend--banishing ethics totally from our minds? Namely--to begin with: "action," "intention," "pleasure," "wanting." More will probably turn up if we start with these. Eventually it might be possible to advance to considering the concept "virtue"; with which, I suppose, we should be beginning some sort of a study of ethics.

I will end by describing the advantages of using the word "ought" in a non-emphatic fashion, and not in a special "moral" sense; of discarding the term "wrong" in a "moral" sense, and using such notions as "unjust."

It is possible, if one is allowed to proceed just by giving examples, to distinguish between the intrinsically unjust, and what is unjust given the circumstances. To arrange to get a man judicially punished for something which it can be clearly seen he has not done is intrinsically unjust. This might be done, of course, and often has been done, in all sorts of ways; by suborning false witnesses, by a rule of law by which something is "deemed" to be the case which is admittedly not the case as a matter of fact, and by open insolence on the part of the judges and powerful people when they more or less openly say: "A fig for the fact that you did not do it; we mean to sentence you for it all the same." What is unjust given, e.g., normal circumstances is to deprive people of their ostensible property without legal procedure, not to pay debts, not to keep contracts, and a host of other things of the kind. Now, the circumstances can clearly make a great deal of difference in estimating the justice or injustice of such procedures as these; and these circumstances may sometimes include expected consequences; for example, a man's claim to a bit of property can become a nullity when its seizure and use can avert some obvious disaster: as, e.g., if you could use a machine of his to produce an explosion in which it would be destroyed, but by means of which you could divert a flood or make a gap which a fire could not jump. Now this certainly does not mean that what would ordinarily be an act of injustice, but is not intrinsically unjust, can always be rendered just by a reasonable calculation of better consequences; far from it; but the problems that would be raised in an attempt to draw a boundary line (or boundary area) here are obviously complicated. And while there are certainly some general remarks which ought to be made here, and some boundaries that can be drawn, the decision on particular cases would for the most part be determined κατὰ τὸν ὀρθὸν λόγον "according to what's reasonable."--E.g. that such-and-such a delay of payment of a such-and-such debt to a person so circumstanced, on the part of a person so circumstanced, would or would not be unjust, is really only to be decided "according to what's reasonable"; and for this there can in principle be no canon other than giving a few examples. That is to say, while it is because of a big gap in philosophy that we can give no general account of the concept of virtue and of the concept of justice, but have to proceed using the concepts, only by giving examples; still there is an area where it is not because of any gap, but is in principle the case, that there is no account except by way of examples: and that is where the canon is "what's reasonable": which of course is not a canon.

That is all I wish to say about what is just in some circumstances, unjust in others; and about the way in which expected consequences can play a part in determining what is just. Returning to my example of the intrinsically unjust: if a procedure is one of judicially punishing a man for what he is clearly understood not to have done, there can be absolutely no argument about the description of this as unjust. No circumstances, and no expected consequences, which do not modify the description of the procedure as one of judicially punishing a man for what he is known not to have done can modify the description of it as unjust. Someone who attempted to dispute this would only be pretending not to know what "unjust" means: for this is a paradigm case of injustice.

And here we see the superiority of the term "unjust" over the terms "morally right" and "morally wrong." For in the context of English moral philosophy since Sidgwick it appears legitimate to discuss whether it might be "morally right" in some circumstances to adopt that procedure; but it cannot be argued that the procedure would in any circumstances be just.

Now I am not able to do the philosophy involved--and I think that no one in the present situation of English philosophy can do the philosophy involved--but it is clear that a good man is a just man; and a just man is a man who habitually refuses to commit or participate in any unjust actions for fear of any consequences, or to obtain any advantage, for himself or anyone else. Perhaps no one will disagree. But, it will be said, what is unjust is sometimes determined by expected consequences; and certainly that is true. But there are cases where it is not: now if someone says, "I agree, but all this wants a lot of explaining," then he is right, and, what is more, the situation at present is that we can't do the explaining; we lack the philosophic equipment. But if someone really thinks, in advance,[7] that it is open to question whether such an action as procuring the judicial execution of the innocent should be quite excluded from consideration--I do not want to argue with him; he shows a corrupt mind.

In such cases our moral philosophers seek to impose a dilemma upon us. "If we have a case where the term `unjust' applies purely in virtue of a factual description, can't one raise the question whether one sometimes conceivably ought to do injustice? If `what is unjust' is determined by consideration of whether it is right to do so-and-so in such-and-such circumstances, then the question whether it is `right' to commit injustice can't arise, just because `wrong' has been built into the definition of injustice. But if we have a case where the description `unjust' applies purely in virtue of the facts, without bringing `wrong' in, then the question can arise whether one `ought' perhaps to commit an injustice, whether it might not be `right' to? And of course `ought' and `right' are being used in their moral senses here. Now either you must decide what is `morally right' in the light of certain other `principles,' or you make a `principle' about this and decide that an injustice is never `right'; but even if you do the latter you are going beyond the facts; you are making a decision that you will not, or that it is wrong to, commit injustice. But in either case, if the term `unjust' is determined simply by the facts, it is not the term `unjust' that determines that the term `wrong' applies, but a decision that injustice is wrong, together with the diagnosis of the `factual' description as entailing injustice. But the man who makes an absolute decision that injustice is `wrong' has no footing on which to criticize someone who does not make that decision as judging falsely."

In this argument "wrong" of course is explained as meaning "morally wrong," and all the atmosphere of the term is retained while its substance is guaranteed quite null. Now let us remember that "morally wrong" is the term which is the heir of the notion "illicit," or "what there is an obligation not to do"; which belongs in a divine law theory or ethics. Here it really does add something to the description "unjust" to say there is an obligation not to do it; for what obliges is the divine lawas rules oblige in a game. So if the divine law obliges not to commit injustice by forbidding injustice, it really does add something to the description "unjust" to say there is an obligation not to do it. And it is because "morally wrong" is the heir of this concept, but an heir that is cut off from the family of concepts from which it sprang, that "morally wrong" both goes beyond the mere factual description "unjust" and seems to have no discernible content except a certain compelling force, which I should call purely psychological. And such is the force of the term that philosophers actually suppose that the divine law notion can be dismissed as making no essential difference even if it is heldbecause they think that a "practical principle" running "I ought (i.e. am morally obliged) to obey divine laws" is required for the man who believes in divine laws. But actually this notion of obligation is a notion which only operates in the context of law. And I should be inclined to congratulate the present-day moral philosophers on depriving "morally ought" of its now delusive appearance of content, if only they did not manifest a detestable desire to retain the atmosphere of the term.

It may be possible, if we are resolute, to discard the notion "morally ought," and simply return to the ordinary "ought," which, we ought to notice, is such an extremely frequent term of human language that it is difficult to imagine getting on without it. Now if we do return to it, can't it reasonably be asked whether one might ever need to commit injustice, or whether it won't be the best thing to do? Of course it can. And the answers will be various. One man - a philosopher - may say that since justice is a virtue, and injustice a vice, and virtues and vices are built up by the performances of the action in which they are instanced, an act of injustice will tend to make a man bad; and essentially the flourishing of a man qua man consists in his being good (e.g. in virtues); but for any X to which such terms apply, X needs what makes it flourish, so a man needs, or ought to perform, only virtuous actions; and even if, as it must be admitted may happen, he flourishes less, or not at all, in inessentials, by avoiding injustice, his life is spoiled in essentials by not avoiding injustice - so he still needs to perform only just actions. That is roughly how Plato and Aristotle talk; but it can be seen that philosophically there is a huge gap, at present unfillable as far as we are concerned, which needs to be filled by an account of human nature, human action, the type of characteristic a virtue is, and above all of human "flourishing." And it is the last concept that appears the most doubtful. For it is a bit much to swallow that a man in pain and hunger and poor and friendless is "flourishing," as Aristotle himself admitted. Further, someone might say that one at least needed to stay alive to "flourish." Another man unimpressed by all that will say in a hard case "What we need is such-and-such, which we won't get without doing this (which is unjust) - so this is what we ought to do." Another man, who does not follow the rather elaborate reasoning of the philosophers, simply says "I know it is in any case a disgraceful thing to say that one had better commit this unjust action." The man who believes in divine laws will say perhaps "It is forbidden, and however it looks, it cannot be to anyone's profit to commit injustice"; he like the Greek philosophers can think in terms of "flourishing." If he is a Stoic, he is apt to have a decidedly strained notion of what "flourishing" consists in; if he is a Jew or Christian, he need not have any very distinct notion: the way it will profit him to abstain from injustice is something that he leaves it to God to determine, himself only saying "It can't do me any good to go against his law." (But he also hopes for a great reward in a new life later on, e.g. at the coming of Messiah; but in this he is relying on special promises.)

It is left to modern moral philosophy - the moral philosophy of all the well-known English ethicists since Sidgwick - to construct systems according to which the man who says "We need such-and-such, and will only get it this way" may be a virtuous character: that is to say, it is left open to debate whether such a procedure as the judicial punishment of the innocent may not in some circumstances be the "right" one to adopt; and though the present Oxford moral philosophers would accord a man permission to "make it his principle" not to do such a thing, they teach a philosophy according to which the particular consequences of such an action could "morally" be taken into account by a man who was debating what to do; and if they were such as to conflict with his "ends," it might be a step in his moral education to frame a moral principle under which he "managed" (to use Mr. Nowell-Smith's phrases[8]) to bring the action; or it might be a new "decision of principle," making which was an advance in the formation of his moral thinking (to adopt Mr. Hare's conception), to decide: in such-and-such circumstances one ought to procure the judicial condemnation of the innocent. And that is my complaint.


  
Endnotes

[1] The above two paragraphs are an abstract of a paper “On Brute Facts,” Analysis, 18, 3 (1958).

[2] They did not deny the existence of divine law; but their most characteristic doctrine was that it was given, not to be obeyed, but to show man's incapacity to obey it, even by grace; and this applied not merely to the ramified prescriptions of the Torah, but to the requirements of "natural divine law." Cf. in this connection the decree of Trent against the teaching that Christ was only to be trusted in as mediator, not obeyed as legislator.

[3] As it is absurdly called. Since major premise = premise containing the term which is predicate in the conclusion, it is a solecism to speak of it in connection with practical reasoning.

[4] Oxford Objectivists of course distinguish between "consequences" and "intrinsic values" and so produce a misleading appearance of not being "consequentialists." But they do not hold - and Ross explicitly denies - that the gravity of, e.g., procuring the condemnation of the innocent is such that it cannot be outweighed by, e.g., national interest. Hence their distinction is of no importance.

[5] E. N. 1178b16.

[6] Necessarily a rare case: for the positive precepts, e.g. "Honor your parents," hardly ever prescribe, and seldom even necessitate, any particular action.

[7] If he thinks it in the concrete situation, he is of course merely a normally tempted human being. In discussion when this paper was read, as was perhaps to be expected, this case was produced: a government is required to have an innocent man tried, sentenced and executed under threat of a "hydrogen bomb war." It would seem strange to me to have much hope of so averting a war threatened by such men as made this demand. But the most important thing about the way in which cases like this are invented in discussions is the assumption that only two courses are open: here, compliance and open defiance. No one can say in advance of such a situation what the possibilities are going to be - e.g. that there is none of stalling by a feigned willingness to comply, accompanied by a skillfully arranged escape of the victim.


[8] Ethics, p. 308.

Welcome to the desert of the real


By Slavoj Zizek


The ultimate American paranoiac fantasy is that of an individual living in a small idyllic Californian city, a consumerist paradise, who suddenly starts to suspect that the world he lives in is a fake, a spectacle staged to convince him that he lives in a real world, while all people around him are effectively actors and extras in a gigantic show. The most recent example of this is Peter Weir's The Truman Show (1998), with Jim Carrey playing the small-town clerk who gradually discovers the truth that he is the hero of a 24-hour permanent TV show: his hometown is constructed on a gigantic studio set, with cameras following him permanently. Among its predecessors, it is worth mentioning Philip Dick's Time Out of Joint (1959), in which a hero living a modest daily life in a small idyllic Californian city of the late 50s gradually discovers that the whole town is a fake staged to keep him satisfied... The underlying experience of Time Out of Joint and of The Truman Show is that the late capitalist consumerist Californian paradise is, in its very hyper-reality, in a way IRREAL, substanceless, deprived of material inertia.


So it is not only that Hollywood stages a semblance of real life deprived of the weight and inertia of materiality - in the late capitalist consumerist society, "real social life" itself somehow acquires the features of a staged fake, with our neighbors behaving in "real" life as stage actors and extras... Again, the ultimate truth of the capitalist utilitarian de-spiritualized universe is the de-materialization of "real life" itself, its reversal into a spectral show. Christopher Isherwood gave expression to this unreality of American daily life, exemplified in the motel room: "American motels are unreal!/.../ they are deliberately designed to be unreal. /.../ The Europeans hate us because we've retired to live inside our advertisements, like hermits going into caves to contemplate." Peter Sloterdijk's notion of the "sphere" is here literally realized, as the gigantic metal sphere that envelops and isolates the entire city. Years ago, a series of science-fiction films like Zardoz or Logan's Run forecast today's postmodern predicament by extending this fantasy to the community itself: the isolated group living an aseptic life in a secluded area longs for the experience of the real world of material decay.


The Wachowski brothers' hit Matrix (1999) brought this logic to its climax: the material reality we all experience and see around us is a virtual one, generated and coordinated by a gigantic mega-computer to which we are all attached; when the hero (played by Keanu Reeves) awakens into the "real reality," he sees a desolate landscape littered with burned ruins - what remained of Chicago after a global war. The resistance leader Morpheus utters the ironic greeting: "Welcome to the desert of the real." Was it not something of a similar order that took place in New York on September 11? Its citizens were introduced to the "desert of the real" - to us, corrupted by Hollywood, the landscape and the shots we saw of the collapsing towers could not but remind us of the most breathtaking scenes in big catastrophe productions.

When we hear how the bombings were a totally unexpected shock, how the unimaginable Impossible happened, one should recall the other defining catastrophe from the beginning of the XXth century, that of Titanic: it was also a shock, but the space for it was already prepared in ideological fantasizing, since Titanic was the symbol of the might of the XIXth century industrial civilization. Does the same not hold also for these bombings? Not only were the media bombarding us all the time with the talk about the terrorist threat; this threat was also obviously libidinally invested - just recall the series of movies from Escape From New York to Independence Day. The unthinkable which happened was thus the object of fantasy: in a way, America got what it fantasized about, and this was the greatest surprise.

It is precisely now, when we are dealing with the raw Real of a catastrophe, that we should bear in mind the ideological and fantasmatic coordinates which determine its perception. If there is any symbolism in the collapse of the WTC towers, it is not so much the old-fashioned notion of the "center of financial capitalism," but, rather, the notion that the two WTC towers stood for the center of VIRTUAL capitalism, of financial speculations disconnected from the sphere of material production. The shattering impact of the bombings can be accounted for only against the background of the borderline which today separates the digitalized First World from the Third World "desert of the Real." It is the awareness that we live in an insulated artificial universe which generates the notion that some ominous agent is threatening us all the time with total destruction.

Is, consequently, Osama Bin Laden, the suspected mastermind behind the bombings, not the real-life counterpart of Ernst Stavro Blofeld, the master-criminal in most of the James Bond films, involved in the acts of global destruction? What one should recall here is that the only place in Hollywood films where we see the production process in all its intensity is when James Bond penetrates the master-criminal's secret domain and locates there the site of intense labor (distilling and packaging the drugs, constructing a rocket that will destroy New York...). When the master-criminal, after capturing Bond, usually takes him on a tour of his illegal factory, is this not the closest Hollywood comes to the socialist-realist proud presentation of the production in a factory? And the function of Bond's intervention, of course, is to explode in fireworks this site of production, allowing us to return to the daily semblance of our existence in a world with the "disappearing working class." Is it not that, in the exploding WTC towers, this violence directed at the threatening Outside turned back at us?

The safe Sphere in which Americans live is experienced as under threat from the Outside of terrorist attackers who are ruthlessly self-sacrificing AND cowards, cunningly intelligent AND primitive barbarians. Whenever we encounter such a purely evil Outside, we should gather the courage to endorse the Hegelian lesson: in this pure Outside, we should recognize the distilled version of our own essence. For the last five centuries, the (relative) prosperity and peace of the "civilized" West was bought by the export of ruthless violence and destruction into the "barbarian" Outside: the long story from the conquest of America to the slaughter in Congo. Cruel and indifferent as it may sound, we should also, now more than ever, bear in mind that the actual effect of these bombings is much more symbolic than real. The US just got the taste of what goes on around the world on a daily basis, from Sarajevo to Grozny, from Rwanda and Congo to Sierra Leone. If one adds to the situation in New York snipers and gang rapes, one gets an idea about what Sarajevo was a decade ago.

It was when we watched on the TV screen the two WTC towers collapsing that it became possible to experience the falsity of the "reality TV shows": even if these shows are "for real," people still act in them - they simply play themselves. The standard disclaimer in a novel ("characters in this text are a fiction, every resemblance with the real life characters is purely contingent") holds also for the participants of the reality soaps: what we see there are fictional characters, even if they play themselves for the real. Of course, the "return to the Real" can be given different twists: Rightist commentators like George Will also immediately proclaimed the end of the American "holiday from history" - the impact of reality shattering the isolated tower of the liberal tolerant attitude and the Cultural Studies focus on textuality. Now, we are forced to strike back, to deal with real enemies in the real world... However, WHOM to strike? Whatever the response, it will never hit the RIGHT target, bringing us full satisfaction. The ridicule of America attacking Afghanistan cannot but strike the eye: if the greatest power in the world destroys one of the poorest countries, in which peasants barely survive on barren hills, will this not be the ultimate case of the impotent acting out?

There is a partial truth in the notion of the "clash of civilizations" attested here - witness the surprise of the average American: "How is it possible that these people have such a disregard for their own lives?" Is not the obverse of this surprise the rather sad fact that we, in the First World countries, find it more and more difficult even to imagine a public or universal Cause for which one would be ready to sacrifice one's life? When, after the bombings, even the Taliban foreign minister said that he could "feel the pain" of the American children, did he not thereby confirm the hegemonic ideological role of Bill Clinton's trademark phrase?

Furthermore, the notion of America as a safe haven, of course, is also a fantasy: when a New Yorker commented on how, after the bombings, one could no longer walk safely on the city's streets, the irony of it was that, well before the bombings, the streets of New York were well-known for the dangers of being attacked or, at least, mugged - if anything, the bombings gave rise to a new sense of solidarity, with scenes of young African-Americans helping an old Jewish gentleman to cross the street, scenes unimaginable a couple of days before.

Now, in the days immediately following the bombings, it is as if we dwell in the unique time between a traumatic event and its symbolic impact, like in those brief moments after we are deeply cut, and before the full extent of the pain strikes us - it is open how the events will be symbolized, what their symbolic efficiency will be, what acts they will be evoked to justify. Even here, in these moments of utmost tension, this link is not automatic but contingent. There are already the first bad omens: the day after the bombing, I got a message from a journal which was just about to publish a longer text of mine on Lenin, telling me that they had decided to postpone its publication - they considered it inopportune to publish a text on Lenin immediately after the bombing. Does this not point towards the ominous ideological rearticulations which will follow?

We don't yet know what consequences in economy, ideology, politics, and war this event will have, but one thing is sure: the US, which till now perceived itself as an island exempted from this kind of violence, witnessing this kind of thing only from the safe distance of the TV screen, is now directly involved. So the alternative is: will Americans decide to fortify their "sphere" further, or to risk stepping out of it? Either America will persist in, and even strengthen, the attitude of "Why should this happen to us? Things like this don't happen HERE!", leading to more aggressivity towards the threatening Outside - in short, to a paranoiac acting out. Or America will finally risk stepping through the fantasmatic screen separating it from the Outside World, accepting its arrival into the Real world, making the long-overdue move from "A thing like this should not happen HERE!" to "A thing like this should not happen ANYWHERE!". America's "holiday from history" was a fake: America's peace was bought by the catastrophes going on elsewhere. Therein resides the true lesson of the bombings: the only way to ensure that it will not happen HERE again is to prevent it from going on ANYWHERE ELSE.

Cowboy Nation


Robert Kagan, author of the recent book The Return of History and the End of Dreams (Knopf, 2008), writes a monthly column on world affairs for the Washington Post and is a contributing editor at both the Weekly Standard and the New Republic.

(B.A., Yale University; M.P.P., John F. Kennedy School of Government, Harvard University; Ph.D., American University)



This article was published on October 17, 2006 in THE NEW REPUBLIC ONLINE.

These days, we are having a national debate over the direction of foreign policy. Beyond the obvious difficulties in Iraq and Afghanistan, there is a broader sense that our nation has gone astray. We have become too militaristic, too idealistic, too arrogant; we have become an "empire." Much of the world views us as dangerous. In response, many call for the United States to return to its foreign policy traditions, as if that would provide the answer.

What exactly are those traditions? One tradition is this kind of debate, which we've been having ever since the birth of the nation, when Patrick Henry accused supporters of the Constitution of conspiring to turn the young republic into a "great and mighty empire." Today, we are mightier than Henry could have ever imagined. Yet we prefer to see ourselves in modest terms--as a reluctant hegemon, a status quo power that seeks only ordered stability in the international arena. James Schlesinger captured this perspective several years ago, when he said that Americans have "been thrust into a position of lonely preeminence." The United States, he added, is "a most unusual, not to say odd, country to serve as international leader." If, at times, we venture forth and embroil ourselves in the affairs of others, it is either because we have been attacked or because of the emergence of some dangerous revolutionary force--German Nazism, Japanese imperialism, Soviet communism, radical Islamism. Americans do not choose war; war is thrust upon us. As a recent presidential candidate put it, "The United States of America never goes to war because we want to; we only go to war because we have to. That is the standard of our nation."


But that self-image, with its yearning for some imagined lost innocence, is based on myth. Far from the modest republic that history books often portray, the early United States was an expansionist power from the moment the first pilgrim set foot on the continent; and it did not stop expanding--territorially, commercially, culturally, and geopolitically--over the next four centuries. The United States has never been a status quo power; it has always been a revolutionary one, consistently expanding its participation and influence in the world in ever-widening arcs. The impulse to involve ourselves in the affairs of others is neither a modern phenomenon nor a deviation from the American spirit. It is embedded in the American DNA.

Long before the country's founding, British colonists were busy driving the Native American population off millions of acres of land and almost out of existence. From the 1740s through the 1820s, and then in another burst in the 1840s, Americans expanded relentlessly westward from the Alleghenies to the Ohio Valley and on past the Rocky Mountains to the Pacific, southward into Mexico and Florida, and northward toward Canada--eventually pushing off the continent not only Indians, but the great empires of France, Spain, and Russia as well. (The United Kingdom alone barely managed to defend its foothold in North America.) This often violent territorial expansion was directed not by redneck "Jacksonians" but by eastern gentlemen expansionists like George Washington, Thomas Jefferson, and John Quincy Adams.

It would have been extraordinary had early Americans amassed all this territory and power without really wishing for it. But they did wish for it. With 20 years of peace, Washington predicted in his valedictory, the United States would acquire the power to "bid defiance, in a just cause, to any earthly power whatsoever." Jefferson foresaw a vast "empire of liberty" spreading west, north, and south across the continent. Hamilton believed the United States would, "erelong, assume an attitude correspondent with its great destinies--majestic, efficient, and operative of great things. A noble career lies before it." John Quincy Adams considered the United States "destined by God and nature to be the most populous and powerful people ever combined under one social compact." And Americans' aspirations only grew in intensity over the decades, as national power and influence increased. In the 1850s, William Seward predicted that the United States would become the world's dominant power, "the greatest of existing states, greater than any that has ever existed." A century later, Dean Acheson, present at the creation of a U.S.-dominated world order, would describe the United States as "the locomotive at the head of mankind" and the rest of the world as "the caboose." More recently, Bill Clinton labeled the United States "the world's indispensable nation."

From the beginning, others have seen Americans not as a people who sought ordered stability but as persistent disturbers of the status quo. As the ancient Corinthians said of the Athenians, they were "incapable of either living a quiet life themselves or of allowing anyone else to do so." Nineteenth-century Americans were, in the words of French diplomats, "numerous," "warlike," and an "enemy to be feared." In 1817, John Quincy Adams reported from London, "The universal feeling of Europe in witnessing the gigantic growth of our population and power is that we shall, if united, become a very dangerous member of the society of nations." The United States was dangerous not only because it was expansionist, but also because its liberal republicanism threatened the established conservative order of that era. Austria's Prince Metternich rightly feared what would happen to the "moral force" of Europe's conservative monarchies when "this flood of evil doctrines" was married to the military, economic, and political power Americans seemed destined to acquire.

What Metternich understood, and what others would learn, was that the United States was a nation with almost boundless ambition and a potent sense of national honor, for which it was willing to go to war. It exhibited the kind of spiritedness, and even fierceness, in defense of home, hearth, and belief that the ancient Greeks called thumos. It was an uncommonly impatient nation, often dissatisfied with the way things were, almost always convinced of the possibility of beneficial change and of its own role as a catalyst. It was also a nation with a strong martial tradition. Eighteenth- and nineteenth-century Americans loved peace, but they also believed in the potentially salutary effects of war. "No man in the nation desires peace more than I," Henry Clay declared before the war with Great Britain in 1812. "But I prefer the troubled ocean of war, demanded by the honor and independence of the country, with all its calamities, and desolations, to the tranquil, putrescent pool of ignominious peace." Decades later, Oliver Wendell Holmes Jr., the famed jurist who had fought--and been wounded three times--in the Civil War, observed, "War, when you are at it, is horrible and dull. It is only when time has passed that you see that its message was divine."

Modern Americans don't talk this way anymore, but it is not obvious that we are very different in our attitudes toward war. Our martial tradition has remained remarkably durable, especially when compared with most other democracies in the post-World War II era. From 1989 to 2003, a 14-year period spanning three very different presidencies, the United States deployed large numbers of combat troops or engaged in extended campaigns of aerial bombing and missile attacks on nine different occasions: in Panama (1989), Somalia (1992), Haiti (1994), Bosnia (1995-1996), Kosovo (1999), Afghanistan (2001), and Iraq (1991, 1998, 2003). That is an average of one significant military intervention every 19 months--a greater frequency than at any time in our history. Americans stand almost alone in believing in the utility and even necessity of war as a means of obtaining justice. Surveys commissioned by the German Marshall Fund consistently show that 80 percent of Americans agree with the proposition that "[u]nder some conditions, war is necessary to obtain justice." In France, Germany, Italy, and Spain, less than one-third of the population agrees.

How do we reconcile the gap between our preferred self-image and this historical reality? With difficulty. We are, and have always been, uncomfortable with our power, our ambition, and our willingness to use force to achieve our objectives. What the historian Gordon Wood has called our deeply rooted "republicanism" has always made us suspicious of power, even our own. Our enlightenment liberalism, with its belief in universal rights and self-determination, makes us uncomfortable using our influence, even in what we regard as a good cause, to deprive others of their freedom of action. Our religious conscience makes us look disapprovingly on ambition--both personal and national. Our modern democratic worldview conceives of "honor" as something antiquated and undemocratic. These misgivings rarely stop us from pursuing our goals, any more than our suspicion of wealth stops us from trying to accumulate it. But they do make us reluctant to see ourselves as others see us. Instead, we construct more comforting narratives of our past. Or we create some idealized foreign policy against which to measure our present behavior. We hope that we can either return to the policies of that imagined past or approximate some imagined ideal to recapture our innocence. It is easier than facing the hard truth: America's expansiveness, intrusiveness, and tendency toward political, economic, and strategic dominance are not some aberration from our true nature. That is our nature.

Why are we this way? In many respects, we share characteristics common to all peoples through history. Like others, Americans have sought power to achieve prosperity, independence, and security as well as less tangible goals. As American power increased, so, too, did American ambitions, both noble and venal. Growing power changes nations, just as it changes people. It changes their perceptions of the world and their place in it. It increases their sense of entitlement and reduces their tolerance for obstacles that stand in their way. Power also increases ambition. When Americans acquired the unimaginably vast territory of Louisiana at the dawn of the nineteenth century, doubling the size of their young nation with lands that would take decades to settle, they did not rest content but immediately looked for still more territory beyond their new borders. As one foreign diplomat observed, "Since the Americans have acquired Louisiana, they appear unable to bear any barriers round them."

But, in addition to the common human tendency to seek greater power and influence over one's surroundings, Americans have been driven outward into the world by something else: the potent, revolutionary ideology of liberalism that they adopted at the nation's birth. Indeed, it is probably liberalism, more than any other factor, that has made the United States so energetic, expansive, and intrusive over the course of its history.

Liberalism fueled the prodigious territorial and commercial expansion in the eighteenth and nineteenth centuries that made the United States, first, the dominant power in North America and, then, a world power. It did so by elevating the rights of the individual over the state--by declaring that all people had a right to life, liberty, property, and the pursuit of happiness and by insisting it was the government's primary job to safeguard those rights. American political leaders had little choice but to permit, and sometimes support, territorial and commercial claims made by their citizens, even when those claims encroached on the lands or waters of foreigners. Other eighteenth- and nineteenth-century governments, ruled by absolute monarchs, permitted national expansion when it served personal or dynastic interests--and, like Napoleon in the New World, blocked it when it did not. When the king of England tried to curtail the territorial and commercial expansionism of his Anglo-American subjects, they rebelled and established a government that would not hold them back. In this respect, the most important foreign policy statement in U.S. history was not George Washington's farewell address or the Monroe Doctrine but the Declaration of Independence and the enlightenment ideals it placed at the heart of American nationhood. Putting those ideals into practice was a radical new departure in government, and it inevitably produced a new kind of foreign policy.

Liberalism not only drove territorial and commercial expansion; it also provided an overarching ideological justification for such expansion. By expanding territorially, commercially, politically, and culturally, Americans believed that they were bringing both modern civilization and the "blessings of liberty" to whichever nations they touched in their search for opportunity. As Jefferson told one Indian leader: "We desire above all things, brother, to instruct you in whatever we know ourselves. We wish to learn you all our arts and to make you wise and wealthy." In one form or another, Americans have been making that offer of instruction to peoples around the world ever since.

Americans, from the beginning, measured the world exclusively according to the assumptions of liberalism. These included, above all, a belief in what the Declaration of Independence called the "self-evident" universality of certain basic truths--not only that all men were created equal and endowed by God with inalienable rights, but also that the only legitimate and just governments were those that derived their powers "from the consent of the governed." According to the Declaration, "whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it." Such a worldview does not admit the possibility of alternative truths. Americans, over the centuries, accepted the existence of cultural distinctions that influenced other peoples to rule themselves differently. But they never really accepted the legitimacy of despotic governments, no matter how deeply rooted in culture. As a result, they viewed them as transitory. And so, wherever Americans looked in the world, they saw the possibility and the desirability of change.

The notion of progress is a central tenet of liberalism. More than any other people, Americans have taken a progressive view of history, evaluating other nations according to where they stood on the continuum of progress. The Russians, Theodore Roosevelt believed, were "below the Germans just as the Germans are below us ... [but] we are all treading the same path, some faster, some slower." If Roosevelt's language sounds antiquated, our modern perspective is scarcely different. Although we may disagree among ourselves about the pace of progress, almost all Americans believe that it is both inevitable and desirable. We generally agree on the need to assist other nations in their political and economic development. But development toward what, if not toward the liberal democratic ideal that defines our nationalism? The "great struggle of the epoch," Madison declared in the 1820s, is "between liberty and despotism." Because the rights of man were written "by the hand of the divinity itself," as Hamilton put it, that struggle could ultimately have only one outcome.

It was a short step from that conviction to the belief that the interests of the United States were practically indistinguishable from the interests of the world. "The cause of America is in a great measure the cause of all mankind," Thomas Paine argued at the time of the revolution. Herman Melville would later write that, for Americans, "national selfishness is unbounded philanthropy; for we cannot do a good to America but we give alms to the world." It was another short step to the belief that the United States had a special, even unique, role to play in serving as a catalyst for the evolution of mankind. "The rights asserted by our forefathers were not peculiar to themselves," Seward declared, "they were the common rights of mankind." Therefore, he said, the United States had a duty "to renovate the condition of mankind" and lead the way to "the universal restoration of power to the governed" everywhere in the world. Decades earlier, John Quincy Adams had noted with pride that the United States was the source of ideas that made "the throne of every European monarch rock under him as with the throes of an earthquake." Praising the American Revolution, he exhorted "every individual among the sceptered lords of mankind: 'Go thou and do likewise!'"

A Russian minister, appalled at this "appeal to the nations of Europe to rise against their Governments," noted the hypocrisy of Adams's message, asking, "How about your two million black slaves?" Indeed. The same United States that called for global revolution on behalf of freedom was, throughout its first eight decades, also the world's great defender of racial despotism. The slaveholding South was itself a brutal tyranny, almost totalitarian in its efforts to control the speech and personal behavior of whites as well as blacks. Much of the U.S. territorial expansion in the nineteenth century--including the Mexican War, which garnered today's American Southwest and California--was driven by slaveholders, insisting on new lands to which they could spread their despotic system.

In the end, the violent abolition of slavery in the United States was a defining moment in the country's foreign policy: It strengthened the American tendency toward liberal moralism in foreign affairs. The Northern struggle against slavery, culminating in the Civil War, was America's first moral crusade. The military defeat of the Southern slaveholders was America's first war of ideological conquest. And what followed was America's first attempt at occupation and democratic nation-building (with the same mixed results as later efforts). The effect of the whole struggle was to intensify the American dedication to the universality of rights and to reaffirm the Declaration of Independence, rather than the Constitution with its tacit acceptance of slavery, as the central document of American nationhood. The Civil War fixed in the American mind, or at least in the Northern mind, the idea of the just war--a battle, fought for moral reasons, whose objectives can be achieved only through military action.

Such thinking led to the Spanish-American War of 1898. One of the most popular wars in U.S. history, it enjoyed the support of both political parties, of William Jennings Bryan and Andrew Carnegie, of eastern Brahmin Republicans like Henry Cabot Lodge, radical prairie populists, and labor leaders. Although one would not know it from reading most histories today, the war was motivated primarily by humanitarian concerns. Civil strife in Cuba and the brutal policies of the Spanish government--in particular the herding of the civilian population into "reconcentration" camps--had caused some 300,000 deaths, one-fifth of Cuba's population. Most of the victims were women, children, and the elderly. Lodge and many others argued that the United States had a responsibility to defend the Cuban people against Spanish oppression precisely because it had the power to do so. "Here we stand motionless, a great and powerful country not six hours away from these scenes of useless bloodshed and destruction," he said, imploring that, if the United States "stands for humanity and civilization, we should exercise every influence of our great country to put a stop to that war which is now raging in Cuba and give to that island once more peace, liberty, and independence." The overwhelming majority of the nation agreed. The U.S. intervention put an end to that suffering and saved untold thousands of lives. When John Hay called it a "splendid little war," it was not because of the smashing military victory--Hay was no militarist. It was the lofty purposes and accomplishments of the war that were splendid.

It was also true that the United States had self-interested reasons for going to war: commercial interests in Cuba, as well as the desire to remove Spain from the hemisphere and establish our preeminence in the region. Most of Europe condemned the United States as selfish and aggressive, failing to credit it with humanitarian impulses. Moreover, the war produced some unintended and, for many who idealistically supported it, disillusioning consequences. It led to the acquisition of the Philippines and a most unsplendid war against independence-minded Filipinos. It also produced a well-intentioned, but ultimately disappointing, multiyear occupation of Cuba that would haunt Americans for another century. And it reignited an old debate over the course of U.S. foreign policy--similar to the one that consumes us today.

Now, as then, the projection of U.S. power for liberal purposes faces its share of domestic criticism--warnings against arrogance, hubris, excessive idealism, and "imperialism." Throughout the eighteenth and nineteenth centuries, conservatives in the republican tradition of Patrick Henry worried about the effect at home of expansive policies abroad. They predicted, correctly, that a big foreign policy generally meant a big federal government, which--in their eyes--meant impingements on the rights and freedoms of the individual. The conservatives of the slaveholding South were the great realists of the nineteenth century. They opposed moralism, rightly fearing it would be turned against the institution of slavery. As Jefferson Davis put it, "We are not engaged in a Quixotic fight for the rights of man. Our struggle is for inherited rights. ... We are conservative." At the end of the century, when Americans were enthusiastically pushing across the Pacific, critics like Grover Cleveland's long-forgotten secretary of state, Walter Q. Gresham, warned that "[e]very nation, and especially every strong nation, must sometimes be conscious of an impulse to rush into difficulties that do not concern it, except in a highly imaginary way. To restrain the indulgence of such a propensity is not only the part of wisdom, but a duty we owe to the world as an example of the strength, the moderation, and the beneficence of popular government."

But, just as progressivism and big government have generally triumphed in domestic affairs, so, too, has the liberal approach to the world beyond our shores. Henry failed to defeat the Constitution. Southern realism lost to Northern idealism. The critics of liberal foreign policy--whether conservative, realist, or leftist--have rarely managed to steer the United States on a different course.

The result has been some accomplishments of great historical importance--the defeat of German Nazism, Japanese imperialism, and Soviet communism--as well as some notable failures and disappointments. But it was not as if the successes were the product of a good America and the failures the product of a bad America. They were all the product of the same America. The achievements, as well as the disappointments, derived from the very qualities that often make us queasy: our willingness to accumulate and use power; our ambition and sense of honor; our spiritedness in defense of both our interests and our principles; our dissatisfaction with the status quo; our belief in the possibility of change. And, throughout, whether succeeding or failing, we have remained a "dangerous" nation in many senses--dangerous to tyrannies, dangerous to those who do not want our particular brand of liberalism, dangerous to those who fear our martial spirit and our thumos, dangerous to those, including Americans, who would prefer an international order not built around a dominant and often domineering United States.

Whether a different kind of international system or a different kind of America would be preferable is a debate worth having. But let us have this debate about our future without illusions about our past.

On Political Judgment

By Sir Isaiah Berlin (1909-1997)
Initially published in The New York Review of Books, October 3, 1996

What is it to have good judgment in politics? What is it to be politically wise, or gifted, to be a political genius, or even to be no more than politically competent, to know how to get things done? Perhaps one way of looking for the answer is by considering what we are saying when we denounce statesmen, or pity them, for not possessing these qualities. We sometimes complain that they are blinded by prejudice or passion, but blinded to what? We say that they don’t understand the times they live in, or that they are resisting something called “the logic of the facts,” or are “trying to put the clock back,” or that “history is against them,” or that they are ignorant or incapable of learning, or else unpractical idealists, visionaries, Utopians, hypnotized by the dream of some fabulous past or some unrealizable future.

All such expressions and metaphors seem to presuppose that there is something to know (of which the critic has some notion) which these unfortunate persons have somehow not managed to grasp, whether it is the inexorable movement of some cosmic clock which no man can alter, or some pattern of things in time or space, or in some more mysterious medium—”the realm of the Spirit” or “ultimate reality”—which one must first understand if one is to avoid frustration.

But what is this knowledge? Is it knowledge of a science? Are there really laws to be discovered, rules to be learned? Can statesmen be taught something called political science—the science of the relationships of human beings to each other and to their environment—which consists, like other sciences, of systems of verified hypotheses, organized under laws, that enable one, by the use of further experiment and observation, to discover other facts, and to verify new hypotheses?

Certainly that was the notion, either concealed or open, of both Hobbes and Spinoza, each in his own fashion, and of their followers—a notion that grew more and more powerful in the eighteenth and nineteenth centuries, when the natural sciences acquired enormous prestige, and attempts were made to maintain that anything not capable of being reduced to a natural science could not properly be called knowledge at all. The more ambitious and extreme scientific determinists, such as Holbach, Helvétius, and La Mettrie, used to think that, given enough knowledge of universal human nature and of the laws of social behavior, and enough knowledge of the state of given human beings at a given time, one could scientifically calculate how these human beings, or at any rate large groups of them—entire societies or classes—would behave under some other given set of circumstances. It was argued, and this seemed reasonable enough at the time, that just as knowledge of mechanics was indispensable to engineers or architects or inventors, so knowledge of social mechanics was necessary for anyone—statesmen, for example—who wished to get large bodies of men to do this or that. For without it, what had they to rely on but casual impressions, half-remembered, unverified recollections, guesswork, mere rules of thumb, unscientific hypotheses? One must, no doubt, make do with these if one has no proper scientific method at one’s disposal; but one should realize that this is no better than unorganized conjectures about nature made by primitive peoples, or by the inhabitants of Europe during the Dark Ages—grotesquely inadequate tools superseded by the earliest advances of true science. And there are those (in institutions of higher learning) who have thought this, and think this still, in our own times.

Less ambitious thinkers, influenced by the fathers of the life sciences at the turn of the eighteenth century, conceived of the science of society as being rather more like a kind of social anatomy. To be a good doctor it is necessary, but not sufficient, to know anatomical theory. For one must also know how to apply it to specific cases—to particular patients, suffering from particular forms of a particular disease. This cannot be wholly learned from books or professors; it requires considerable personal experience and natural aptitude. Nevertheless, neither experience nor natural gifts can ever be a complete substitute for knowledge of a developed science—pathology, say, or anatomy. To know only the theory might not be enough to enable one to heal the sick, but to be ignorant of it is fatal. By analogy with medicine, such faults as bad political judgment, lack of realism, Utopianism, attempts to arrest progress, and so on were duly conceived as deriving from ignorance or defiance of the laws of social development—laws of social biology (which conceives of society as an organism rather than a mechanism), or of the corresponding science of politics.

The scientifically inclined philosophers of the eighteenth century believed passionately in just such laws, and tried to account for human behavior wholly in terms of the identifiable effects of education, of natural environment, and of the calculable results of the play of appetites and passions. However, this approach turned out to explain so small a part of the actual behavior of human beings at times when it seemed most in need of explanation—during and after the Jacobin Terror—and failed so conspicuously to predict or analyze such major phenomena as the growth and violence of nationalism, the uniqueness of, and the conflicts between, various cultures, and the events leading to wars and revolutions, and displayed so little understanding of what may broadly be called spiritual or emotional life (whether of individuals or of whole peoples), and the unpredictable play of irrational factors, that new hypotheses inevitably entered the field, each claiming to overthrow all the others, and to be the last and definitive word on the subject.

Messianic preachers—prophets—such as Saint-Simon, Fourier, Comte, dogmatic thinkers such as Hegel, Marx, Spengler, historically-minded theological thinkers from Bossuet to Toynbee, the popularizers of Darwin, the adaptors of this or that dominant school of sociology or psychology—all have attempted to step into the breach caused by the failure of the eighteenth-century philosophers to construct a proper, successful science of society. Each of these new nineteenth-century apostles laid some claim to exclusive possession of the truth. What they all have in common is the belief that there is one great universal pattern, and one unique method of apprehending it, knowledge of which would have saved statesmen many an error, and humanity many a hideous tragedy.

It was not exactly denied that such statesmen as Colbert, or Richelieu, or Washington, or Pitt, or Bismarck, seem to have done well enough without this knowledge, just as bridges had obviously been built before the principles of mechanics were discovered, and diseases had been cured by men who appeared to know no anatomy. It was admitted that much could be—and had been—achieved by the inspired guesses of individual men of genius, and by their instinctive skills; but, so it was argued, particularly toward the end of the nineteenth century, there was no need to look to so precarious a source of light. The principles upon which these great men acted, even though they may not have known it, so some optimistic sociologists have maintained, can be extracted and reduced to an accurate science, very much as the principles of biology or mechanics must once have been established.

According to this view, political judgment need never again be a matter of instinct and flair and sudden illuminations and strokes of unanalyzable genius; rather it should henceforth be built upon the foundations of indubitable knowledge. Opinions might differ about whether this new knowledge was empirical or a priori, whether it derived its authority from the methods of natural science or from metaphysics; but in either form it amounted to what Herbert Spencer called the sciences of social statics and social dynamics. Those who applied it were social engineers; the mysterious art of government was to be mysterious no longer: it could be taught, learned, applied; it was a matter of professional competence and specialization.

This thesis would be more plausible if the newly discovered laws did not, as a rule, turn out either to be ancient truisms—such as that most revolutions are followed by reaction (which amounts to not much more than the virtual tautology that most movements come to an end at some time, and are then followed by something else, often in some opposite direction)—or else to be constantly upset, and violently upset, by events, leaving the theoretical systems in ruins. Perhaps nobody did so much to undermine confidence in a dependable science of human relations as the great tyrants of our day—Lenin, Stalin, Hitler. If belief in the laws of history and “scientific socialism” really did help Lenin or Stalin, it helped them not so much as a form of knowledge, but in the way that a fanatical faith in almost any dogma can be of help to determined men, by justifying ruthless acts and suppressing doubts and scruples.

Between them, Stalin and Hitler left scarcely stone upon stone of the once splendid edifice of the inexorable laws of history. Hitler, after all, almost succeeded in his professed aim of undoing the results of the French Revolution. The Russian Revolution violently twisted the whole of Western society out of what, until that time, seemed to most observers a fairly orderly course—twisted it into an irregular movement, followed by a dramatic collapse, foretold as little by Marxist as by any other “scientific” prophets. It is easy enough to arrange the past in a symmetrical way—Voltaire’s famous cynical epigram to the effect that history is so many tricks played upon the dead is not as superficial as it seems.(1) A true science, though, must be able not merely to rearrange the past but to predict the future. To classify facts, to order them in neat patterns, is not quite yet a science.

We are told that the great earthquake that destroyed Lisbon in the mid-eighteenth century shook Voltaire’s faith in inevitable human progress. Similarly the great destructive political upheavals of our own time have instilled terrible doubts about the feasibility of a reliable science of human behavior for the guidance of men of action—be they industrialists or social-welfare officers or statesmen. The subject evidently had to be re-examined afresh: the assumption that an exact science of social behavior was merely a matter of time and ingenuity no longer seemed quite so self-evident. What method should this science pursue? Clearly not deductive: there existed no accepted axioms from which the whole of human behavior could be deduced by means of agreed logical rules. Not even the most dogmatic theologian would claim as much as that. Inductive, then? Laws based on the survey of a large collection of empirical data? Or on hypothetical-deductive methods not very easily applicable to the complexities of human affairs?

In theory, no doubt, such laws should have been discoverable, but in practice this looked less promising. If I am a statesman faced with an agonizing choice of possible courses of action in a critical situation, will I really find it useful—even if I can afford to wait that long for the answer—to employ a team of specialists in political science to assemble for me from past history all kinds of cases analogous to my situation, from which I or they must then abstract what these cases have in common, deriving from this exercise relevant laws of human behavior? The instances for such induction—or for the construction of hypotheses intended to systematize historical knowledge—would, because human experience is so various, not be numerous; and the dismissal even from these instances of all that is unique to each, and the retention only of that which is common, would produce a very thin, generalized residue, and one far too unspecific to be of much help in a practical dilemma.

Obviously what matters is to understand a particular situation in its full uniqueness, the particular men and events and dangers, the particular hopes and fears which are actively at work in a particular place at a particular time: in Paris in 1791, in Petrograd in 1917, in Budapest in 1956, in Prague in 1968, or in Moscow in 1991. We need not attend systematically to whatever it is that these have in common with other events and other situations, which may resemble them in some respects, but may happen to lack exactly that which makes all the difference at a particular moment, in a particular place. If I am driving a car in desperate haste, and come to a rickety-looking bridge, and must make up my mind whether it will bear my weight, some knowledge of the principles of engineering would no doubt be useful. But even so I can scarcely afford to stop to survey and calculate. To be useful to me in a crisis such knowledge must have given rise to a semi-instinctive skill—like the ability to read without simultaneous awareness of the rules of the language.

Still, in engineering some laws can, after all, be formulated, even though I do not need to keep them constantly in mind. In the realm of political action, laws are few and far between: skills are everything. What makes statesmen, like drivers of cars, successful is that they do not think in general terms—that is, they do not primarily ask themselves in what respect a given situation is like or unlike other situations in the long course of human history (which is what historical sociologists, or theologians in historical clothing, such as Vico or Toynbee, are fond of doing). Their merit is that they grasp the unique combination of characteristics that constitute this particular situation—this and no other. What they are said to be able to do is to understand the character of a particular movement, of a particular individual, of a unique state of affairs, of a unique atmosphere, of some particular combination of economic, political, personal factors; and we do not readily suppose that this capacity can literally be taught.

We speak of, say, an exceptional sensitiveness to certain kinds of fact; we resort to metaphors. We speak of some people as possessing antennae, as it were, that communicate to them the specific contours and texture of a particular political or social situation. We speak of the possession of a good political eye, or nose, or ear, of a political sense which love or ambition or hate may bring into play, of a sense that crisis and danger sharpen (or alternatively blunt), to which experience is crucial, a particular gift, possibly not altogether unlike that of artists or creative writers. We mean nothing occult or metaphysical; we do not mean a magic eye able to penetrate into something that ordinary minds cannot apprehend; we mean something perfectly ordinary, empirical, and quasi-aesthetic in the way that it works.

The gift we mean entails, above all, a capacity for integrating a vast amalgam of constantly changing, multicolored, evanescent, perpetually overlapping data, too many, too swift, too intermingled to be caught and pinned down and labeled like so many individual butterflies. To integrate in this sense is to see the data (those identified by scientific knowledge as well as by direct perception) as elements in a single pattern, with their implications, to see them as symptoms of past and future possibilities, to see them pragmatically—that is, in terms of what you or others can or will do to them, and what they can or will do to others or to you. To seize a situation in this sense one needs to see, to be given a kind of direct, almost sensuous contact with the relevant data, and not merely to recognize their general characteristics, to classify them or reason about them, or analyze them, or reach conclusions and formulate theories about them.

To be able to do this well seems to me to be a gift akin to that of some novelists, that which makes such writers as, for example, Tolstoy or Proust convey a sense of direct acquaintance with the texture of life; not just the sense of a chaotic flow of experience, but a highly developed discrimination of what matters from the rest, whether from the point of view of the writer or that of the characters he describes. Above all this is an acute sense of what fits with what, what springs from what, what leads to what; how things seem to vary to different observers, what the effect of such experience upon them may be; what the result is likely to be in a concrete situation of the interplay of human beings and impersonal forces—geographical or biological or psychological or whatever they may be. It is a sense for what is qualitative rather than quantitative, for what is specific rather than general; it is a species of direct acquaintance, as distinct from a capacity for description or calculation or inference; it is what is variously called natural wisdom, imaginative understanding, insight, perceptiveness, and, more misleadingly, intuition (which dangerously suggests some almost magical faculty), as opposed to the very different virtues—very great as these are—of theoretical knowledge or learning, erudition, powers of reasoning and generalization, and intellectual genius.

The quality I am attempting to describe is that special understanding of public life (or for that matter private life) which successful statesmen have, whether they are wicked or virtuous—that which Bismarck had (surely a conspicuous example, in the last century, of a politician endowed with considerable political judgment), or Talleyrand or Franklin Roosevelt, or, for that matter, men such as Cavour or Disraeli, Gladstone or Atatürk, in common with the great psychological novelists, something which is conspicuously lacking in men of more purely theoretical genius such as Newton or Einstein or Russell, or even Freud. This is true even of Lenin, despite the huge weight of theory by which he burdened himself.

What are we to call this kind of capacity? Practical wisdom, practical reason, perhaps, a sense of what will “work,” and what will not. It is a capacity, in the first place, for synthesis rather than analysis, for knowledge in the sense in which trainers know their animals, or parents their children, or conductors their orchestras, as opposed to that in which chemists know the contents of their test tubes, or mathematicians know the rules that their symbols obey. Those who lack this, whatever other qualities they may possess, no matter how clever, learned, imaginative, kind, noble, attractive, gifted in other ways they may be, are correctly regarded as politically inept—in the sense in which Joseph II of Austria was inept (and he was certainly a morally better man than, say, his contemporaries Frederick the Great and the Empress Catherine II of Russia, who were far more successful in attaining their ends, and far more benevolently disposed toward mankind) or in which the Puritans, or James II, or Robespierre (or, for that matter, Hitler or even Lenin in the end) proved to be inept at realizing at least their positive ends.

What is it that the Emperor Augustus or Bismarck knew and the Emperor Claudius or Joseph II did not? Very probably the Emperor Joseph was intellectually more distinguished and far better read than Bismarck, and Claudius may have known many more facts than Augustus. But Bismarck (or Augustus) had the power of integrating or synthesizing the fleeting, broken, infinitely various wisps and fragments that make up life at any level, just as every human being, to some extent, must integrate them (if he is to survive at all), without stopping to analyze how he does what he does, and whether there is a theoretical justification for his activity. Everyone must do it, but Bismarck did it over a much larger field, against a wider horizon of possible courses of action, with far greater power—to a degree, in fact, which is quite correctly described as that of genius. Moreover, the bits and pieces which require to be integrated—that is, seen as fitting with other bits and pieces, and not compatible with yet others, in the way in which, in fact, they do fit and fail to fit in reality—these basic ingredients of life are in a sense too familiar, we are too much with them, they are too close to us, they form the texture of the semiconscious and unconscious levels of our life, and for that reason they tend to resist tidy classification.

Of course, whatever can be isolated, looked at, inspected, should be. We need not be obscurantist. I do not wish to say or hint, as some romantic thinkers have, that something is lost in the very act of investigating, analyzing, and bringing to light, that there is some virtue in darkness as such, that the most important things are too deep for words, and should be left untouched, that it is somehow blasphemous to enunciate them.(2) This I believe to be a false and on the whole deleterious doctrine. Whatever can be illuminated, made articulate, incorporated in a proper science, should of course be so. “We murder to dissect,” wrote Wordsworth (3)—at times we do; at other times dissection reveals truths. There are vast regions of reality which only scientific methods, hypotheses, established truths, can reveal, account for, explain, and indeed control. What science can achieve must be welcomed. In historical studies, in classical scholarship, in archaeology, linguistics, demography, the study of collective behavior, in many other fields of human life and endeavor, scientific methods can give indispensable information.

I do not hold with those who maintain that natural science, and the technology based upon it, somehow distorts our vision, and prevents us from direct contact with reality—“being”—which pre-Socratic Greeks or medieval Europeans saw face to face. This seems to me an absurd nostalgic delusion. My argument is only that not everything, in practice, can be—indeed that a great deal cannot be—grasped by the sciences. For, as Tolstoy taught us long ago, the particles are too minute, too heterogeneous, succeed each other too rapidly, occur in combinations of too great a complexity, are too much part and parcel of what we are and do, to be capable of submitting to the required degree of abstraction, that minimum of generalization and formalization—idealization—which any science must exact. After all, Frederick of Prussia and Catherine the Great founded scientific academies (which are still famous and important) with the help of French and Swiss scientists—but did not seek to learn from them how to govern. And although the father of sociology, the eminent Auguste Comte himself, certainly knew a great many more facts and laws than any politician, his theories are today nothing but a sad, huge, oddly-shaped fossil in the stream of knowledge, a kind of curiosity in a museum, whereas Bismarck’s political gifts—if I may return to this far from admirable man, because he is perhaps the most effective of all nineteenth-century statesmen—are, alas, only too familiar among us still. There is no natural science of politics any more than a natural science of ethics. Natural science cannot answer all questions.

All I am concerned to deny, or at least to doubt, is the truth of Freud’s dictum that while science cannot explain everything, nothing else can do so. Bismarck understood something which, let us say, Darwin or James Clerk Maxwell did not need to understand, something about the public medium in which he acted, and he understood it as sculptors understand stone or clay; understood, that is, in this particular case, the potential reactions of relevant bodies of Germans or Frenchmen or Italians or Russians, and understood this without, so far as we know, any conscious inference or careful regard to the laws of history, or laws of any kind, and without recourse to any other specific key or nostrum—not those recommended by Maistre, or Hegel or Nietzsche or Bergson or some of their modern irrationalist successors, any more than those of their enemies, the friends of science. He was successful because he had the particular gift of using his experience and observation to guess successfully how things would turn out.

Scientists, at least qua scientists, do not need this talent. Indeed their training often makes them peculiarly unfit in this respect. Those who are scientifically trained often seem to hold Utopian political views precisely because of a belief that methods or models which work well in their particular fields will apply to the entire sphere of human action, or if not this particular method or this particular model, then some other method, some other model of a more or less similar kind. If natural scientists are at times naive in politics, this may be due to the influence of an insensibly made, but nevertheless misleading, identification of what works in the formal and deductive disciplines, or in laboratories, with what works in the organization of human life.

I repeat: to deny that laboratories or scientific models offer something—sometimes a great deal—of value for social organization or political action is sheer obscurantism; but to maintain that they have more to teach us than any other form of experience is an equally blind form of doctrinaire fanaticism which has sometimes led to the torture of innocent men by pseudo-scientific monomaniacs in pursuit of the millennium. When we say of the men of 1789 in France, or of 1917 in Russia, that they were too doctrinaire, that they relied too much on theories—whether eighteenth-century theories such as Rousseau’s, or nineteenth-century theories such as Marx’s—we do not mean that although these particular theories were indeed defective, better ones could in principle be discovered, and that these better theories really would at last do the job of making men happy and free and wise, so that they would not need, any longer, to depend so desperately on the improvisations of gifted leaders, leaders who are so few and far between, and so liable to megalomania and terrible mistakes.

What we mean is the opposite: that theories, in this sense, are not appropriate as such in these situations. It is as if we were to look for a theory of tea-tasting, a science of architecture. The factors to be evaluated are in these cases too many, and it is on skill in integrating them, in the sense I have described, that everything depends, whatever may be our creed or our purpose—whether we are utilitarians or liberals, communists or mystical theocrats, or those who have lost their way in some dark Heideggerian forest. Sciences, theories no doubt do sometimes help, but they cannot be even a partial substitute for a perceptual gift, for a capacity for taking in the total pattern of a human situation, of the way in which things hang together—a talent to which, the finer, the more uncannily acute it is, the power of abstraction and analysis seems alien, if not positively hostile.

A scientifically trained observer can of course always analyze a particular social abuse, or suggest a particular remedy, but he can do little, as a scientist, to predict what general effects the application of a given remedy or the elimination of a given source of misery or injustice is going to have on other—especially on remote—parts of our total social system. We begin by trying to alter what we can see, but the tremors which our action starts sometimes run through the entire depth of our society; levels to which we pay no conscious attention are stirred, and all kinds of unintended results ensue. It is semi-instinctive knowledge of these lower depths, knowledge of the intricate connections between the upper surface and other, remoter layers of social or individual life (which Burke was perhaps the first to emphasize, if only to turn his perception to his own traditionalist purposes), that is an indispensable ingredient of good political judgment.

We rightly fear those bold reformers who are too obsessed by their vision to pay attention to the medium in which they work, and who ignore imponderables—John of Leiden, the Puritans, Robespierre, Lenin, Hitler, Stalin. For there is a literal sense in which they know not what they do (and do not care either). And we are rightly apt to put more trust in the equally bold empiricists, Henry IV of France, Peter the Great, Frederick of Prussia, Napoleon, Cavour, Lincoln, Lloyd George, Masaryk, Franklin Roosevelt (if we are on their side at all), because we see that they understand their material. Is this not what is meant by political genius? Or genius in other provinces of human activity? This is not a contrast between conservatism and radicalism, or between caution and audacity, but between types of gift. As there are differences of gifts, so there are different types of folly. Two of these types are in direct contradiction, and in a curious and paradoxical fashion.

The paradox is this: in the realm presided over by the natural sciences, certain laws and principles are recognized as having been established by proper methods—that is, methods recognized as reliable by scientific specialists. Those who deny or defy these laws or methods—people, say, who believe in a flat earth, or do not believe in gravitation—are quite rightly regarded as cranks or lunatics. But in ordinary life, and perhaps in some of the humanities—studies such as history, or philosophy, or law (which differ from the sciences if only because they do not seem to establish—or even want to establish—wider and wider generalizations about the world)—those are Utopian who place excessive faith in laws and methods derived from alien fields, mostly from the natural sciences, and apply them with great confidence and somewhat mechanically.

The arts of life—not least of politics—as well as some among the humane studies turn out to possess their own special methods and techniques, their own criteria of success and failure. Utopianism, lack of realism, bad judgment here consist not in failing to apply the methods of natural science, but, on the contrary, in over-applying them. Here failure comes from resisting that which works best in each field, from ignoring or opposing it either in favor of some systematic method or principle claiming universal validity—say the methods of natural science (as Comte did), or of historical theology or social development (as Marx did)—or else from a wish to defy all principles, all methods as such, from simply advocating trust in a lucky star or personal inspiration: that is, mere irrationalism.

To be rational in any sphere, to display good judgment in it, is to apply those methods which have turned out to work best in it. What is rational in a scientist is therefore often Utopian in a historian or a politician (that is, it systematically fails to obtain the desired result), and vice versa. This pragmatic platitude entails consequences that not everyone is ready to accept. Should statesmen be scientific? Should scientists be put in authority, as Plato or Saint-Simon or H.G. Wells wanted? Equally, we might ask, should gardeners be scientific, should cooks? Botany helps gardeners, laws of dietetics may help cooks, but excessive reliance on these sciences will lead them—and their clients—to their doom. The excellence of cooks and gardeners still depends today most largely upon their artistic endowment and, like that of politicians, on their capacity to improvise. Most of the suspicion of intellectuals in politics springs from the belief, not entirely false, that, owing to a strong desire to see life in some simple, symmetrical fashion, they put too much faith in the beneficent results of applying directly to life conclusions obtained by operations in some theoretical sphere. And the corollary of this over-reliance on theory, a corollary alas too often corroborated by experience, is that if the facts—that is, the behavior of living human beings—are recalcitrant to such experiment, the experimenter becomes annoyed, and tries to alter the facts to fit the theory, which, in practice, means a kind of vivisection of societies until they become what the theory originally declared that the experiment should have caused them to be. The theory is “saved,” indeed, but at too high a cost in useless human suffering; yet since it is applied in the first place, ostensibly at least, to save men from the hardships which, it is alleged, more haphazard methods would bring about, the result is self-defeating. So long as there is no science of politics in sight, attempts to substitute counterfeit science for individual judgment not only lead to failure, and, at times, major disasters, but also discredit the real sciences, and undermine faith in human reason.

The passionate advocacy of unattainable ideals may, even if it is Utopian, break open the barriers of blind tradition and transform the values of human beings, but the advocacy of pseudo-scientific or other kinds of falsely certified means—methods of the sort advertised by metaphysical or other kinds of bogus prospectuses—can only do harm. There is a story—I don’t know how true—that when the Prime Minister Lord Salisbury was one day asked on what principle he decided whether to go to war, he replied that he proceeded much as a man deciding whether or not to take an umbrella: he looked at the sky. Perhaps this goes too far. If a reliable science of political weather-forecasting existed, such a procedure would, no doubt, be condemned as too subjective. But, for reasons which I have tried to give, such a science, even if it is not impossible in principle, is still very far to seek. And to act as if it already existed, or was merely round the corner, is an appalling and gratuitous handicap to all political movements, whatever their principles and whatever their purposes—from the most reactionary to the most violently revolutionary—and leads to avoidable suffering.

To demand or preach mechanical precision, even in principle, in a field incapable of it is to be blind and to mislead others. Moreover, there is always the part played by pure luck—which, mysteriously enough, men of good judgment seem to enjoy rather more often than others. This, too, is perhaps worth pondering.

1. "Un historien est un babillard qui fait des tracasseries aux morts." The Complete Works of Voltaire, Volume 82 (University of Toronto Press, 1968), p. 452.

2. In this spirit Keats wrote: "Do not all charms fly/At the mere touch of cold philosophy?… Philosophy will clip an Angel's wings,/Conquer all mysteries by rule and line…." Lamia (1820).

3. In "The Tables Turned" (1798).


In Praise of Slowness

"There is more to life than increasing its speed" - Mahatma Gandhi


THE SLOW SCIENCE MANIFESTO

We are scientists. We don’t blog. We don’t twitter. We take our time.

Don’t get us wrong—we do say yes to the accelerated science of the early 21st century. We say yes to the constant flow of peer-reviewed journal publications and their impact; we say yes to science blogs and media & PR necessities; we say yes to increasing specialization and diversification in all disciplines. We also say yes to research feeding back into health care and future prosperity. All of us are in this game, too.

However, we maintain that this cannot be all. Science needs time to think. Science needs time to read, and time to fail. Science does not always know what it might be at right now. Science develops unsteadily, with jerky moves and unpredictable leaps forward—at the same time, however, it creeps about on a very slow time scale, for which there must be room and to which justice must be done.

Slow science was pretty much the only science conceivable for hundreds of years; today, we argue, it deserves revival and needs protection. Society should give scientists the time they need, but more importantly, scientists must take their time.

We do need time to think. We do need time to digest. We do need time to misunderstand each other, especially when fostering lost dialogue between humanities and natural sciences. We cannot continuously tell you what our science means, or what it will be good for, because we simply don’t know yet. Science needs time.

—Bear with us, while we think.

THE SLOW FOOD MANIFESTO

The Slow Food international movement officially began when delegates from 15 countries endorsed this manifesto, written by founding member Folco Portinari, on December 10, 1989.

Our century, which began and has developed under the insignia of industrial civilization, first invented the machine and then took it as its life model.

We are enslaved by speed and have all succumbed to the same insidious virus: Fast Life, which disrupts our habits, pervades the privacy of our homes and forces us to eat Fast Foods.

To be worthy of the name, Homo Sapiens should rid himself of speed before it reduces him to a species in danger of extinction.

A firm defense of quiet material pleasure is the only way to oppose the universal folly of Fast Life.

May suitable doses of guaranteed sensual pleasure and slow, long-lasting enjoyment preserve us from the contagion of the multitude who mistake frenzy for efficiency.

Our defense should begin at the table with Slow Food.

Let us rediscover the flavors and savors of regional cooking and banish the degrading effects of Fast Food.

In the name of productivity, Fast Life has changed our way of being and threatens our environment and our landscapes. So Slow Food is now the only truly progressive answer.

That is what real culture is all about: developing taste rather than demeaning it. And what better way to set about this than an international exchange of experiences, knowledge, projects?

Slow Food guarantees a better future.

Slow Food is an idea that needs plenty of qualified supporters who can help turn this (slow) motion into an international movement, with the little snail as its symbol.


_ _ _ _

SLOW SCHOLARSHIP

You have likely heard of the “Slow Food Movement” -- the movement of diners, chefs, gardeners, vintners, farmers and restaurateurs who have taken a critical look at how our society has shifted to a position where, for most, food is something to be consumed rather than savoured, to be served up and eaten “fast” on the way to doing something else. “Slow Food,” by contrast, is something to be carefully prepared, with fresh ingredients, local when possible, and enjoyed leisurely over conversation around a table with friends and family.

“Slow Scholarship” is a similar response to hasty scholarship. Slow scholarship is thoughtful, reflective, and the product of rumination – a kind of field testing against other ideas. It is carefully prepared, with fresh ideas, local when possible, and is best enjoyed leisurely, on one’s own or as part of a dialogue around a table with friends, family and colleagues. Like food, it often goes better with wine.

In the desire to publish instead of perish, many scholars, at some point in their careers, send off to a journal a conference paper which may still be half-baked, may have only a spark of originality, may be a slight variation on something they or others have published, or may rely on data that is still preliminary. This is hasty scholarship.

Other scholars send out their quick responses to a talk they have heard, an article they have read, or an email they have received, to the world via a Tweet or Blog. This is fast scholarship. Quick, off the cuff, fresh -- but not the product of much cogitation, comparison, or contextualization. The Tweetscape and Blogosphere brim over with sometimes idle, sometimes angry, sometimes scurrilous, always hasty first impressions.

Slow Scholarship emerges from my own experience of taking 17 years from the start of a Ph.D. to the publication of the book which had its origins in the dissertation. It was when this book won the Harold Adams Innis Prize for the best book in the social sciences in Canada that I began to reflect on the benefits of the long journey, the many rewrites, the reconsideration, and the additional research that took place in those years. Then I noticed that a couple of M.A. theses I examined, each of which took three to five years to complete, were remarkable pieces of scholarship, many times more valuable than most one- and two-year M.A. theses, and I have begun to see other fruits of slow scholarship.

In a scholarly world whose citation indices count how many times an article is cited, not whether it is cited as a good or a bad example, the thoughtful, reflective scholar who writes a book only a few times in a long career has lost prestige and, because pay is often linked to frequency of publication, money. Slow scholarship is a celebration of those authors who create a small but mighty legacy.


_ _ _ _

THE SLOW MEDIA MANIFESTO 

The first decade of the 21st century, the so-called ‘noughties’, has brought profound changes to the technological foundations of the media landscape. The key buzzwords are networks, the Internet and social media. In the second decade, people will not search for new technologies allowing for even easier, faster and cheaper content production. Rather, appropriate reactions to this media revolution are to be developed and integrated politically, culturally and socially. The concept “Slow”, as in “Slow Food” and not as in “Slow Down”, is a key for this. Like “Slow Food”, Slow Media are not about fast consumption but about choosing the ingredients mindfully and preparing them in a concentrated manner. Slow Media are welcoming and hospitable. They like to share.

1. Slow Media are a contribution to sustainability. Sustainability relates to the raw materials, processes and working conditions, which are the basis for media production. Exploitation and low-wage sectors as well as the unconditional commercialization of user data will not result in sustainable media. At the same time, the term refers to the sustainable consumption of Slow Media.

2. Slow Media promote monotasking. Slow Media cannot be consumed casually, but call for the full concentration of their users. As with the production of a good meal, which demands the full attention of all the senses of the cook and his guests, Slow Media can only be consumed with pleasure in focused alertness.

3. Slow Media aim at perfection. Slow Media do not necessarily represent new developments on the market. More important is the continuous improvement of reliable user interfaces that are robust, accessible and perfectly tailored to the media usage habits of the people.

4. Slow Media make quality palpable. Slow Media measure themselves in production, appearance and content against high standards of quality and stand out from their fast-paced and short-lived counterparts – by some premium interface or by an aesthetically inspiring design.

5. Slow Media advance Prosumers, i.e. people who actively define what and how they want to consume and produce. In Slow Media, the active Prosumer, inspired by his media usage to develop new ideas and take action, replaces the passive consumer. This may be shown by marginalia in a book or an animated discussion about a record with friends. Slow Media inspire, continuously affect the users’ thoughts and actions, and are still perceptible years later.

6. Slow Media are discursive and dialogic. They long for a counterpart with whom they may come in contact. The choice of the target media is secondary. In Slow Media, listening is as important as speaking. Hence ‘Slow’ means to be mindful and approachable and to be able to regard and to question one’s own position from a different angle.

7. Slow Media are Social Media. Vibrant communities or tribes form around Slow Media. This, for instance, may be a living author exchanging thoughts with his readers or a community interpreting a late musician’s work. Thus Slow Media propagate diversity and respect cultural and distinctive local features.

8. Slow Media respect their users. Slow Media approach their users in a self-conscious and amicable way and have a good idea about the complexity or irony their users can handle. Slow Media neither look down on their users nor approach them in a submissive way.

9. Slow Media are distributed via recommendations not advertising: the success of Slow Media is not based on an overwhelming advertising pressure on all channels but on recommendation from friends, colleagues or family. A book given as a present five times to best friends is a good example.

10. Slow Media are timeless: Slow Media are long-lived and appear fresh even after years or decades. They do not lose their quality over time but at best get some patina that can even enhance their value.

11. Slow Media are auratic: Slow Media emanate a special aura. They generate a feeling that the particular medium belongs to just that moment of the user’s life. Despite the fact that they are produced industrially or are partially based on industrial means of production, they are suggestive of being unique and point beyond themselves.

12. Slow Media are progressive not reactionary: Slow Media rely on their technological achievements and the network society’s way of life. It is because of the acceleration of multiple areas of life that islands of deliberate slowness are made possible and essential for survival. Slow Media are not a contradiction to the speed and simultaneousness of Twitter, Blogs or Social Networks but are an attitude and a way of making use of them.

13. Slow Media focus on quality both in production and in reception of media content: craft skills of cultural scholarship, such as source criticism and the classification and evaluation of sources of information, are gaining importance with the increasing availability of information.

14. Slow Media ask for confidence and take their time to be credible. Behind Slow Media are real people. And you can feel that.

Stockdorf and Bonn, Jan 2, 2010

Benedikt Köhler
Sabria David
Jörg Blumtritt

_ _ _ _

The Future of the Brain

Scientific Research & Outreach into Politics, Education & Society

Susan Greenfield

As Professor of Synaptic Pharmacology at Oxford University, Susan Greenfield leads a multi-disciplinary team investigating the physical basis of the mind and its implications for our understanding of human behaviour, work and society. Through her reputation as a leading scientist, author and thought leader in the dynamics and evolution of the human brain, Susan Greenfield has also extended her reach to influence and inform other aspects of our society.

What is a Polyarchy?

Dahl introduced the term polyarchy to characterize American politics and other political systems that are open, inclusive, and competitive (Polyarchy, 1971). The concept allowed him to make a distinction between an ideal system of democracy and institutional arrangements that approximate this ideal. Thus, polyarchies are based on the principle of representative rather than direct democracy and therefore constitute a form of minority rule, yet they are also (imperfectly) democratized systems that limit the power of elite groups through institutions such as regular and free elections.

Despite his critique of elite-power theory, Dahl was faulted after the publication of Who Governs? for underestimating the importance of broad-based civic participation. Indeed, in Who Governs? Dahl had argued that democracy does not require mass participation and in fact rests on the consent of a relatively apathetic population. Later, in Democracy and Its Critics (1989), he recognized the value of an active citizenry and associated polyarchy with political rights such as freedom of expression and association.*


* “Robert A. Dahl.” Encyclopedia Britannica. Encyclopedia Britannica Online Academic Edition. Encyclopedia Britannica Inc., 2013. Web. 20 Oct. 2013.

Interview with Philippa Foot


Philippa Foot (1920-2010)
Philosophy Now, Issue 41, May/June 2003

Philippa Foot has for decades been one of Oxford’s best-known and most original ethicists. Her groundbreaking papers won her worldwide recognition but at the dawn of the new century she has finally published her first full-length book. Editor Rick Lewis asked her about goodness, vice, plants and Nietzsche.

Your book Natural Goodness has recently been published. I wonder if you could tell us in a few sentences, what is the main idea that you want to get across in the book?

I’m explaining a notion that I have called ‘natural goodness’. An admired colleague of mine, Michael Thompson, has said of my work that I believe that vice is a form of natural defect. That’s exactly what I believe, and I want to say that we describe defects in human beings in the same way as we do defects in plants and animals. I once began a lecture by saying that in moral philosophy it’s very important to begin by talking about plants. This surprised some people!

What I believe is that there is a whole set of concepts that apply to living things and only to living things, considered in their own right. These would include, for instance, function, welfare, flourishing, interests, the good of something. And I think that all these concepts are a cluster. They belong together.

When we say something is good, say one’s ears or eyes are good, we mean they are as they should be, as human ears ought to be, that they fulfil the function that ears are needed for in human life. Which of course is different from the particular function that, say, the ears of a gull serve, because gulls have to be able to recognise the sound of their chick among thousands of others on a cliff face from some way out to sea, and our ears don’t have to be quite as good as that. Similarly, we don’t have to see well in the dark. There’s nothing wrong with our eyes because we can’t see in the dark. But owls’ eyes are defective if they can’t see in the dark. So there’s this notion of a defect which is species-relative. Things aren’t just good or bad, they’re good in a certain individual, in relation to the manner of life of his or her or its species. That’s the basic idea. And I argue that moral defects are just one more example of this kind of defect.

So let’s take plants. A plant needs strong roots, and in the same sort of way human beings need courage. When one is talking about what a human being should do, one says things like, “look, he should be able to face up to danger in certain circumstances, for his own sake and for the sake of others.” But this is like saying, “an owl should be able to see in the dark, should be able to fly” or “a gull should be able to recognize the sound of its chick among all the cacophony of the cliff.” And if you think of it in this way then you’re not going to think that there’s a gap between facts and evaluation – between description of facts, such as ‘owls hunt by night’, that’s a description of fact, and another description, such as ‘that owl’s got weak eyesight; it doesn’t seem to be able to manage in the dark’. These are the central notions. And that’s why I thought we should start moral philosophy by talking about plants.

But why say that owls should have good eyes, rather than simply saying that an owl does have good eyes?

What’s very important, it’s really the centre of the whole thing, is the idea of a certain kind of proposition – unquantified propositions.

What do you mean by that?

Well, the quantifiers are all or some. I mean, you can say all rivers have water in them, or some rivers go down to the sea. But there are also some peculiar propositions, propositions which only apply to living things, which are neither all nor some. And this kind of proposition really is about the standard; it’s about how it should be. It takes one towards what I have called ‘natural goodness’. For example, we say “humans have thirty-two teeth.” Not all humans do, in fact, but we have defective teeth if we don’t have thirty-two. Either we’ve never had the full complement or we’ve lost some. Elizabeth Anscombe put out a very important article about this kind of proposition, called ‘Modern Moral Philosophy’, which was published in Mind, I think, in the 1950s. She didn’t make a great deal out of it but a lot of people refer to it now.

The thought is, that first of all there is a difference in the way we talk about living things and non-living things. Just leave aside artifacts – they’re a bit like one and a bit like the other. I’ll put that aside.

Rivers are interesting because they’re natural things and they have a pattern of development through the seasons as living things do, and yet you cannot talk about a river as being defective. Of course it can be defective from our point of view, from the point of view of irrigation or animals or something like that but not in its own right, not autonomously as I say.
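Foot’s contrast can be put, very roughly, in logical notation. The following is only an illustrative sketch: the generic operator Gen is a convention borrowed from later work on generic sentences, not notation that Foot or Anscombe uses, and the predicate names are invented for the example.

\[
\begin{aligned}
&\forall x\,(\mathrm{Human}(x) \rightarrow \mathrm{ThirtyTwoTeeth}(x)) && \text{universal: false, since some humans lack teeth}\\
&\exists x\,(\mathrm{Human}(x) \land \mathrm{ThirtyTwoTeeth}(x)) && \text{existential: true, but it sets no standard}\\
&\mathrm{Gen}\,x\,(\mathrm{Human}(x),\ \mathrm{ThirtyTwoTeeth}(x)) && \text{unquantified: tolerates exceptions, yet marks them as defects}
\end{aligned}
\]

Only the third form expresses a norm for the kind – how a human being should be – which is why a human with fewer teeth counts as a defective specimen rather than as a counterexample.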

Does that apply to non-living things like stars, as well? I’m thinking of an astronomer looking at a star and saying “this should have developed into a classic yellow class 2 star but it ran out of hydrogen and that prevented it from doing so.” So surely you can say ‘should’ in relation to them?

In the everyday use of language we do say “it should” meaning “it was about to” or “they usually do” or something like that. But that’s not the same ‘should’; that pattern doesn’t give you the kind of natural defect. This is what I’m identifying here – the difference between the two. That’s why, as I said, I think moral evaluation belongs within a whole set of concepts which apply to living things only. You see, rivers don’t flourish. Of course we can say the river is flourishing, but then it is a sort of jokey use. It doesn’t literally flourish, it doesn’t literally die. You could say the star died but you obviously would mean something different because they’re not members of a species of living things. I’m not in the least fighting the everyday language, but a star being born is very, very different to any member of a species being born. They haven’t got this pattern of one and then another of the same kind coming from it. Rivers don’t spawn rivers. They can’t literally be born or die, and there is in their case no species in which a function could be identified. Of course anything we make can have a function, but the parts of animals and their movements can have function quite apart from anything that we do or want. A spider’s web has a function. What’s the function of it? Is it to keep predators at bay? No, it’s to catch food. That’s a very straightforward, ordinary thing to say. That’s the function of it. Then the function of whatever part of the spider secretes sticky stuff is for making webs; and webs are for getting food. And food is for sustenance, to keep the spider going, and other things it will need in order to reproduce.

Is it only in the case of entities who have interests that you can have the idea of function from the point of view of the entity itself?

‘Interests’ I think is excellent. Rivers, particularly, look so much like living things; they have seasonal progressions and so on. But they don’t have interests, as artifacts do not. I like that thought of yours!



This helps me to see how you look at human beings and say ‘well this is a good, well-functioning human being’, or ‘this is a human being who is defective in one way or another’. But if you’re actually talking to a human being who seems to you defective in some way, lacking courage in a situation where courage would be necessary, or lacking compassion, or some other virtue, then does your approach to moral philosophy allow you to cajole them or to give them reasons why they should behave differently? Or is it a purely descriptive kind of philosophy that allows you to say ‘You are a good person or a bad person,’ but then they might say, ‘well yes, such is life, I’m a bad person, there’s nothing to be done about it.’

Well, it’s a description, if I say to this person, “look, you have reason to do this.” That’s a description of his state, an absolutely blunt description. And if he says, “why do you say that?” I reply, “well what do you think having reasons is? What do you think you have reason to do?” And if he says, “well, I think I’ve only got reason to get what I want,” I say, “well, why actually do you think you have reason to do that – how do you establish that reason? And what about something that you don’t care about at the moment – like your own health maybe, like not getting cancer later on? Haven’t you got reason to give up smoking, let’s say, even though it’s not related to your present desires? You’re young, the risks don’t make you tremble but the dangers are still there.”

Now, probably, he will say, this chap, “all right, you’ve shown me that I’ve got a reason to give up smoking, but you haven’t shown me that I should do it.” I’d say, “what on Earth do you think ‘should’ means?” You lose the sense of ‘should’ if you go on saying “why should I?” when you’ve finished the argument about what is rational to do, what you’ve got reason to do. You can’t say “why should I?” Of course, you may very well say, “I’m bloody well not going to,” but that’s another matter.

So to say that you should do something is just to say that you have a reason to do something?

That’s right, certainly. ‘Should’ simply speaks of reasons – it’s not a kind of pushing or a word with an oomph or something for expressing my attitude, is it? If someone’s asking about why he should do something, he’s asking to be shown that he has a reason to do it. And so we have to explore the notion of having a reason.

But what if I say “what is the reason to take good care of children?” and you say “If you don’t look after children they will die.” And I say, “but what do I care about that? I’m so selfish that I don’t care what happens to the next generation.”

It’s like the man who doesn’t care what happens to his later years. The fact that he doesn’t care doesn’t mean that he is rational after all. At least, one would need a very special view, very Humean, about reasons for actions to think he doesn’t have a reason unless he cares. That’s why my old paper ‘Morality as a System of Hypothetical Imperatives’ was so wrong, as I did think that then.

Taking again the case of not smoking (I don’t know if it’s the influence of your earlier paper on me), I can see those reasons as being rootable in hypothetical imperatives. “If you want to avoid getting cancer, then give up smoking.”

Yes, that’s right. I mean, there are some hypothetical imperatives. “If you want some more tea, then come and get it!”

Is that an example or an actual invitation to get some more tea?

Both! But there are also other imperatives that are not hypothetical.

And you do explore what it is to have a reason to do something. You talk in your book about practical rationality, and it becomes apparent that you have a broader idea of practical rationality than most philosophers. I wonder if you could tell us what ‘practical rationality’ is for you?

Practical rationality is being as one should be, being a non-defective human being in respect of those things done for reasons, which is a whole enormous area of life of course. I’m saying that practical rationality is goodness in respect of reasons for action, just as speculative rationality, rationality of thinking, is goodness in respect of beliefs, of conclusions drawn from premises and so on. That has to do with what one should believe, as practical rationality has to do with what one should do, what one has reason to do. But then, you have to remember animals acting on instinct – and if they haven’t got the right instincts, as a lioness who doesn’t look after her cubs hasn’t got the right instincts, then they are defective. Human beings work on instinct too, of course, but also they’re taught to think. Practical rationality is essential to the life of human beings. It’s the way that they survive. If you couldn’t bring up a child to recognise reasons (and no doubt this is how it is with some severely defective children) you might get him to obey orders but not actually ever to say “So, I’ll do such-and-such.” And that means he would lack practical rationality. Getting “So I’ll do such-and-such” right – saying it when there is a genuine “so” – is related in a particular way to human life.

The first thing you teach your child, after all, is not to hang out of the window, not to hold things over the fire, not to go near the fire and so on… “It’ll hurt me so I’ll go away.” “It’s dangerous so I won’t do it.” “It’s alight, so I’ll be careful of it.” “It’s high up so I might fall.” ‘So I won’t go there’ is among the first things that children have to learn. All these “so’s”. And this is simply learning part of practical rationality.

But people think that sometimes there is a difficulty reconciling morality with rationality.

They do, but I believe it is a mistake to think you’ve got an independent idea of rationality; that there is one idea of rationality and one idea of morality and somehow you have to reconcile them. They’re not separate. From the beginning, if you like, morality leads rationality and not the other way round.

This is very important, because it’s just there that people think there is a problem. They think that I will be acting irrationally if, say, I lose a great deal through not being willing to cheat or something like that. But I want to ask, “What’s your idea of rationality, if you think that you have somehow to reconcile morality with it? You haven’t got a full idea of rationality until you’ve got morality within it, as prudence is within it, going for what you need, looking out for danger and so on.” These are all just different parts of practical rationality. Prudence is one part, the part that has to do with getting what you just happen to want is another part, and morality belongs here too. One shouldn’t think so differently about morality and prudence. Prudence should be thought of as one of the virtues. And why should it be thought that while prudence is certainly rational morality isn’t?

So one has to attack the idea that people have of rationality, when they think that rationality is something self-standing with which it may be difficult to reconcile morality. I don’t think it is like that.

In your book, you devote a whole chapter to discussing the nature of human happiness. Would you like to tell us how that fits in with your overall approach to morality?

Yes, it’s very important indeed. Look at what a plant or an animal needs to do for the sake of its flourishing, so that it will have a good life in the sense that things will go well for it. The owl needs to see in the dark; things don’t go well for it if it can’t see in the dark. It starves, presumably. Because of this, the notion of flourishing is central to the book. What is beneficial to the owl is what allows it to flourish, or makes it more possible for it to flourish. And in the case of human beings that is straightforwardly happiness. You can’t say that human beings flourish if they just survive to a ripe old age and reproduce themselves. In an animal or a plant that may be enough for flourishing, but if a human being just does that with no happiness they must live a wretched life. So what is for a person’s good certainly must have some relation to their happiness. That’s why I had to tackle the problem of happiness, and I found it extremely difficult. It’s an articulated concept, it’s very complex. I tried to describe it, to spread it out, but I was left with a really difficult problem which I couldn’t solve and I indicated in the book that I couldn’t solve it. There is a really deep problem about the relation between virtue and happiness.

Can one describe a wicked person as a happy person? Of course one can, look how the wicked flourish like the bay tree. But there are some examples that make me stick with the idea that someone could say, “I cannot get happiness through wickedness, through acting badly, through selling my friends down the river. That’s not something that I could count as happiness and it’s not just that I wouldn’t be happy afterwards because I would be so ashamed. It would be true even if I was going to be given some drug, or if a happy brick would drop on my head after I’d done this thing so that I’d never remember that I’d done it.” Someone might well say, “nothing that I could get by really wicked actions, by desperate corruption, by betraying my friends, is anything that I would count as happiness, and anything that made me do it I would not count as having benefited me.” I think here of the example of those I called the ‘Letter-Writers’ in my book. [see box below]

Such an example really does let us see the problem very clearly. We cannot totally divorce the ideas of virtue and of happiness. There seems to be a necessary conceptual connection between them. And this is suggested by the fact that while one of the Letter-Writers might have said, “I’m willing to sacrifice all my future happiness”, they might rather have said, “Happiness is just not possible for me if I can only avoid death by going along with the Nazis, by betraying my comrades in the Resistance, or by obeying orders to join the SS.”

So they were pursuing happiness by choosing not to co-operate with the Nazis even knowing the terrible consequences of that, through avoiding even greater unhappiness?

No, just that they had to say, it’s too bad. Happiness is not my lot.

I see, yes of course.

I don’t want to say, as some do, that no loss that could only be avoided by acting badly is a loss at all. That seems to be goody-goody in some way, and I want to insist that it might be a terrible loss.

I understand the problem now. It’s crazy to say that they weren’t suffering a loss in that situation.

They were losing everything. They said things that showed how much they were giving up. There were letters to their sweethearts, their children, whom they would never see grow up, and their beloved wives. One of them I remember said something like, “How wonderful it would be just to smell the cooking in the kitchen.” You’ve got a vivid sense of this family life which he could have returned to, if he’d simply been willing to do what the Nazis wanted. And that’s why the problem is so difficult. I suspect that the answer is somehow in a connection between the concept of my goodness and my good that I haven’t got out. But anyway I haven’t got it out, so I can only say “This is really difficult… I’ll tell you what I can about happiness because flourishing, the human good, is central to this book, but at the moment I have to say that the case of the Letter-Writers shows a real difficulty.”

What is the nature of the problem – is it a question of just getting our concepts straight so that we don’t trip over them in some way?

Well, it is in some way. But it must be that there is some deeper notion of one’s good. I simply can’t do it, that’s all. I’m stuck.

Given that you argue that there is an objective basis for morality, that morality can be rooted in nature and facts about being in nature, how do you deal with the great disagreements that exist over morality between individuals and between cultures? Can your approach to morality settle those disagreements?

That’s a very important question. I wish I’d said more about it in the book. First of all there’s no reason why there shouldn’t be some indisputable moral facts. I’d take for instance one about looking after children. Someone who was cruel to children for fun, for his own pleasure, I think we could say that there was something wrong with that person, that he had a vice. It seems to me that that must be true in any civilisation, at any time that there were human beings. Torture, also, is never morally defensible; it’s something which in no circumstances could be justified. But let’s stick with this one judgement, about someone who is cruel to children, who torments them: that such a one has a defect, has a vice. Vice is a defect of the will, of the human will. And that judgement about those who abuse children seems to me to apply in any circumstances, in any age, in any culture. But, of course, virtues will take very different forms at different times, in different civilisations, different cultures. There’s no question about that. Courage for instance will take different forms, as different things will be needed. There’s so much difference in lifestyles in human beings, much more than there is, I imagine, among owls or any species of animals. And therefore, the moral judgements will not be exactly the same everywhere. What will be good in one culture will be bad in another just because the circumstances are so different. Different things are needed in these circumstances: by nomads, for instance, as opposed to city-dwellers, or people who live in great scarcity, or people surrounded by cruel enemies. What is right and wrong for different societies will often be different. Things will be justifiable in one situation and not in another. And, of course, one of the determining factors will be religion: what people believe will offend the gods or bring their wrath down on the community. After all, it would be totally wrong to bring the wrath of the gods on your community – so religion comes in too. Likewise what in a certain community is seen as pollution, or as a demeaning task, obviously determines what is for instance cruel or disrespectful. In that sense, a lot of what’s right and wrong will be relative to different cultures, of course. But that doesn’t mean there isn’t the same underlying basis for right and wrong.

So you’re both an objectivist and a cultural relativist about morality?

That’s right, to a certain extent a cultural relativist. How much there is in the way of grey areas, I don’t know. I think it is an advantage of a position like mine, that it could allow for universality where we really find it, as with the case of what we do to children, but that where we really find diversity because of different lifestyles, cultures, religions, or just grey areas where it seems you could say one thing or say the other, that’s what we should be ready to say. In this way we wouldn’t try to tighten everything up or claim to be able to look ahead and see where each conflict of opinions would end. It is one of my objections to the old kind of subjectivism that philosophers thought they could describe the breakdown point ahead of the particular argument, because one person would have one ultimate principle and another person the opposite ultimate principle. I don’t believe in these ultimate principles that must simply be affirmed or denied, but rather in an appeal to the necessities of human life. Arguing on this basis we shall look at the particular conflicts of moral opinion and take what comes in the way of universal truths, cultural relativism or grey areas.

So, given this rather relaxed version of moral objectivity, what would you say to ‘immoralists’ such as Nietzsche?

In my book I take Nietzsche on. I say, "Look, what you're suggesting might be possible for some race of beings, but not for humans. I know you think that if only people will read you and believe you, human beings will become quite different, but I don't believe a word of that. You want to judge actions not by their type, by what is done, but by their relation to the nature of the person who does them. And that is poisonous." When we think of the things that have been done by Hitler, Stalin, Pol Pot, what we have to be horrified at is what was done. We don't need to inquire into the psychology of these people in order to know the moral quality of what they did. Nietzsche thought that a quite different taxonomy of human action was the only one that really got down to things. Goodness or badness was in the nature of the person who did them: it was this that determined whether the act was good or bad. But I think it is possible to explain the basis on which you can judge that the oppressive things that Nietzsche would have countenanced – or at least spoke of without disapproval as merely pranksome – cannot be done by a good human being. It's wrong-headed to leave aside, as he does, the question of what human beings as such need, or what a society needs in the way of justice, fastening instead on the spontaneity, the energy, the passion of the individual agent. Those things are important in their place, but to fasten on them is like fastening on the scent of some flower when it isn't part of its life. I'm inventing this example on the spur of the moment – maybe the scent of a plant always serves to attract pollinating insects and so is part of its life. But there are other things that are not, so you get the idea.

Nietzsche had, in a way, an aesthetic view of human life. But his isn’t a suitable way of life for human beings as they are, and if Nietzsche is reckoning on being able to change them into something different, he’d better think again.

Thank you for this interview!

This conversation took place at Philippa Foot’s Oxford home in the Autumn of 2001. Due to the extreme decrepitude of the Philosophy Now cassette recorder, the resulting tape was crackly beyond belief, which is why this interview hasn’t appeared sooner. Grateful thanks go to Karen Adler for somehow managing to transcribe it anyway. R.L.

The Letter-Writers
Philippa Foot's interest in the complicated connection between virtue and the pursuit of happiness was partly inspired by a 1950s book, now unobtainable, called Dying We Live. The book is a collection of prison letters from Germans who defied the Nazis and were executed as a result. The writers were people from a wide variety of social backgrounds, including aristocratic anti-Nazi plotters; a pastor who refused to stop preaching against the persecution of the Jews; farm labourers and many others. Their farewell letters to their loved ones sometimes explain why they had chosen a path which would lead to their own destruction. The example below is from a farm boy from the Sudetenland:

February 3, 1944
Dear Parents: I must give you bad news – I have been condemned to death, I and Gustave G. We did not sign up for the SS, and so they condemned us to death.... Both of us would rather die than stain our consciences with such deeds of horror. I know what the SS has to do....


from Dying We Live: The Final Messages and Records of Some Germans Who Defied Hitler ed. by H. Gollwitzer, K. Kuhn & R. Schneider.

The World According to Monsanto


Pollution, Corruption, and the Control of the World's Food Supply


A film (2008) by: Marie-Monique Robin 

Dr. Vandana Shiva, Indian environmental activist and anti-globalization author: "There's nothing they are leaving untouched: the mustard, the okra, the brinjal, the rice, the cauliflower. Once they have established the norm: that seed can be owned as their property, royalties can be collected. We will depend on them for every seed we grow of every crop we grow. If they control seed, they control food, they know it – it's strategic. It's more powerful than bombs. It's more powerful than guns. This is the best way to control the populations of the world."

The story starts in the White House, where Monsanto often got its way by exerting disproportionate influence over policymakers via the “revolving door”. One example is Michael Taylor, who worked for Monsanto as an attorney before being appointed as deputy commissioner of the US Food and Drug Administration (FDA) in 1991. While at the FDA, the authority that deals with all US food approvals, Taylor made crucial decisions that led to the approval of GE foods and crops. Then he returned to Monsanto, becoming the company’s vice president for public policy.

Thanks to these intimate links between Monsanto and government agencies, the US adopted GE foods and crops without proper testing, without consumer labeling and in spite of serious questions hanging over their safety. Not coincidentally, Monsanto supplies 90 percent of the GE seeds used by the US market.

Monsanto's long arm stretched so far that, in the early nineties, the FDA even ignored warnings from its own scientists, who cautioned that GE crops could cause negative health effects. Other tactics the company uses to stifle concerns about its products include misleading advertising, bribery and concealing scientific evidence.

Standing in Livestock’s ‘Long Shadow’: The Ethics of Eating Meat on a Small Planet


By Brian G. Henning
Ethics & the Environment; Fall 2011; Vol. 16 (2)

A primary contribution of this essay is to provide a survey of the human and environmental impacts of livestock production. We will find that the mass consumption of animals is a primary reason why humans are hungry, fat, or sick and is a leading cause of the depletion and pollution of waterways, the degradation and deforestation of the land, the extinction of species, and the warming of the planet. Recognizing these harms, this essay will consider various solutions being proposed to “shrink” livestock’s long shadow, including proposed “technical” or “market” solutions, a transition to “new agrarian” methods, and a vegetarian or vegan diet. Though important and morally relevant qualitative differences exist between industrial and non-industrial methods, this essay will conclude that, given the present and projected size of the human population, the morality and sustainability of one’s diet are inversely related to the proportion of animals and animal products one consumes.

In 2007, 275 million tons of meat (1) were produced worldwide, enough for 92 pounds for every person (Halweil 2008, 1). On one level, this fourfold increase in meat production since 1960 might be seen as a great success story about the spread of prosperity and wealth. President Herbert Hoover’s memorable 1928 campaign pledge to put “a chicken in every pot and a car in every garage” has, at least for many in the developed world, largely been realized. This juxtaposition of chickens and cars is appropriate in a way that Hoover did not intend: in an important sense, the same industrial processes that have put a “car in every garage” now make it possible to “put a chicken in every pot” or a burger on every plate. What has made it possible to realize the “prosperity” in Hoover’s promise is the industrialization of food production, and livestock are no exception. By applying some of the same principles that organized Henry Ford’s assembly lines to agriculture (combined with the economically distorting effects of vast agricultural subsidies and other environmental and economic externalities), once-expensive food items—such as beef, pork, and chicken—are now within the reach of billions of people; indeed, they are often cheaper than fresh fruits and vegetables.
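
As a quick sanity check, Halweil's per-capita figure can be reproduced with one line of arithmetic. A minimal sketch in Python; the 2007 world population of roughly 6.6 billion is my assumption, not a figure given in the essay:

# Rough check of the per-capita meat figure (illustrative only).
MT_TO_LB = 2204.62            # pounds per metric ton
meat_tons = 275e6             # world meat production, 2007 (Halweil 2008)
population = 6.6e9            # assumed 2007 world population
print(round(meat_tons * MT_TO_LB / population), "lb per person")   # 92 lb per person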

On Hoover's measure, then, the shift to intensive, industrial methods
of livestock production has been wildly successful. Thanks in large part
to the adoption of intensive methods, worldwide more than 56 billion
animals are slaughtered each year; an average of 650 animals are killed
every second of every day (Halweil 2008, 2). At eight times the size of the
human population, livestock cast a very long shadow indeed. A primary
contribution of this essay is to provide a survey of the human and environmental
impacts of livestock production. We will find that, considering
both the direct and indirect effects, the overconsumption of animal meat is
now a (if not the) leading cause of or contributor to both malnourishment
and obesity, chronic disease, antibiotic resistance, and the spread of infectious
disease; the livestock sector may now be the single greatest source of
freshwater use and pollution, the leading cause of rainforest deforestation,
and the driving force behind spiraling species extinction; finally, livestock
production is among the largest sectoral sources of greenhouse gas emissions
contributing to global climate change.

Recognizing the inefficient and environmentally destructive nature of
intensive livestock production, this essay will consider various solutions
being proposed to “shrink” livestock’s long shadow, including “technical”
or “market” fixes, a transition to “new agrarian” methods, and the movement
to a vegetarian or vegan diet. Though important and morally relevant
qualitative differences exist between industrial and non-industrial
methods, this essay will conclude that, given the present and projected
size of the human population, the morality and sustainability of one’s diet
are inversely related to the proportion of animals and animal products
one consumes.

Meat, nutrition, and public health

Humans now derive, on average, one-third of their daily protein and
17 percent of their energy (calories) from animal sources (Steinfeld et al.
2006, 269). Yet, as one would expect, these averages mask great differences
in meat-eating patterns, from a low of 6.6 pounds of meat consumed
per person annually in Bangladesh (Fiala 2008, 413) to a high of
273 pounds per person annually in the United States (Steinfeld et al. 2006,
269). The way that people interact with livestock also varies greatly. While
many wealthy people only interact with animals when they are on their
plate, raising livestock is the primary livelihood of one billion (36%) of
the world’s poorest individuals (those who live on less than $2 US per
day) (Steinfeld et al. 2006, xx and 268). Reflecting this complex reality,
livestock production methods vary considerably, from small-scale operations
using extensive, pasture methods, to large-scale operations using intensive,
industrial methods. While several decades ago the geographical
distribution of these methods, extensive and intensive, would largely have
corresponded to developing and developed nations respectively, this is no
longer the case, with extensive methods increasingly being championed by
environmentally conscious consumers in developed nations and developing
nations seeking to meet rising demand and achieve economies of scale
through the adoption of intensive methods.

Despite these seemingly divergent trends, 80 percent of the considerable
growth in the livestock sector worldwide is from industrial livestock
production (278). The vast majority of the billions of animals raised for
food each year are not wandering the barnyard of a bucolic farm leading
long, relatively carefree lives until the day of slaughter. Most livestock
today, in both developed and developing nations, are raised using
intensive methods in what the industry calls “concentrated animal feeding
operations” (CAFOs, pronounced KAY-foes).(2) As Peter Singer recognized
decades ago in Animal Liberation, animals are no longer raised; they are
produced in modern factory farms where specially bred stocks of animals
are maintained in confined spaces and quickly fattened to slaughter
weight through a high-protein diet, often of corn or soy.(3) Rather than
being raised by many skilled farmhands, a large herd or flock can easily be
 “managed” by low-skilled (read low-wage) workers who maintain feeding
machines, occasionally remove dead or dying animals (“downers”),
and scrape waste into vast “lagoons.” Cows, pigs, sheep, and chickens are
no longer unique and valued (albeit instrumentally) members of an integrated
farm community; they are protein conversion machines: low-value
protein (e.g., corn or soy) goes in and high-value protein (animal flesh)
comes out.

Yet, at the heart of our global food supply is an insidious paradox.
“Today our food supply is nothing less than cornucopian, favoring the
world with unprecedented quantities and varieties of food. Yet more people
and a greater proportion of the world today are malnourished—hungry,
deficient in vitamins or minerals, or overfed—than ever before in
human history” (Gardner and Halweil 2000, 10). Taken on a global scale,
it is estimated that poor nutrition, whether through hunger or overeating,
“easily account[s] for more than half of the global burden of disease”
(35). Many policy makers and health professionals are rightly focused on
the introduction of fat, salt, and sugar (often in the form of corn derivatives)
involved in the industrial processing of our food products, whereas
the over-consumption of animals and animal products receives comparatively
little attention. Yet, by contributing to the spread of antibiotic resistant
infections, the spread of infectious diseases, and the occurrence of
chronic diseases, the mass production and overconsumption of meat now
constitutes one of the single greatest threats to public health. Let us briefly
consider each of these three factors in turn.

In CAFOs cattle are often crammed into feedlots shoulder to shoulder
knee deep in their own excrement, pigs are kept in confined sow crates
with little room to move, and chickens are frequently kept in poorly ventilated
sheds with less than a sheet of paper’s worth of space in their overcrowded
cages. Because of the intense confinement and unclean spaces
found in CAFOs, producers are “forced” to give their herds and flocks
large doses of antibiotics in hopes of avoiding the rapid spread of disease
(and the attending loss of profit). Indeed, half of all antibiotics produced
worldwide are now administered to livestock (Steinfeld et al. 2006, xx
and 273). This routine, preventive use of antibiotics in industrial livestock
production is increasingly recognized as exacerbating what some are calling
an “epidemic” of antibiotic resistant infections (Spellberg 2008). As
within the human community, the overuse of antibiotics is facilitating
the evolution of more antibiotic resistant infections, threatening both the
human and non-human population with treatment-resistant strains and
further burdening already taxed health systems.

Secondly, the proximity of CAFOs to population centers is quickly
creating a strong vector for the spread of infectious disease to the human
population. As the British medical journal The Lancet reports, this is a
particular challenge for officials in developing nations where the siting
of CAFOs close to population centers is facilitating “the emergence of
zoonotic infections, including various viral haemorrhagic fevers, avian influenza,
Nipah virus from pig farming, and BSE [“mad cow” disease] in
cows and its human variant” (McMichael et al. 2007, 1261). The World
Bank goes so far as to claim that the “extraordinary proximate concentration
of people and livestock poses probably one of the most serious environmental
and public health challenges for the coming decades” (cited in
Halweil 2008, 2).

Beyond antibiotic resistance and facilitating the spread of infectious
diseases, the overconsumption of meat is now a leading cause of obesity
(with its attendant health effects) as well as a leading cause of many
chronic or noncommunicable diseases, both in developed and developing
nations.(4) Indeed, the majority of those living in the developed world and
a growing number of individuals in developing nations receive far more
nutrition from animal sources than is healthy. Despite persistent claims
to the contrary, there is little debate among doctors and nutrition experts
that one can have a healthy plant-based diet.(5) For instance, contrary to
the protein myth surrounding a vegetarian diet, on average both vegetarians
and non-vegetarians consume more than the recommended daily allowance
(RDA) of 56 g of protein. Indeed, the average meat-eating
American consumes 77 g of animal protein and 35 g of plant protein daily
for a total of 112 g, twice the RDA for protein suggested by the United
States Department of Agriculture (USDA). Yet, the average vegetarian
consumes 89 g per day (Pimentel and Pimentel 2003, 661s).
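
The arithmetic behind the "twice the RDA" claim is worth making explicit. A minimal sketch using only the figures cited above:

# Reported daily protein intakes vs. the USDA RDA
# (figures from Pimentel and Pimentel 2003, as cited above).
RDA_G = 56                     # recommended daily allowance, grams
meat_eater_g = 77 + 35         # animal + plant protein, grams/day
vegetarian_g = 89              # grams/day
print(f"omnivore: {meat_eater_g} g/day, {meat_eater_g / RDA_G:.1f}x the RDA")    # 112 g, 2.0x
print(f"vegetarian: {vegetarian_g} g/day, {vegetarian_g / RDA_G:.1f}x the RDA")  # 89 g, 1.6x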

As the average person now derives one-third of his or her daily protein
and 17 percent of daily calories from animal sources (Steinfeld et al.
2006, 269), health professionals are increasingly recognizing the link between
high intakes of meat and the rise of non-communicable or chronic
diseases. A diet high in animal-sourced foods contributes significantly to,
among other things, hypertension, heart disease, certain types of cancer,
diabetes, gallstones, obesity, stroke, and food-borne illness (Gardner and
Halweil 2000, 41–42; Steinfeld et al. 2006, 269). With an estimated 66
percent of Americans reported as being overweight or obese,(6) the costs of
treating the effects of obesity continue to escalate. According to the Centers
for Disease Control, in 2000 the total cost of obesity in the United
States was estimated to be $117 billion, which accounts for nearly 10%
of the nation's health care tab.(7)

Given that half the world is malnourished and that more than half
of all disease is linked to poor diet (Gardner and Halweil 2000, 43), it is
no exaggeration to claim that we are in the midst of a nutritional crisis, a
crisis that is largely of our own making. What is often overlooked is the
ethical significance of the overconsumption of animal products and the
role that it plays in this global nutrition crisis. It is a sad testimony to the
great disparity in wealth that exists in the world that, perhaps for the first
time in human history, there are more overfed (about 1 billion) individuals
than malnourished (about 800 million) (Steinfeld et al. 2006, 6). What is
important to note in this context is the sense in which these two figures
are related.

A Protein Factory In Reverse

Though industrial livestock production has dramatically increased
output, this economic efficiency has come at the price of dramatic
ecological inefficiency: animals now detract far more from the total global
food supply than they provide (270). Because only a small portion of
the total energy consumed by an animal is converted into edible biomass,
each movement up the trophic pyramid away from primary producers
results in a significant loss of energy. According to the USDA, the ratio
of kilograms of grain to animal protein is 0.7 to 1 for milk, 2.3 to 1 for
chicken, 5.9 to 1 for pork, 11 to 1 for eggs, 13 to 1 for beef, and 21 to
1 for lamb (cited in Bellarby et al. 2008, 36). In other words, it takes 21
kg of edible grain (or 30 kg of forage) to yield 1 edible kg of lamb and
13 kg of edible grain (or 30 kg of forage) for one kg of beef. Yet a 13:1
protein ratio for beef seems efficient compared to a more comprehensive
energy analysis that includes all “inputs,” such as fertilizers and pesticides,
required to produce a kilogram of beef. According to one study,
to produce one calorie of beef requires 40 calories of fossil fuel (40:1),
compared to 14:1 for milk and 2.2:1 for grain (Baroni et al. 2007, 285). If
animals are now seen by the meat production industry as protein conversion
machines—converting “low value” grain or forage into “high value”
animal protein—then they are very inefficient machines. Indeed, as Frances
Moore Lappé aptly put it, they are more nearly “a protein factory in
reverse” (1991 [1975], 70).
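
To see why Lappé's phrase fits, it helps to tabulate the USDA ratios just cited: any ratio above 1.0 means more edible grain goes in than edible product comes out. A small illustrative sketch:

# USDA feed-conversion ratios (via Bellarby et al. 2008, 36):
# kg of edible grain in per kg of animal product out.
grain_in_per_kg_out = {
    "milk": 0.7, "chicken": 2.3, "pork": 5.9,
    "eggs": 11, "beef": 13, "lamb": 21,
}
for product, ratio in grain_in_per_kg_out.items():
    verdict = "net gain" if ratio < 1 else f"{ratio - 1:.1f} kg of grain lost per kg produced"
    print(f"{product}: {ratio}:1 ({verdict})")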

With a full third of the annual global harvest of grains being fed to
livestock, the scale of lost edible nutrition is as staggering as it is morally
unacceptable. “At present, the US livestock population consumes
more than seven times as much grain as is consumed directly by the entire
American population” (Pimentel and Pimentel 2003, 661s). Indeed, the
grain fed to US livestock alone could feed all of the world’s 800 million
malnourished individuals (Ibid.). While concerns regarding dependency,
distribution and corruption are justified, in a world with increasingly
stressed ecosystems, a rapidly growing human population, and political
unrest caused by high food prices, it is difficult to morally justify this
profligate use of edible nutrition. As high as the human costs in terms of
health and lost nutrition are, much of livestock’s long shadow falls on the
Earth’s water, land, and air.

Water Pressure (8)

For those of us fortunate enough to live in wealthy nations where
sanitation and indoor plumbing are taken for granted and where fresh
water is available in seemingly limitless quantities, it is hard to fathom
the idea that, worldwide, one in six people do not have access to fresh
water and more than twice that number, 2.4 billion people, lack access to
adequate sanitation facilities (United Nations Environment Programme
[UNEP] 2003). It is no exaggeration to say there is a growing freshwater
crisis. Worldwide, humans use three times more water today than in 1960
(Houghton 2009, 188). John Houghton—the founding chair of the Intergovernmental Panel on Climate Change (IPCC)—notes that in many areas the use of freshwater far exceeds the replenishment rate.

The demand is so great in some river basins, for instance the Rio
Grande and the Colorado in North America, that almost no water
from them reaches the sea. Increasingly, water stored over hundreds
or thousands of years in underground aquifers is being tapped for current
use and there are now many places in the world where groundwater
is being used much faster than it is being replenished; every
year the water has to be extracted at deeper levels. For instance, over
more than half the land area of the United States, over a quarter of
the groundwater withdrawn is not replenished and around Beijing in
China the water table is falling by 2 m[eters] a year as groundwater
is pumped out. (188)

According to the United Nations Food and Agriculture Organization
(FAO), “The world is moving towards increasing problems of freshwater
shortage, scarcity and depletion…” (Steinfeld et al. 2006, xxii). By the
year 2025, the FAO estimates that 64% of the world’s population may
live in “water-stressed” basins (Ibid.).(9) And by 2050 the number of individuals
living in severely stressed water basins is projected to rise from 1.5
billion to 3 to 5 billion (Houghton 2009, 193). While it is certainly true
that the rapid growth of the human population is behind many of these
figures, how freshwater is used has as much or more to do with this crisis
than just how many people use it. What many often neglect is the key role
that agriculture, and livestock in particular, play in both the depletion and
degradation of freshwater supplies.

“Domestic” use of water accounts for only 10% of freshwater consumption
while agriculture accounts for 66–70% of global freshwater
usage, making it the single largest user of freshwater.(10) Hidden in this
percentage of water used for agriculture is the amount dedicated to livestock
production, which currently accounts for more than eight percent of
global water use (Steinfeld et al. 2006, xxii). For instance, according to a
study by National Geographic (2010), it takes 1,799 gallons of water
to create one pound (0.5 kg) of beef, 576 gallons for one pound of pork,
468 gallons for one pound of chicken, and 216 gallons for one pound of
soy beans. Overall, it is estimated that producing one kilogram of animal
protein requires 100 times more water than producing one kilogram of
grain protein (Pimentel and Pimentel 2003, 662s).
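
Normalizing the National Geographic figures against the plant baseline makes the disparity easier to see; a minimal sketch:

# Water footprints per pound of product, in US gallons
# (National Geographic 2010, as cited above).
gallons_per_lb = {"beef": 1799, "pork": 576, "chicken": 468, "soy beans": 216}
baseline = gallons_per_lb["soy beans"]
for food, gal in gallons_per_lb.items():
    print(f"{food}: {gal} gal/lb ({gal / baseline:.1f}x soy)")
# beef comes out at roughly 8.3 times the water cost of soy, pound for pound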

The negative implications of livestock production are not limited to
the grossly inefficient use of increasingly scarce freshwater. Livestock production
also has far-reaching impacts on both the replenishment and quality
of freshwater stocks.(11) In the United States, livestock produce ten times
more waste than the human population (Singer 2002 [1975], 168) but,
unlike human waste, which must be cleaned in waste treatment facilities,
livestock effluent is collected in vast lagoons that often leak into aquifers
and waterways. As Schlosser and Wilson vividly describe it, “Each steer
deposits about 50 pounds of urine and manure every day. Unlike human
waste, this stuff isn’t sent to a treatment plant. It’s dumped into pits—gigantic
pools of pee and poop that the industry calls lagoons. Slaughterhouse
lagoons can be as big as 20 acres and as much as 15 feet deep, filled
with millions of gallons of really disgusting stuff” (2006, 166). To further
illustrate the sheer volume of livestock waste, Schlosser and Wilson go
on to note that the two cattle feedlots outside Greeley, Colorado produce
more in animal waste than the humans in the cities of Denver, Boston,
Atlanta, and St. Louis combined (167).

The problems with animal waste polluting aquifers and rivers are further
compounded by the agricultural practices used to create the crops
fed to animals. While global figures are not available, the FAO reports
that “in the United States, with the world’s fourth largest land area, livestock
are responsible for an estimated…37 percent of pesticide use…and
a third of the loads of nitrogen and phosphorus into freshwater resources”
(Steinfeld et al. 2006, xxii). These pesticides and fertilizers make their way
into the groundwater and run off into waterways, polluting freshwater
sources and weakening or destroying already stressed marine ecosystems.
Given the vast quantities of manure, pesticides, and fertilizers generated
by intensive livestock production, we can begin to understand why the
FAO finds that the livestock sector “is probably the largest sectoral source
of water pollution, contributing to eutrophication, ‘dead’ zones in coastal
areas, [and] degradation of coral reefs…” (Ibid., italics added).(12) Even before
the explosion and sinking of a deepwater drilling rig off the coast of
Louisiana (April 2010) dumped millions of gallons of oil into its waters,
the “dead zone” in the Gulf of Mexico was bigger than the state of Massachusetts
(Venkataraman 2008).

In a world with already fragile marine ecosystems and increasingly
scarce freshwater, we can ill afford to continue raising animals by such
methods. Indeed, given that eating meat is nutritionally unnecessary(13) and
detracts more from the global supply of food than it provides,(14) not only is
the inefficient and wasteful use of increasingly scarce freshwater ecologically
unsustainable, it is morally unacceptable to continue to preference
the acquired taste of meat over the need for life-giving freshwater. Unfortunately,
the impact of industrial livestock production is not limited to the
quantity and quality of freshwater or the damage done to fragile marine
ecosystems. The impacts of livestock production on the land and the flora
and fauna that depend on it are equally severe and unsustainable.

Land degradation, deforestation, and the sixth great extinction

For millennia, agricultural production has been the driving force
behind what is euphemistically referred to as “land conversion.” As the
human population races toward an estimated nine billion people by midcentury,
the dimensions of this “conversion” are massive. Nearly a third
of the Earth’s land surface has already been cleared to make way for a
global farm and the rate of clearing is accelerating (Steinfeld et al. 2006,
xxi, 5, and 271–72).

Though few people connect the steak on their plate to deforestation
in the Amazon, the link is now undeniable. “In the Amazon, cattle ranching
is now the primary reason for deforestation” (Steinfeld et al. 2006,
272). Indeed, the ever-expanding demand for beef is the single greatest
contributor to deforestation worldwide. “In Latin America where the
greatest amount of deforestation is occurring—70 percent of previous
forested land in the Amazon is occupied by pastures, and feed crops
cover a large part of the remainder” (xxi). Moreover, after a brief period
of decline, the rate of deforestation for pasture land is once again
increasing, reaching an annual rate of more than 13 million hectares
(over 32 million acres) a year, “an area the size of Greece or Nicaragua”
(UNEP 2003). Not only is the rate of clearing unsustainable, but also the
way that these cleared lands are subsequently being “cultivated” is of
great concern.

The FAO reports that, worldwide, 20 percent of all pastures and
rangelands and nearly 75 percent of those in “dry areas” are being degraded,
“mostly through overgrazing, compaction and erosion…” (Steinfeld
et al. 2006, xxi). In the United States, nearly all (90%) of crop land is
being depleted thirteen times faster than the natural replacement rate of
one ton per hectare per year (Pimentel and Pimentel 2003, 662s). Overall,
in the United States, livestock are responsible for an estimated 55 percent
of soil erosion (Steinfeld et al 2006, 273). In some parts of the world the
conversion of forest and grasslands to pasture or feed crops is depleting
the land causing “desertification.”(15)

In hastening the destructive spread of deserts across ever-larger portions
of the globe, livestock production is threatening not only livestock
and agriculture, but the remaining, already-stressed ecosystems.(16) As
farmers and ranchers clear forested land and draw ever-larger checks on
the non-renewable stores of fossil energy to fuel our global farm, we are
pushing many species to extinction.

There is wide consensus among biologists that the present rate of
extinction is 50 to 500 times the normal “background rate” revealed by
the fossil record (Woodruff 2001, 5471). It is because of this that some
claim that we are in the midst of the sixth great extinction in the history
of our planet. Though many environmental philosophers recognize
the seriousness of rapid anthropogenic species extinction, few note that
the production of meat may now be “the leading player in the reduction
of biodiversity, since it is the major driver of deforestation, as well as
one of the leading drivers of land degradation, pollution, climate change,
overfishing, sedimentation of coastal areas and facilitation of invasions by
alien species” (Steinfeld et al. 2006, xxiii, italics added). To adapt a memorable
phrase from Peter Singer: we are quite literally gambling with the
future of millions of forms of life on Earth for the sake of hamburgers.(17)

Cooking the Planet

In considering responses to global climate change, what has largely
been lost in all of the “green” talk about fuel efficient cars and compact
fluorescents, windmills and photovoltaics, is the fact that the food we
eat contributes more to global climate change than what we drive or the
energy we use. Worldwide, emissions from agriculture exceed both power
generation (McMichael et al. 2007, 1259) and transportation (Steinfeld et
al 2006, xxi; Pelletier and Tyedmers 2010a, 2), contributing as much as a
third of all greenhouse gas emissions (Bellarby et al., 2008, 5).(18) The portion
of these emissions dedicated to livestock production is substantial,
constituting approximately 18 percent of global anthropogenic greenhouse
gas (GHG) emissions (Steinfeld et al. 2006, xxi; Halweil 2008, 2;
Pelletier and Tyedmers 2010a, 2). Beyond the unstated taboo against publicly
criticizing the morality of various food choices, part of the reason that
the livestock sector is often omitted or ignored in discussions of global
climate change may be that it is responsible for a relatively small portion
of direct global carbon dioxide emissions (9%), primarily from the burning
of biomass (deforestation) to create feedcrops or pasture. However, a
closer analysis reveals that meat production has a much larger role in the
emission of methane (CH4), a potent heat-trapping gas.

Whereas carbon dioxide concentrations in the atmosphere have increased
by more than a third over pre-industrial levels, the concentration
of methane has more than doubled in the last two centuries (Houghton
2009, 20, 50). Methane is formed through anaerobic breakdown of organic
matter. Thus, there are “natural” sources of methane, the most
important of which are wetlands and termite mounds. The major anthropogenic
sources are coal mining, leakage from natural gas pipelines and
oil wells, rice paddies, biomass burning (burning of wood and peat), and,
most important for present purposes, waste treatment (manure) and enteric
fermentation (bovine flatulence) (Houghton 2009, 50).(19) Though still
present in the atmosphere in far smaller amounts than carbon dioxide
(1.775 parts per million (ppm) vs. 380 ppm), methane plays a disproportionate
role in global warming, contributing 21 percent of all anthropogenic
warming (35). The reason for this has to do with differences in the
molecular properties of atmospheric methane.
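
The word "disproportionate" can be given a number. Treating carbon dioxide and methane as the only two gases (a simplification for illustration only), methane's share of the combined concentration is under half a percent, yet it accounts for about a fifth of anthropogenic warming:

# Methane's share of concentration vs. its share of warming
# (concentrations and warming share from Houghton 2009, as cited).
ch4_ppm, co2_ppm = 1.775, 380
warming_share = 0.21
concentration_share = ch4_ppm / (ch4_ppm + co2_ppm)
print(f"CH4: {concentration_share:.2%} of combined concentration")   # ~0.46%
print(f"CH4: {warming_share:.0%} of anthropogenic warming")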

Unlike carbon dioxide, which is gradually “taken up” by land biota
or the ocean,(20) methane is chemically broken down in the atmosphere,
lasting an average of only twelve years.(21) This relatively short lifecycle is
offset by the fact that methane is far more potent at trapping heat than
carbon dioxide. Indeed, molecule-for-molecule, methane traps twenty-three
times as much heat as carbon dioxide. Taking this differing global
warming potential into account, we can calculate the overall footprint of
livestock production in terms of carbon dioxide equivalent. According
to a recent study, “to produce 1 kg of beef in a US feedlot requires the
equivalent of 14.8 kg of CO2. As a comparison, 1 gallon of gasoline emits
approximately 2.4 kg of CO2. Producing 1 kg of beef thus has a similar
impact on the environment as 6.2 gallons of gasoline, or driving 160 miles
in the average American mid-size car” (Fiala 2008, 413). Overall then,
factoring in both direct and indirect emissions and the differences in lifecycle
and potency of different gases, the livestock sector is responsible for
nearly a fifth (18%) of all GHG emissions worldwide. It would seem that
the chickens in our pots are more responsible for global climate change
than the cars in our garages.(22)
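
Fiala's beef-to-gasoline comparison can be reconstructed from the two emission factors given. A minimal sketch; the implied fuel-economy figure is a back-calculation of mine, not a number reported in the study:

# Reconstructing the beef/gasoline equivalence (Fiala 2008).
beef_co2e_kg = 14.8       # kg CO2-e per kg of US feedlot beef
gasoline_co2_kg = 2.4     # kg CO2 per gallon of gasoline
gallons_equiv = beef_co2e_kg / gasoline_co2_kg
print(f"{gallons_equiv:.1f} gallons of gasoline per kg of beef")   # ~6.2
print(f"implied fuel economy: {160 / gallons_equiv:.0f} mpg")      # ~26 mpg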

This realization is alarming as the effect of even the relatively small
amount of warming (0.6 °C ± 0.2 °C) in the twentieth century is already
being felt, particularly in northern latitudes, where the effects are amplified.(23) 
In the coming decades these changes will accelerate with the rising
temperature. Though there will be regional winners and losers, generally
those least responsible for causing the heat trapping gases (the developing
nations) are expected to be most severely affected by the changing climate,
including melting icecaps and glaciers, rising sea levels, shifting weather
patterns, more intense storms, drought, desertification, species extinction,
salinization of freshwater, spread of infectious disease, and millions of
environmental refugees.

In sum, we have found that livestock cast a very long shadow indeed. The
mass consumption of animals (and the intensive, industrial methods that
make them possible) is a primary reason why humans are hungry, fat,
or sick and is a leading cause behind the depletion and pollution of waterways,
the degradation and deforestation of the land, the extinction of
species, and the warming of the planet. The urgency of this realization becomes
even more apparent when considered in light of the rapidly accelerating
rate of meat consumption, which is expected to more than double
by 2050 from the 1990 level of 229 million tons per year to 465 million
tons (Steinfeld et al. 2006, xx). As the FAO notes, the “environmental
impact per unit of livestock” must be halved just to maintain the current
level of environmental damage, which is itself already environmentally
unsustainable (ibid.).
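
The FAO's "must be halved" claim is itself a piece of simple arithmetic: if production roughly doubles, per-unit impact must fall to roughly half just to hold total damage constant. A minimal sketch:

# Why per-unit impact must be halved just to stand still.
tons_1990, tons_2050 = 229e6, 465e6       # meat production, actual and projected
growth = tons_2050 / tons_1990
print(f"production grows {growth:.2f}x")                   # ~2.03x
print(f"per-unit impact must shrink to {1 / growth:.0%}")  # ~49% of today's level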

Even in its characteristically guarded manner, the FAO is surprisingly
direct: “Better policies in the livestock sector are an environmental requirement,
and a social and health necessity” (4). Given that livestock’s
“contribution to environmental problems is on a massive scale…its potential
contribution to their solution is equally large. The impact is so
significant that it needs to be addressed with urgency. Major reductions in
impact could be achieved at reasonable cost” (xx). Let us transition, then,
to consider how, according to the FAO, livestock’s long shadow might be
shortened.

Efficiency, technology, and the market

The FAO suggests the following specific measures to mitigate the environmental
impact of livestock production.

• Agricultural subsidies—Governments should commit to the
gradual elimination of “often perverse subsidies,” which too
often “encourage livestock producers to engage in environmentally
damaging activities” (xxiii-xxiv).
• Overgrazing—The impact of grazing can be mitigated
through the institution of grazing fees (pricing the commons),
and restricting livestock access to waterways, which
reduces erosion, sedimentation, and pollution (xxi).
• Freshwater—Irrigation water should be properly priced.
Livestock access to waterways and riparian areas should
be strictly limited. Producers should utilize irrigation practices
and technology that reduce loss of freshwater through
evaporation and leakage (xxii).
• Manure—Research and implementation of integrated manure
management practices should be accelerated, including
biogas digestion and methane capturing systems. This technology
has the benefit of capturing heat trapping methane
as an energy source, reducing water pollution, and creating
high-quality fertilizer that can return nutrients to the soil
(279).(24)
• Soil conservation—Soil erosion and degradation can be
mitigated through already known practices, such as avoiding
bare fallow, the appropriate use of fertilizers, "silvopastoralism,"
and controlled exclusion from “sensitive areas”
(xxi).(25)
• Decentralization—Zoning laws should be created or
changed to site CAFOs away from population centers. This
will mitigate infectious disease vectors and “bring waste
generated into line with the capacity of accessible land to
absorb that waste” (279).
• CAFOs—Developing nations should accelerate the transition
to intensive, industrial livestock production to increase
resource efficiency and decrease environmental damage per
unit of livestock (278).

The FAO suggests that industry and political leaders worldwide
should urgently consider implementing these changes to how animals are
raised for food. For centuries the prices (if set at all) of water,
land, and feed have not reflected their actual scarcity. The failure to internalize
the cost of these “externalities” has led to artificially low prices
and the “overexploitation and pollution” of the global commons (xxiii
and 277). From an economic perspective, better “internalizing” costs will
allow market forces to moderate demand; paying the “true cost” of meat
will make it more expensive, which in turn is likely to result in a reduction
in consumption and production. The elimination of agricultural subsidies
and the pricing of water and pastureland would help to reduce the ongoing
destruction of the commons. Given the entrenched nature of global
subsidy schemes around the world, the political viability of this route is
in doubt.

From the perspective of ethicists and activists concerned with animal
welfare, the FAO’s most controversial recommendation is likely to be
that nations should hasten the transition to CAFOs. In its report the FAO
claims that the environmental problems caused by industrial livestock
production are not from their “large scale” or “production intensity,” but
from their “geographical location and concentration” (278). For instance,
the FAO argues that raising animals in concentrated animal feeding operations
(CAFOs), rather than using pasture-based methods, will decrease
deforestation for pasture, thereby reducing a major source of greenhouse
emissions caused by the livestock sector.(26)

I will evaluate the sustainability of adopting the FAO’s suggestions
more fully in the final section. Presently I note that, as important as many
of the FAO’s suggested changes are, it is misleading to suggest that they
would significantly mitigate livestock production’s high cost to animals,
human health, and the environment. For instance, while increasing the
intensity of livestock production would likely decrease deforestation for
pasture, it would do nothing to reduce (and may in fact increase) deforestation
for feedcrops. Further, increasing the industrial production of
livestock would result in a corresponding increase in the loss of edible
nutrition, use of freshwater, spread of antibiotic-resistant disease, and incidence
of disease caused by the overconsumption of animals.

As Pelletier and Tyedmers conclude in their analysis of the FAO report:
“Given the magnitude of necessary efficiency gains, it would appear
highly unlikely that technological improvements alone will be sufficient
to achieve the objective of maintaining the proportional contribution of
the livestock sector to cumulative anthropogenic contributions to these issues…”
(Pelletier and Tyedmers 2010a, 3). As I will argue more fully in the
final section, even if all of the FAO’s recommended measures were implemented,
meat production practices would remain woefully unsustainable.
As Pelletier and Tyedmers put it, there is a “profound disconnect between
the anticipated scale of potential environmental impacts associated with
projected livestock production levels and the most optimistic mitigation
strategies relative to these current, published estimates of sustainable biocapacity”
(2).

In focusing exclusively on reforming livestock production methods
and refusing to recommend explicitly the reduction of meat consumption,
the FAO’s report gives the false impression that current meat consumption
practices can indefinitely continue, if only methods were made more “efficient”
by applying industrial techniques.(27) Unfortunately, as I will show,
these market-based “technical fixes” would do little more than slow the
bleeding of a gaping, infected wound. Indeed, in a telling passage the FAO
seems to recognize this, noting that “by applying scientific knowledge and
technological capability” we can at best “offset” some of the damage.
“Meanwhile, the vast legacy of damage leaves future generations with
a debt” (Steinfeld et al. 2006, 5). Recognizing that current industrial agricultural
and livestock production methods are unsustainable, some are
calling for more dramatic changes to the way animals are raised.

Let them Eat Grass

 A raft of largely popular books decrying the industrialization of food
production has reached a new high-water mark, led most vocally and
eloquently by the journalist Michael Pollan.(28) Unlike the philosophers and
activists of an earlier generation who, inspired by the work of Peter Singer
and Tom Regan, fought against industrial farming because of the excessive
suffering caused to animals, this “new agrarian farming movement”
is focused more on the human and environmental costs of industrialized
food production.(29) Though the movement is diverse, it is largely characterized
by a return to more “natural” methods of producing food and raising
animals, including local, organic produce and free-range animals. Thus,
there is a hue and cry for a movement away from CAFOs, not necessarily
because of the pain and suffering that they undeniably cause to the animals,
but because of the human and environmental damage they inflict.
While a complete analysis of the new agrarian movement is not possible
here, it is important to consider whether and how a move away from
intensive, factory farming and toward extensive, pasture-based methods
would address the significant human and environmental harms currently
caused by livestock production.

First, although perhaps not its explicit intention, new agrarian methods
would dramatically improve the lives of livestock. As philosophers and
animal activists have rightly noted for decades, intensive factory farming
methods (especially in the United States) are unimaginably cruel. There
is little dispute that most of the animals raised in CAFOs lead short lives
of intense suffering. “The crucial moral difference,” Pollan rightly notes,
“between a CAFO and a good farm is that the CAFO systematically deprives
the animals in it of their ‘characteristic form of life’” (2007, 321).(30)
Animals should be returned, Pollan argues, to their rightful evolutionary
role as members of a complex farming community symbiotically related
in complex webs of interdependence.(31)

The new agrarians argue that the elimination of CAFOs would not
only be good for the animals themselves, it would also be good for humans.
First, the widespread adoption of new agrarian methods would reduce the
spread of treatment resistant infections by eliminating the preventive use
of antibiotics. Second, by eliminating the confined, unsanitary conditions
of CAFOs and their close proximity to population centers, pasture-based
livestock production would reduce the risk of spreading infectious diseases
from livestock to the human community. However, the most significant
benefit to human health would probably come from the reduction of meat
consumption caused by dramatically higher meat prices. Presumably, the
methods advocated by the new agrarian movement would entail much
smaller herds and flocks which, combined with the proposed elimination
of agricultural subsidies, would dramatically increase the price of meat
(and other industrially processed foods). This decrease in supply and increase
in price of meat would likely result in a reduction in consumption,
which would have significant benefits for human health. As The Lancet
found in its recent study, a “substantial contraction” in meat consumption
should benefit human health “mainly by reducing the risk of ischaemic
heart disease…, obesity, colorectal cancer, and, perhaps, some other cancers”
(McMichael et al. 2007, 1254). In this way, proponents of the new
agrarian movement argue, meat would remain a part of the human diet,
but it would play a noticeably smaller role.

This return to a more “traditional diet” was first championed by the
Rachel Carson of the food movement, Frances Moore Lappé (1991 [1971],
13). Animal flesh has been part of Homo sapiens' diet for millions of years,
but until recently it has always played a minor role. This evolutionary
perspective on meat eating is also at the heart of Pollan’s discussion in
his acclaimed The Omnivore’s Dilemma. Pollan takes issue with animal
welfare advocates who equate the domestication and raising of animals
with "exploitation" or "slavery," arguing that this betrays a fundamental
misunderstanding of the relationship between humans and livestock.
“Domestication is an evolutionary, rather than a political, development”
Pollan writes. “It is certainly not a regime humans somehow imposed on
animals some ten thousand years ago” (2007, 320). Rather, Pollan argues,
the raising of animals for food and labor is an instance of human predation
and, as such, it is an instance of “mutualism or symbiosis between
species” (Ibid.). The suggestion, then, is that humans should see the raising
and consuming of animals not as a regrettable moral failing but as an ecologically
vital part of our evolutionary heritage. “Indeed,” Pollan argues,
“it is doubtful you can build a genuinely sustainable agriculture without
animals to cycle nutrients and support local food production. If our concern
is for the health of nature—rather than, say, the internal consistency
of our moral code or the condition of our souls—then eating animals may
be the most ethical thing to do” (327).

Overall, then, advocates of the new agrarian movement argue that,
compared to the dominant industrial model, the organic, pasture-based
methods are better for the animals raised, for the humans who eat them,
and for our shared natural environment. As a comparative judgment, I am
in agreement with this claim. The methods of the new agrarian movement
are in many ways an improvement over the industrial livestock practices
encouraged by the FAO and used by the majority of producers around
the world.

Further, advocates of the new agrarian movement are right to note
that vegetarians and vegans should not presume that the elimination of
meat automatically makes their diet environmentally sustainable. The
more industrial the agricultural processes involved in producing one’s
food, whether meat or plants, the greater the ecological impact. Ecologically
speaking, a vegetarian diet based on heavily processed meat substitutes
made out of plants that were raised in monoculture on formerly
forested lands using large quantities of pesticides and fertilizers may be
more ecologically destructive than eating a grass-fed cow.

Thus, I join those in the new agrarian movement in recognizing that
the act of eating (whether plants or animals) is a fundamentally ecological
act. The consumption of one organism by another is perhaps the most
basic form of ecological relation. Through the act of consumption, the
other literally becomes part of one’s being. Indeed, it is important to recognize
that every organism destroys others that it might live and thrive;
such destruction is at the very heart of the act of living. As Alfred North
Whitehead once noted, "Life is robbery…" (Whitehead 1978 [1929], 105).
Every organism takes from others to sustain itself. This view is consistent
with an appropriate, ecological view of our world. Ecologically speaking,
the destruction of life is a vital part of the flow of energy through natural
systems. And yet while life does indeed involve robbery, as Whitehead
rightly recognized, “the robber requires justification” (105). As moral
agents, our robbery of life must be justified.

Given the ecological standpoint adopted here, the morality of one’s
diet is not merely determined by what is eaten, but also how what is eaten
is produced. That is, the question is not whether one’s diet is environmentally
destructive, but how destructive it is. While there are important,
morally relevant differences between plants and animals, vegetarians and
vegans should not be seduced into thinking that their hands are clean because
they don’t eat animals. Once we appreciate the embedded nature of
our ecological existence, we realize that no living being has “clean hands.”
Every living organism must destroy others in order that it might sustain
itself. Humans are no exception. It is not possible for humans—or any
other living being—to sustain themselves without destroying other beautiful
and complex forms of life. Such a moral position resists the temptation
to reduce the moral life to simplistic binary states of “good” and “bad.” In
the final analysis, there are only ameliorative grades of better and worse
relative to an ever-evolving moral ideal. In a world replete with beautiful
and unique achievements of life, our aim as moral agents should be
to avoid destroying or maiming another being unless such destruction is
necessary in order to achieve the most robust, rich, and beautiful result
possible.(32) The act of eating is an inherently moral act; our robbery of life
must be continually justified.

Yet is pointing, as Pollan and Lappé do, to the evolutionary basis of
our meat consumption a sufficient moral justification of continuing the
practice? No. Explaining the genesis of a practice is not yet to give its
moral justification. Indeed, Pollan himself makes this point. “Do you really
want to base your moral code on the natural order? Murder and rape
are natural, too. Besides, we can choose: Humans don’t need to kill other
creatures in order to survive; carnivorous animals do” (2007, 320). Given
that humans don’t need to kill other creatures in order to survive or even
thrive, we need to morally justify the choice. Beyond the evolutionary
argument, the moral weight of the argument for continuing to eat animals
would seem to rest on the claim that truly sustainable agriculture requires
the use of livestock to complete the nutrient cycle. Yet is this the case? To
conclude that such methods are better than industrial methods is not yet
to have shown they are good. Is in fact eating meat “the most ethical thing
to do”?

In his recent essay Vasile Stănescu has noted that there is an often unrecognized
“dark side” to Pollan’s and Kingsolver’s new agrarian model.(33)
By creating “an idealized, unrealistic, and, at times, distressingly sexist and
xenophobic literary pastoral…” the new agrarian movement encourages
“traditional” gender roles and national or regional identities over against
foreign workers and food (Stănescu 2010, 10). While there is no necessary
connection between the adoption of pasture-based livestock production
and a nostalgia for supposed "traditional ways," Stănescu is right to question
whether, embedded within the call to return animals to the land, is
also a call to return women to the kitchen and men to the range.

However, Stănescu's critique goes beyond questioning the narrative
that underlies the new agrarianism. He also notes that the problem with
the new agrarian model is that “it is simply factually untrue” (12). Given
the world’s current and projected rate of meat consumption, he argues
that it is doubtful whether it is physically possible to raise livestock via
pasture-based methods. “[L]ocally based meat, regardless of its level of
popularity, can never constitute more than either a rare and occasional
novelty item, or food choices for only a few privileged customers, since
there simply is not enough arable land left in the entire world to raise
large quantities of pasture fed animals necessary to meet the world’s meat
consumption" (Stănescu 2010, 14–15). This brings us finally to the crux
of the issue: is it in fact possible to feed sustainably the present and projected
human population on a diet based significantly on the consumption
of animals?

A More Sustainable Diet

The human population will soon pass the seven billion mark.(34) Over
the next forty years (by 2050), the United Nations estimates that at least
two billion more humans will be born.(35) Those billions of people will
need significant quantities of freshwater and food. If present trends are
any indication, much of this food will be in the form of animal products.
Assuming the wide adoption and continued improvement of livestock
production methods as suggested by the FAO’s report, what are the likely
environmental impacts of a future with nine billion meat eaters? Is the
FAO right that livestock production can be made sustainable through the
intensification of livestock production? Or are advocates of the new agrarianism
right that the only form of sustainable agriculture is one based on
pasture-raised animals? On our increasingly small planet, what form of
diet is the most ethically responsible and environmentally sustainable?

To help answer these crucial questions, I turn to a recent study of the
FAO’s report by Pelletier and Tyedmers. In their study they use “simplified
but robust models to conservatively estimate” the likely environmental
impacts in 2050 of different dietary scenarios for meeting the USDA
recommendations for protein consumption (2010a, 3). The “FAO projection
scenario” represents the status quo baseline of projected increases
in animal product consumption, which as we have seen is expected to be
double that of 1990 levels (Steinfeld et al. 2006, xx). In the “substitution
scenario,” less efficient ruminant products (cows, sheep, goats, milk) are
replaced by monogastric products (chickens, turkeys, eggs). Finally, Pelletier
and Tyedmers consider the anticipated environmental impact of a
“soy protein scenario,” in which the recommended daily allowance (RDA)
of protein is derived entirely from soy protein sources (vegan diet).

This study is particularly useful for our purposes because each of these
scenarios is then compared against recent estimates of “environmental
boundary conditions” for sustainable greenhouse gas emissions, reactive
nitrogen mobilization,(36) and anthropogenic biomass appropriation. These
boundary conditions are defined as “biophysical limits which define a safe
operating space for economic activities at a global scale” (Pelletier and Tyedmers
2010a, 1–2). For instance, citing work by Allison, et al., Pelletier and
Tyedmers suggest that—if warming this century is to be limited to two degrees
Centigrade, which is required to avoid the most severe environmental
disruptions projected by the IPCC—annual per capita greenhouse gas emissions
must be limited to one metric ton (2).(37) Likewise, Pelletier and Tyedmers
use Bishop, et al.’s estimate that humanity can “sustainably appropriate
9.72 billion tons of net primary production annually without undermining
the biodiversity support potential of global ecosystems” (2010b, 3).(38)
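To get a concrete sense of the scale these boundary conditions imply, consider a rough back-of-the-envelope calculation (mine, not the study's) that multiplies the per-capita emissions ceiling by the projected midcentury population; the twenty-ton American figure comes from note 37 below:

    # A minimal sketch, in Python, of the arithmetic implied by the cited
    # per-capita greenhouse gas boundary. The figures come from the text
    # above; the calculation itself is only illustrative.
    PER_CAPITA_BUDGET_T = 1.0    # sustainable metric tons CO2-e per person per year
    POPULATION_2050 = 9.0e9      # projected human population by midcentury
    US_PER_CAPITA_2000_T = 20.0  # average American in 2000 (see note 37)

    global_budget_gt = PER_CAPITA_BUDGET_T * POPULATION_2050 / 1e9
    print(f"Global sustainable GHG budget: about {global_budget_gt:.0f} "
          f"billion tons CO2-e per year")
    print(f"The average American in 2000 exceeded the per-capita boundary "
          f"{US_PER_CAPITA_2000_T / PER_CAPITA_BUDGET_T:.0f}-fold")

On these figures, all human activity combined would need to stay within roughly nine billion tons of CO2-equivalent per year, a per-capita budget the average American of 2000 overshot twentyfold.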

Although far from a complete account of sustainability, Pelletier
and Tyedmers’ study provides a helpful model for evaluating whether
human activity is sustainable with regard to these three critical areas. All
of human activity—including not only food production, but also energy
production, manufacturing, and transportation—must fall within these “environmental
boundary conditions” if humanity is to avert “irreversible
ecological change” (2010a, 3).

The results of Pelletier and Tyedmers’ study are staggering. While recognizing
that their models still embody “considerable uncertainty,” they
find that “by 2050, the livestock sector alone may either occupy the majority
of, or considerably overshoot, current best estimates of humanity’s safe
operating space in each of these domains” (2).(39) Specifically, by 2050, in
order to meet FAO projected livestock demand (FAO scenario), livestock
production will require 70% of the sustainable boundary conditions for
greenhouse gas emissions, 88% of sustainable biomass appropriation, and
294% of sustainable reactive nitrogen mobilization (2). Thus, according
to these conservative estimates, if humans consume animal-sourced proteins
at the rates projected by the FAO, livestock production alone will
consume the majority of or exceed entirely the sustainable boundary conditions
in these three critical areas.
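To make these percentages vivid, the following sketch (my own illustration, using only the shares just quoted) computes how much of each “safe operating space” would remain for every other human activity under the FAO scenario:

    # Illustrative only: livestock's share of each boundary condition under
    # the FAO projection scenario, as reported by Pelletier and Tyedmers,
    # and the headroom left for the rest of the economy (energy,
    # manufacturing, transportation, and everything else).
    livestock_share = {
        "greenhouse gas emissions": 0.70,
        "biomass appropriation": 0.88,
        "reactive nitrogen mobilization": 2.94,
    }

    for domain, share in livestock_share.items():
        headroom = 1.0 - share
        if headroom >= 0:
            print(f"{domain}: {headroom:.0%} of the safe space left for everything else")
        else:
            print(f"{domain}: boundary overshot by {-headroom:.0%} before counting anything else")

Under the FAO scenario, in other words, all non-livestock activity would have to fit within 30% of the emissions budget and 12% of the biomass budget, while the nitrogen boundary would already be overshot nearly threefold by livestock alone.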

Note that, since they are limited to direct greenhouse gas emissions
and direct appropriation of biomass, these figures are, if anything, likely
to understate the sector’s true impacts. If indirect emissions and biomass appropriations
are included, for instance by including the effects of land-use conversion,
then it is likely that the sustainable boundary conditions for both
GHG emissions and biomass appropriation would also be exceeded (Pelletier
and Tyedmers 2010b, 3). In modeling the likely direct emissions and
biomass appropriation, Pelletier and Tyedmers provide an important response
to the widely touted work of Pitesky, Stackhouse, and Mitloehner,
which takes issue with several of the FAO’s conclusions.(40) Relevant here
is the claim that increasing the intensity of livestock production in developing
nations would alleviate the need for deforestation and would be
sufficient to make livestock emissions sustainable. However, Pelletier and
Tyedmers’ model demonstrates that this reasoning is likely to be mistaken.
Even with the widespread use of the most “efficient” livestock production
methods, livestock production would use an unsustainable portion of the
environmental boundary conditions for carbon dioxide emissions, biomass
appropriation, and, especially, reactive nitrogen mobilization.

What if, instead of relying on ruminant sources of protein (beef,
sheep, goat, and milk), humans derived their protein from more efficient,
monogastric sources (chicken, turkey, and eggs) as in the substitution scenario?(41)
According to Pelletier and Tyedmers, if poultry products were
consumed instead of ruminants, “anticipated marginal CO2-e emissions
would rise by 22% and biomass appropriation would increase by 15%
relative to year 2000 levels.… However, relative to the FAO projections
scenario, substituting poultry for marginal ruminant production would
reduce greenhouse gas emissions by only 13%, biomass appropriation by
5%, and reactive nitrogen mobilization by 8%” (Pelletier and Tyedmers,
2010b, 3). Thus, overall, the substitution scenario would only yield an
aggregate reduction in impacts of 5–13% over that of the FAO projection
scenario, suggesting that the sustainability of a diet of mainly monogastric
animals is also doubtful.

What if all humans obtained their recommended daily intake of protein
from plant (in this case soybean) sources as in the soy protein scenario?
Producing the 457,986 thousand tons (roughly 458 million tons) of soybeans (ibid.) necessary
to feed the projected nine billion humans in 2050 would no doubt have
a considerable impact on the environment. However, relative to the FAO
scenario for 2050, it would represent a 98% reduction of greenhouse gas
emissions, a 94% reduction in biomass appropriation, and a 32% reduction
in reactive nitrogen mobilization. Thus, the entire human population
could, in principle, meet its protein needs from plant sources and only
contribute 1.1% of sustainable greenhouse gas emissions, 1.1% of sustainable
biomass appropriation, and 69% of sustainable reactive nitrogen
mobilization (ibid.). Thus, a plant-based diet is not only more healthful
than the other diets,(42) but also the most sustainable form of diet.(43)
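Setting the three scenarios side by side makes the comparison stark. In the sketch below, the FAO and soy figures are those reported above; the substitution-scenario shares are my own derivation, under the simplifying assumption that the reported 13%, 5%, and 8% reductions scale the FAO-scenario shares directly:

    # Shares of each sustainable boundary condition consumed by protein
    # production in 2050 under the three scenarios. FAO and soy values are
    # as reported by Pelletier and Tyedmers (2010a, 2010b); substitution
    # values are derived here under a simplifying proportionality assumption.
    fao = {"GHG": 0.70, "biomass": 0.88, "reactive N": 2.94}
    reductions_vs_fao = {"GHG": 0.13, "biomass": 0.05, "reactive N": 0.08}
    soy = {"GHG": 0.011, "biomass": 0.011, "reactive N": 0.69}

    substitution = {k: fao[k] * (1 - reductions_vs_fao[k]) for k in fao}

    print(f"{'domain':<12}{'FAO':>8}{'substitution':>14}{'soy':>8}")
    for k in fao:
        print(f"{k:<12}{fao[k]:>8.0%}{substitution[k]:>14.0%}{soy[k]:>8.1%}")

On any such reckoning, only the plant-protein scenario stays comfortably within all three boundaries; the substitution scenario merely trims an already unsustainable footprint.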

Thus, even under the most optimistic scenarios for technological improvements
in livestock efficiency, nine billion humans could not continue
to eat animals at the current and projected rates and avoid catastrophic
environmental harms. “As the human species runs the final course of rapid
population growth before beginning to level off midcentury,” Pelletier and
Tyedmers (2010a) write, “reining in the global livestock sector should be
considered a key leverage point for averting irreversible ecological change
and moving humanity toward a safe and sustainable operating space” (3).
In the end, the more animal products one consumes, the more destructive
one’s diet is to the environment. Though important and morally relevant
qualitative differences exist between industrial and non-industrial
methods, given the present and projected size of the human population,
the morality and sustainability of one’s diet are inversely related to the
proportion of animals and animal products in one’s diet. Thus, if we are
to ensure adequate food and water for all humans without exceeding the
Earth’s capacity to support life, we must find the courage to address directly
the morality of eating meat on an increasingly small planet.

ACKNOWLEDGEMENTS

The title of this work was inspired by the report of the Food and Agriculture
Organization (FAO) of the United Nations. Henning Steinfeld et
al., Livestock’s Long Shadow: Environmental Issues and Options, Food
and Agriculture Organization of the United Nations, 2006, http://www.fao.org/docrep/010/a0701e/a0701e00.HTM. The author wishes to extend
his sincere thanks to the generous blind reviewer, whose comments
and suggestions have greatly improved this essay, and to Suzie Henning
and David Perry for their keen copyediting skills.

NOTES

1 Although “meat” should be inclusive of all forms of animal flesh, including
aquatic, following standard usage in this field, the term “meat” will largely
refer to beef, pork, chicken, and lamb.
2 According to Halweil (2008), “Factory farms account for 67 percent of poultry
meat production, 50 percent of egg production, and 42 percent of pork
production” (2).
3 See Singer 2002 [1975].
4 Cf. “Worldwide the number of overweight people (about 1 billion) has now
surpassed the number of malnourished people (about 800 million). And a
significant part of the growth in obesity occurs in the developing world. For
example, the World Health Organization (WHO) estimates that there are 300
million obese adults and 115 million suffering from obesity-related conditions
in the developing world” (Steinfeld et al. 2006, 6).
5 Cf. “It is the position of the American Dietetic Association that appropriately
planned vegetarian diets, including total vegetarian or vegan diets, are healthful,
nutritionally adequate, and may provide health benefits in the prevention
and treatment of certain diseases” (“Position of the American Dietetic Association:
Vegetarian Diets” 2009, 1266).
6 Cf. “Results from the 2005–2006 National Health and Nutrition Examination
Survey (NHANES), using measured heights and weights, indicate that an
estimated 32.7 percent of US adults 20 years and older are overweight, 34.3
percent are obese and 5.9 percent are extremely obese” (Centers for Disease
Control and Prevention 2008).
7 Cf. Centers for Disease Control and Prevention 2010; Gardner and Halweil
2000, 8.
8 This appropriate heading was used in a recent issue of the National Geographic
focused on water use (National Geographic 2010).
9 See also, “The extent to which a country is water stressed is related to the
proportion of the available freshwater supply that is withdrawn for use…”
(Houghton 2009, 188).
10 See Houghton 2009, 188 and Steinfeld et al. 2006, 5. According to Pimentel
and Pimentel, “in the Western United States, agriculture accounts for 85% of
freshwater use” (Pimentel and Pimentel 2003, 662s).
11 Cf. “Livestock also affect the replenishment of freshwater by compacting soil,
reducing infiltration, degrading the banks of watercourses, drying up floodplains
and lowering water tables. Livestock’s contribution to deforestation
also increases runoff and reduces dry season flows” (Steinfeld et al. 2006,
xxii).
12 This quote continues, “The major sources of pollution are from animal
wastes, antibiotics and hormones, chemicals from tanneries, fertilizers and
pesticides used for feedcrops, and sediments from eroded pastures.”
13 Cf. note 6.
14 Cf. “In simple numeric terms, livestock actually detract more from total food
supply than they provide. Livestock now consume more human edible protein
than they produce. In fact, livestock consume 77 million tonnes of protein
contained in feedstuff that could potentially be used for human nutrition,
whereas only 58 million tonnes of protein are contained in food products that
livestock supply” (Steinfeld et al. 2006, 270).
15 Cf. “Desertification…is the degradation of land brought about by climate
variations or human activities that have led to decreased vegetation, reduction
of available water, reduction of crop yields and erosion of soil” (Houghton
2009, 197).
16 Cf. “The United Nations Convention to Combat Desertification (UNCCD)
set up in 1996 estimates that over 70% of these dry lands, covering over 25%
of the world’s land area, are degraded and therefore affected by desertification”
(Houghton 2009, 197).
17 Cf. “We are, quite literally, gambling with the future of our planet—for the
sake of hamburgers” (Singer [1975] 2002, 169).
18 Cf. “The total global contribution of agriculture, considering all direct and
indirect emissions, is between 8.5–16.5 Pg CO2-eq, which represents between
17 and 32% of all global human-induced GHG emissions, including land use
change…” (Bellarby 2008, 5).
19 For a breakdown of methane emission by source, see Houghton 2009, 53,
table 32.
20 Although the shorthand of one century is often used for the lifetime of carbon
in the atmosphere, the actual lifecycle is more complicated because reservoirs
“turnover” at a wide range of timescales, “which range from less than
a year to decades (for exchange with the top layers of the ocean and the land
biosphere) to millennia (for exchange with the deep ocean or long-lived soil
pools)” (Houghton 2009, 37).
21 Cf. “The main process for the removal of methane from the atmosphere is
through chemical destruction. It reacts with hydroxyl (OH) radicals, which
are present in the atmosphere because of processes involving sunlight, oxygen,
ozone and water vapour. The average lifetime of methane in the atmosphere
is determined by the rate of this loss process. At about 12 years it is
much shorter than the lifetime of carbon dioxide” (Houghton 2009, 50).
22 Cf. “With rising temperatures, rising sea levels, melting icecaps and glaciers,
shifting ocean current and weather patterns, climate change is the most serious
challenge facing the human race. The livestock sector is a major player,
responsible for 18 percent of greenhouse gas emissions measured in CO2
equivalent. This is a higher share than transport” (Steinfeld et al. 2006, xxi).
Pitesky, Stackhouse, and Mitloehner have rightly noted that the FAO’s comparison
of the livestock and transportation sectors is potentially misleading
because it is “based on inappropriate or inaccurate scaling of predictions”
(Pitesky et al. 2009, 33). However, Pitesky, Stackhouse, and Mitloehner do
not dispute that livestock production accounts for 18% of global greenhouse
gas emissions. Rather, their claim is first that the FAO’s comparison of the
livestock and transportation sectors is misleading because, whereas both direct
and indirect emissions are included for the livestock sector, only direct
emissions are counted for the transportation sector. Secondly, they note that
while it is true that the livestock sector has a larger footprint than transportation
in many developing nations, it is not true of the United States (and
most developed nations) where livestock account for only 2.8% of emissions
(4). Thus, Pitesky, Stackhouse, and Mitloehner rightly note that a more precise
formulation would be to say that “agriculture is considered the largest
source of anthropogenic CH4 and N2O at the global, national, and state
level…while transport is considered the largest anthropogenic source of CO2
production” (11).
23 For instance, a June 2009 report of the Government Accountability Office
(GAO) found that 31 native villages face “imminent threats” from “growing
impacts of climate change in Alaska.” At least twelve of these villages have
elected to relocate entirely (United States Government Accountability Office
2009).
24 The immediate viability of manure management systems is questioned by
Fiala, who claims that “this technology is a long way from being used in the
US and Europe, let alone the rest of the world, this is not likely to be a solution
in the near future” (Fiala 2008, 418).
25 Silvopasture is the practice of combining forestry and animal husbandry to
enhance soil preservation and animal welfare. For more on silvopastoralism
see Sharrow 1999, 111–126.
26 Cf. “Expansion of livestock production is a key factor in deforestation, especially
in Latin America where the greatest amount of deforestation is occurring—
70 percent of previously forested land in the Amazon is occupied by
pastures, and feedcrops cover a large part of the remainder” (Steinfeld et al.
2006, xxi).
27 In its otherwise comprehensive and detailed analysis, the FAO makes only
one brief reference to the role of meat consumption. “While not being addressed
in this assessment, it may well be argued that environmental damage
by livestock may be significantly reduced by lowering excessive consumption
of livestock products among wealthy people” (Steinfeld et al. 2006, 269).
28 See, for instance, Schlosser 2001; Schlosser and Wilson 2006; Pollan 2007,
2009; Kingsolver 2007; Petrini 2007; Foer 2009; Fairlie 2010.
29 I will use the phrase “new agrarian movement” to refer to the loose collection
of popular writers and scholars who seek to move society away from
industrial food production. This phrase is inspired by the book series created
by the University Press of Kentucky, Culture of the Land: A Series in the
New Agrarianism. (See http://www.kentuckypress.com/newsite/pages/series/series_agrarianism.html.)
My thanks to Lee McBride for bringing this to my attention.
30 Pollan 2007, 321.
31 On the symbiosis between livestock and humans, see Pollan 2007, 321f.
32 For a more developed defense of this kalocentric or beauty-centered position,
see Henning 2005 and 2009.
33 Kingsolver 2007. See also, James E. McWilliams, Just Food: Where Locavores
Get it Wrong and How We Can Truly Eat Responsibly (Little, Brown and
Company 2009).
34 See United States Census Bureau 2010; UN 2011.
35 Contrary to its earlier projections, the United Nations is no longer expecting
the human population to stabilize midcentury at nine billion people. According
to its most recent estimates, the human population is projected to continue
to climb past ten billion people by 2100. See UN 2011.
36 Cf. “Nitrogen is essential to all life forms and is also the most abundant element
in the Earth’s atmosphere. Atmospheric N, however, exists in a stable
form (N2) inaccessible to most organisms until fixed in a reactive form (Nr).
The supply of reactive nitrogen plays a pivotal role in controlling the productivity,
carbon storage, and species composition of ecosystems… Alteration of
the nitrogen cycle has numerous consequences, including increased radiative
forcing [i.e., climate change], photochemical smog and acid deposition, and
productivity increases leading to ecosystem simplification and biodiversity
loss” (Pelletier and Tyedmers 2010a, 1).
37 In 2000 the average American contributed twenty metric tons of carbon dioxide
(CDIAC).
38 Net Primary Production (NPP) is defined as “the net flux of carbon from the
atmosphere into green plants per unit time.… NPP is a fundamental ecological
variable, not only because it measures the energy input to the biosphere
and terrestrial carbon dioxide assimilation, but also because of its significance
in indicating the condition of the land surface area and status of a wide range
of ecological processes” (DAAC 2010).
39 The researchers admit the speculative nature of their models, but also note the
conservative nature of the presuppositions made. Cf. “Modeling the future is
fraught with uncertainties, and we would be remiss to present our estimates
as definitive. We have endeavored to err on the side of caution in developing
what we believe to be conservative forecasts of some of the potential future
environmental impacts of livestock production. For example, it would be impressive,
indeed, were all livestock production globally to achieve resource
efficiencies comparable to those reported for the least impactful contemporary
systems in industrialized countries, effectively reducing global impacts
per unit protein produced by 35% in 2050 relative to 2000—as we have
assumed here” (Pelletier and Tyedmers 2010a, 2).
40 For additional discussion of Pitesky et al., see also notes 22 and 43.
41 This is in fact the suggestion made by Steinfeld and Gerber (2010) in their
article responding to Pelletier and Tyedmers.
42 This is confirmed by the American Dietetic Association (2009): “The results
of an evidence-based review showed that a vegetarian diet is associated with
a lower risk of death from ischemic heart disease. Vegetarians also appear to
have lower low-density lipoprotein cholesterol levels, lower blood pressure,
and lower rates of hypertension and type 2 diabetes than nonvegetarians.
Furthermore, vegetarians tend to have a lower body mass index and lower
overall cancer rates” (1266).
43 Note that this responds to Pitesky, Stackhouse, and Mitloehner’s claim that
the FAO’s report is incomplete because it “does not account for ‘default’ emissions.
Specifically, if domesticated livestock were reduced or even eliminated,
the question of what ‘substitute’ GHGs would be produced in their place has
never been estimated” (35). Pelletier and Tyedmers’ analysis demonstrates
that a plant-based diet is likely to be the only sustainable way of feeding the
current and projected human population.

REFERENCES

Baroni, L., et al. 2007. “Evaluating the Environmental Impact of Various Dietary
Patterns Combined with Different Forms of Production Systems.” European
Journal of Clinical Nutrition 61: 279–86.
Bellarby, Jessica, et al. 2008. Cool Farming: Climate Impacts of Agriculture
and Mitigation Potential. Amsterdam: Greenpeace International. Accessed
9 November 2010, http://www.greenpeace.org/international/en/publications/
reports/cool-farming-full-report/.
CDIAC: Carbon Dioxide Information Analysis Center. 2010. Accessed 9 November
2010, http://www.cdiac.ornl.gov.
CDC: Centers for Disease Control and Prevention. 2008. “Prevalence of overweight,
obesity and extreme obesity among adults: United States, trends
1960–62 through 2005–2006.” Accessed 9 November 2010, http://www.cdc.gov/nchs/data/hestat/overweight/overweight_adult.htm.
CDC: Centers for Disease Control and Prevention. 2010. “Preventing Obesity and
Chronic Diseases Through Good Nutrition and Physical Activity.” Accessed
9 November 2010, http://www.cdc.gov/chronicdisease/resources/publications/
fact_sheets/obesity.htm.
DAAC: Distributed Active Archive Center for Biogeochemical Dynamics, Oakridge
National Laboratory. 2010. “Net Primary Productivity Methods.” Accessed
17 November 2010, http://daac.ornl.gov/NPP/html_docs/npp_est.html.
Durning, Alan B. and Holly B. Brough. 1991. Taking Stock: Animal Farming and
the Environment. Worldwatch Institute.
Ehrlich, Paul and Anne Ehrlich. 1987. Extinction: The Causes and Consequences
of the Disappearance of Species. New York: Ballantine.
Fairlie, Simon. 2010. Meat: A Benign Extravagance. White River Junction, VT:
Chelsea Green Publishing.
Fiala, Nathan. 2009. “The Greenhouse Hamburger.” Scientific American, February:
72–75.
———. 2008. “Meeting the Demand: An Estimation of Potential Future Greenhouse
Gas Emissions from Meat Production.” Ecological Economics 67: 412–19.
Foer, Jonathan Safran. 2009. Eating Animals. New York: Little, Brown and
Company.
Fox, Michael Allen. 1999. “The Contribution of Vegetarianism to Ecosystem
Health.” Ecosystem Health 5: 70–74.
GAO: United States Government Accountability Office. 2009. “Alaska Native
Villages.” Accessed 9 November 2010, http://www.gao.gov/new.items/d09551.pdf.
Gardner, Gary and Brian Halweil. 2000. Overfed and Underfed: The Global Epidemic
of Malnutrition. Ed. Jane A. Peterson. Washington, DC: Worldwatch
Institute.
Halweil, Brian. 2008. “Meat Production Continues to Rise.” Worldwatch Institute,
20 August. Accessed 9 November 2010, http://www.worldwatch.org/node/5443.
Henning, Brian G. 2005. The Ethics of Creativity: Beauty, Morality, and Nature in
a Processive Cosmos. Pittsburgh, PA: University of Pittsburgh Press.
———. 2009. “Trusting in the ‘Efficacy of Beauty’: A Kalocentric Approach to
Moral Philosophy.” Ethics & the Environment 14.1: 101–28.
Houghton, John. 2009. Global Warming: The Complete Briefing. 3rd ed. Cambridge:
Cambridge University Press.
Kingsolver, Barbara. 2007. Animal, Vegetable, Miracle. New York: Harper
Collins.
Lappé, Frances Moore. [1971] 1991. Diet for a Small Planet. New York: Ballantine
Books.
Lappé, Frances Moore and Anna Lappé. 2002. Hope’s Edge: The Next Diet for a
Small Planet. New York: Putnam.
McMichael, Anthony J., et al. 2007. “Energy and Health 5: Food, livestock production,
energy, climate change, and health.” The Lancet 370: 1253–63.
National Geographic. 2010. “Hidden Water We Use.” April. Accessed 9 November
2010, http://environment.nationalgeographic.com/environment/freshwater/embedded-water/.
Pelletier, Nathan and Peter Tyedmers. 2010a. “Forecasting potential global environmental
costs of livestock production 2000–2050.” Proceedings of the National
Academy of Sciences Early Edition, 4 October: 1–4.
———. 2010b. “Supporting Information.” Proceedings of the National Academy
of Sciences Early Edition, 4 October: 1–4.
Petrini, Carlo. 2007. Slow Food Nation. New York: Rizzoli Ex Libris.
Pimentel, David and Marcia Pimentel. 2003. “Sustainability of Meat-based and
Plant-based Diets and the Environment.” The American Journal of Clinical
Nutrition 78: 660s–63s.
Pitesky, Maurice E., Kimberly R. Stackhouse, and Frank M. Mitloehner. 2009.
“Clearing the Air: Livestock’s Contribution to Climate Change.” In Advances
in Agronomy Vol. 103, edited by Donald Sparks, 1–40. Burlington: Academic
Press.
Pollan, Michael. 2007. The Omnivore’s Dilemma: A Natural History of Four
Meals. New York: Penguin.
———. 2009. In Defense of Food: An Eater’s Manifesto. New York: Penguin.
“Position of the American Dietetic Association: Vegetarian Diets.” 2009. Journal
of the American Dietetic Association 109.7: 1266–82.
Rolston, Holmes III. 1988. Environmental Ethics: Duties To and Values In the
Natural World. Philadelphia: Temple University Press.
Sapontzis, Steve F., ed. 2004. Food for Thought: The Debate Over Eating Meat.
New York: Prometheus Books.
Schlosser, Eric. 2001. Fast Food Nation. New York: Houghton Mifflin.
Schlosser, Eric and Charles Wilson. 2006. Chew On This: Everything You Don’t
Want to Know About Fast Food. New York: Houghton Mifflin.
Sharrow, Steven H. 1999. “Silvopastoralism.” In Agroforestry in Sustainable Agricultural
Systems, edited by Louise E. Buck, James P. Lassoie, and Erick C.M.
Fernandes, 111–26. Boca Raton, FL: CRC Press.
Singer, Peter. [1975] 2002. Animal Liberation. 3rd ed. New York: Avon Books.
Spellberg, Brad, et al. 2008. “The Epidemic of Antibiotic-Resistant Infections: A
Call to Action for the Medical Community from the Infectious Diseases Society
of America.” Clinical Infectious Disease 46: 155–64.
Stãnescu, Vasile. 2010. “ ‘Green’ Eggs and Ham? The Myth of Sustainable Meat
and the Danger of the Local.” Journal for Critical Animal Studies 8: 8–32.
Steinfeld, Henning, et al. 2006. Livestock’s Long Shadow: Environmental Issues
and Options. Rome, Italy: Food and Agriculture Organization of the United
Nations. Accessed 9 November 2010, http://www.fao.org/docrep/010/a0701e/
a0701e00.htm.
Steinfeld, Henning and Pierre Gerber. 2010. “Livestock production and the global
environment: Consume less or produce better?” Proceedings of the National
Academy of Sciences Early Edition, 26 October, 107.43: 18237–38.
Subak, Susan. 1999. “Global Environmental Costs of Beef Production.” Ecological
Economics 30: 79–91.
Tickell, Crispin. 1992. “The Quality of Life: What Quality? Whose Life?” Environmental
Values 1: 65–76.
UN: United Nations Press Release. 2011. “World Population to Reach 10 Billion
by 2100 if Fertility in All Countries Converges to Replacement Level.” 3 May
2011. Accessed 13 May 2011, http://esa.un.org/unpd/wpp/Other-Information/Press_Release_WPP2010.pdf.
UNEP: United Nations Environment Programme. 2003. “Key Facts About Water.”
5 June 2003. Accessed 9 November 2010, http://www.unep.org/wed/2003/keyfacts.htm.
United States Census Bureau. 2010. “International Data Base.” Accessed 9 November
2010, http://www.census.gov/ipc/www/idb/worldpopinfo.html.
Venkataraman, Bina. 2008. “Rapid Growth Found in Oxygen-Starved Ocean
‘Dead Zones.’” The New York Times, 15 August. Accessed 9 November 2010,
http://www.nytimes.com/2008/08/15/us/15oceans.html.
Whitehead, Alfred North. [1929] 1978. Process and Reality. Corrected edition,
edited by David Ray Griffin and Donald W. Sherburne. New York: Free Press.
Woodruff, David S. 2001. “Declines of Biomes and Biotas and the Future of Evolution.”
Proceedings of the National Academy of Sciences of the United States
of America 98.10 (8 May): 5471. http://www.jstor.org/stable/3055650.

Notes on contributors

Greta Gaard serves on the Editorial Board of ISLE: Interdisciplinary
Studies in Literature and Environment, and the Executive Board of the
Association for the Study of Literature and Environment (ASLE). Her
publications include Ecofeminism: Women, Animals, Nature (1993), Ecological
Politics: Ecofeminists and the Greens (1998), Ecofeminist Literary
Criticism (1998), and The Nature of Home (2007). Author of over fifty
articles, Gaard is currently co-editing a volume on Feminist Ecocriticism
with Serpil Oppermann and Simon Estok. E-mail: greta.gaard@uwrf.edu

Benjamin Hale is Assistant Professor in the Philosophy Department and
the Environmental Studies Program at the University of Colorado, Boulder.
He is currently co-editor of the journal Ethics, Policy & Environment
and has published papers in journals such as The Monist, Metaphilosophy,
Public Affairs Quarterly, Environmental Values, Science, Technology, and
Human Values, among others. His book, The Wicked and the Wild: Why
You Don’t Have to Love Nature to be Green, will be appearing from the
University of Chicago Press in Fall 2012. E-mail: bhale@colorado.edu

Brian G. Henning is Associate Professor of Philosophy at Gonzaga University
in Spokane, WA. His work includes the award-winning book The
Ethics of Creativity: Beauty, Morality and Nature in a Processive Cosmos
and the article, “Trusting in the ‘Efficacy of Beauty’: A Kalocentric Approach
to Moral Philosophy” in this journal. His scholarship and teaching
focus on the interconnections among ethics, metaphysics, and aesthetics,
especially as they relate to the ethics of global climate change. E-mail:
henning@gonzaga.edu

Sheila Lintott is an Associate Professor of Philosophy at Bucknell University.
She works in feminist philosophy, philosophical aesthetics, and
environmental philosophy.

Morality is a Culturally Conditioned Response


Jesse Prinz is a Distinguished Professor of Philosophy at the City University of New York. His books include Gut Reactions, The Emotional Construction of Morals, and Beyond Human Nature.

In this article, Jesse Prinz argues that the source of our moral inclinations is merely cultural.


Philosophy Now, issue 82, January/February 2011

Suppose you have a moral disagreement with someone, for example, a disagreement about whether it is okay to live in a society where the amount of money you are born with is the primary determinant of how wealthy you will end up. In pursuing this debate, you assume that you are correct about the issue and that your conversation partner is mistaken. Your conversation partner assumes that you are the one making the blunder. In other words, you both assume that only one of you can be correct. Relativists reject this assumption. They believe that conflicting moral beliefs can both be true. The staunch socialist and righteous royalist are equally right; they just occupy different moral worldviews.

Relativism has been widely criticized. It is attacked as being sophomoric, pernicious, and even incoherent. Moral philosophers, theologians, and social scientists try to identify objective values so as to forestall the relativist menace. I think these efforts have failed. Moral relativism is a plausible doctrine, and it has important implications for how we conduct our lives, organize our societies, and deal with others.

Cannibals and Child Brides

Morals vary dramatically across time and place. One group’s good can be another group’s evil. Consider cannibalism, which has been practiced by groups in every part of the world. Anthropologist Peggy Reeves Sanday found evidence for cannibalism in 34% of cultures in one cross-historical sample. Or consider blood sports, such as those practiced in Roman amphitheaters, in which thousands of excited fans watched as human beings engaged in mortal combat. Killing for pleasure has also been documented among headhunting cultures, in which decapitation was sometimes pursued as a recreational activity. Many societies have also practiced extreme forms of public torture and execution, as was the case in Europe before the 18th century. And there are cultures that engage in painful forms of body modification, such as scarification, genital infibulation, or footbinding – a practice that lasted in China for 1,000 years and involved the deliberate and excruciating crippling of young girls. Variation in attitudes towards violence is paralleled by variation in attitudes towards sex and marriage. When studying culturally independent societies, anthropologists have found that over 80% permit polygamy. Arranged marriage is also common, and some cultures marry off girls while they are still pubescent or even younger. In parts of Ethiopia, half the girls are married before their 15th birthday.

Of course, there are also cross-cultural similarities in morals. No group would last very long if it promoted gratuitous attacks on neighbors or discouraged childrearing. But within these broad constraints, almost anything is possible. Some groups prohibit attacks on the hut next door, but encourage attacks on the village next door. Some groups encourage parents to commit selective infanticide, to use corporal punishment on children, or to force them into physical labor or sexual slavery.

Such variation cries out for explanation. If morality were objective, shouldn’t we see greater consensus? Objectivists reply in two different ways:

Deny variation. Some objectivists say moral variation is greatly exaggerated – people really agree about values but have different factual beliefs or life circumstances that lead them to behave differently. For example, slave owners may have believed that their slaves were intellectually inferior, and Inuits who practiced infanticide may have been forced to do so because of resource scarcity in the tundra. But it is spectacularly implausible that all moral differences can be explained this way. For one thing, the alleged differences in factual beliefs and life circumstances rarely justify the behaviors in question. Would the inferiority of one group really justify enslaving them? If so, why don’t we think it’s acceptable to enslave people with low IQs? Would life in the tundra justify infanticide? If so, why don’t we just kill off destitute children around the globe instead of giving donations to Oxfam? Differences in circumstances do not show that people share values; rather they help to explain why values end up being so different.

Deny that variation matters. Objectivists who concede that moral variation exists argue that variation does not entail relativism; after all, scientific theories differ too, and we don’t assume that every theory is true. This analogy fails. Scientific theory variation can be explained by inadequate observations or poor instruments; improvements in each lead towards convergence. When scientific errors are identified, corrections are made. By contrast, morals do not track differences in observation, and there also is no evidence for rational convergence as a result of moral conflicts. Western slavery didn’t end because of new scientific observations; rather it ended with the industrial revolution, which ushered in a wage-based economy. Indeed, slavery became more prevalent after the Enlightenment, when science improved. Even with our modern understanding of racial equality, Benjamin Skinner has shown that there are more people living in de facto slavery worldwide today than during the height of the trans-Atlantic slave trade. When societies converge morally, it’s usually because one has dominated the other (as with the missionary campaigns to end cannibalism). With morals, unlike science, there is no well-recognized standard that can be used to test, confirm, or correct when disagreements arise.

Objectivists might reply that progress has clearly been made. Aren’t our values better than those of the ‘primitive’ societies that practice slavery, cannibalism, and polygamy? Here we are in danger of smugly supposing superiority. Each culture assumes it is in possession of the moral truth. From an outside perspective, our progress might be seen as a regress. Consider factory farming, environmental devastation, weapons of mass destruction, capitalistic exploitation, coercive globalization, urban ghettoization, and the practice of sending elderly relatives to nursing homes. Our way of life might look grotesque to many who have come before and many who will come after.

Emotions and Inculcation

Moral variation is best explained by assuming that morality, unlike science, is not based on reason or observation. What, then, is morality based on? To answer this, we need to consider how morals are learned.

Children begin to learn values when they are very young, before they can reason effectively. Young children behave in ways that we would never accept in adults: they scream, throw food, take off their clothes in public, hit, scratch, bite, and generally make a ruckus. Moral education begins from the start, as parents correct these antisocial behaviors, and they usually do so by conditioning children’s emotions. Parents threaten physical punishment (“Do you want a spanking?”), they withdraw love (“I’m not going to play with you any more!”), ostracize (“Go to your room!”), deprive (“No dessert for you!”), and induce vicarious distress (“Look at the pain you’ve caused!”). Each of these methods causes the misbehaving child to experience a negative emotion and associate it with the punished behavior. Children also learn by emotional osmosis. They see their parents’ reactions to news broadcasts and storybooks. They hear hours of judgmental gossip about inconsiderate neighbors, unethical coworkers, disloyal friends, and the black sheep in the family. Consummate imitators, children internalize the feelings expressed by their parents and, when they are a bit older, by their peers.

Emotional conditioning and osmosis are not merely convenient tools for acquiring values: they are essential. Parents sometimes try to reason with their children, but moral reasoning only works by drawing attention to values that the child has already internalized through emotional conditioning. No amount of reasoning can engender a moral value, because all values are, at bottom, emotional attitudes.

Recent research in psychology supports this conjecture. It seems that we decide whether something is wrong by introspecting our feelings: if an action makes us feel bad, we conclude that it is wrong. Consistent with this, people’s moral judgments can be shifted by simply altering their emotional states. For example, psychologist Simone Schnall and her colleagues found that exposure to fart spray, filth, and disgusting movies can cause people to make more severe moral judgments about unrelated phenomena.

Psychologist Jonathan Haidt and colleagues have shown that people make moral judgments even when they cannot provide any justification for them. For example, 80% of the American college students in Haidt’s study said it’s wrong for two adult siblings to have consensual sex with each other even if they use contraception and no one is harmed. And, in a study I ran, 100% of people agreed it would be wrong to sexually fondle an infant even if the infant was not physically harmed or traumatized. Our emotions confirm that such acts are wrong even if our usual justification for that conclusion (harm to the victim) is inapplicable.

If morals are emotionally based, then people who lack strong emotions should be blind to the moral domain. This prediction is borne out by psychopaths, who, it turns out, suffer from profound emotional deficits. Psychologist James Blair has shown that psychopaths treat moral rules as mere conventions. This suggests that emotions are necessary for making moral judgments. The judgment that something is morally wrong is an emotional response.

It doesn’t follow that every emotional response is a moral judgment. Morality involves specific emotions. Research suggests that the main moral emotions are anger and disgust when an action is performed by another person, and guilt and shame when an action is performed by oneself. Arguably, one doesn’t harbor a moral attitude towards something unless one is disposed to have both these self- and other-directed emotions. You may be disgusted by eating cow tongue, but unless you are a moral vegetarian, you wouldn’t be ashamed of eating it.

In some cases, the moral emotions that get conditioned in childhood can be re-conditioned later in life. Someone who feels ashamed of a homosexual desire may subsequently feel ashamed about feeling ashamed. This person can be said to have an inculcated tendency to view homosexuality as immoral, but also a conviction that homosexuality is permissible, and the latter serves to curb the former over time.

This is not to say that reasoning is irrelevant to morality. One can convince a person that homophobia is wrong by using the light of reason to draw analogies with other forms of discrimination, but this strategy can only work if the person has a negative sentiment towards bigotry. Likewise, through extensive reasoning, one might persuade someone that eating meat is wrong; but the only arguments that will work are ones that appeal to prior sentiments. It would be hopeless to argue vegetarianism with someone who does not shudder at the thought of killing an innocent, sentient being. As David Hume said, reason is always slave to the passions.

If this picture is right, we have a set of emotionally conditioned basic values, and a capacity for reasoning, which allows us to extend these values to new cases. There are two important implications. One is that some moral debates have no resolution because the two sides have different basic values. This is often the case with liberals and conservatives. Research suggests that conservatives value some things that are less important to liberals, including hierarchical authority structures, self-reliance, in-group solidarity, and sexual purity. Debates about welfare, foreign policy, and sexual values get stymied because of these fundamental differences.

The second implication is that we cannot change basic values by reason alone. Various events in adulthood might be capable of reshaping our inculcated sentiments, including trauma, brainwashing, and immersion in a new community (we have an unconscious tendency towards social conformity). Reason can however be used to convince people that their basic values are in need of revision, because reason can reveal when values are inconsistent and self-destructive. An essay on moral relativism might even convince someone to give up some basic values, on the ground that they are socially inculcated. But reason alone cannot instill new values or settle which values we should have. Reason tells us what is the case, not what ought to be.

In summary, moral judgments are based on emotions, and reasoning normally contributes only by helping us extrapolate from our basic values to novel cases. Reasoning can also lead us to discover that our basic values are culturally inculcated, and that might impel us to search for alternative values, but reason alone cannot tell us which values to adopt, nor can it instill new values.

God, Evolution, and Reason: Is There an Objective Moral Code?

The hypothesis that moral judgments are emotionally based can explain why they vary across cultures and resist transformation through reasoning, but this is not enough to prove that moral relativism is true. An argument for relativism must also show that there is no basis for morality beyond the emotions with which we have been conditioned. The relativists must provide reasons for thinking objectivist theories of morality fail.

Objectivism holds that there is one true morality binding upon all of us. To defend such a view, the objectivist must offer a theory of where morality comes from, such that it can be universal in this way. There are three main options: Morality could come from a benevolent god; it could come from human nature (for example, we could have evolved an innate set of moral values); or it could come from rational principles that all rational people must recognize, like the rules of logic and arithmetic. Much ink has been spilled defending each of these possibilities, and it would be impossible here to offer a critical review of all ethical theories. Instead, let’s consider some simple reasons for pessimism.

The problem with divine commands as a cure for relativism is that there is no consensus among believers about what God or the gods want us to do. Even when there are holy scriptures containing lists of divine commands, there are disagreements about interpretation: Does “Thou shalt not kill” cover enemies? Does it cover animals? Does it make one culpable for manslaughter or for killing in self-defense? Does it prohibit suicide? The philosophical challenge of proving that a god exists is already hard; figuring out who that god is and what values are divinely sanctioned is vastly harder.

The problem with human nature as a basis for universal morality is that it lacks normative import, that is, this doesn’t itself provide us with any definitive view of good and bad. Suppose we have some innate moral values. Why should we abide by them? Non-human primates often kill, steal, and rape without getting punished by members of their troops. Perhaps our innate values promote those kinds of behaviors as well. Does it follow that we shouldn’t punish them? Certainly not. If we have innate values – which is open to debate – they evolved to help us cope with life as hunter-gatherers in small competitive bands. To live in large stable societies, we are better off following the ‘civilized’ values we’ve invented.

Finally, the problem with reason, as we have seen, is that it never adds up to value. If I tell you that a wine has a balance between tannin and acid, it doesn’t follow that you will find it delicious. Likewise, reason cannot tell us which facts are morally good. Reason is evaluatively neutral. At best, reason can tell us which of our values are inconsistent, and which actions will lead to fulfillment of our goals. But, given an inconsistency, reason cannot tell us which of our conflicting values to drop, and reason cannot tell us which goals to follow. If my goals come into conflict with your goals, reason tells me that I must either thwart your goals, or give up caring about mine; but reason cannot tell me to favor one choice over the other.

Many attempts have been made to rebut such concerns, but each attempt has just fueled more debate. At this stage, no defense of objectivism has swayed doubters, and given the fundamental limits mentioned here (the inscrutability of divine commands, the normative emptiness of evolution, and the moral neutrality of reason), objectivism looks unlikely.

Living With Moral Relativism

People often resist relativism because they think it has unacceptable implications. Let’s conclude by considering some allegations and responses.

Allegation: Relativism entails that anything goes.

Response: Relativists concede that if you were to inculcate any given set of values, those values would be true for those who possessed them. But we have little incentive to inculcate values arbitrarily. If we trained our children to be ruthless killers, they might kill us or get killed. Values that are completely self-destructive can’t last.

Allegation: Relativism entails that we have no way to criticize Hitler.

Response: First of all, Hitler’s actions were partially based on false beliefs, rather than values (‘scientific’ racism, moral absolutism, the likelihood of world domination). Second, the problem with Hitler was not that his values were false, but that they were pernicious. Relativism does not entail that we should tolerate murderous tyranny. When someone threatens us or our way of life, we are strongly motivated to protect ourselves.

Allegation: Relativism entails that moral debates are senseless, since everyone is right.

Response: This is a major misconception. Many people have overlapping moral values, and one can settle debates by appeal to moral common ground. We can also have substantive debates about how to apply and extend our basic values. Some debates are senseless, however. Committed liberals and conservatives rarely persuade each other, but public debates over policy can rally the base and sway the undecided.

Allegation: Relativism entails that there is no moral progress.

Response: In one sense this is correct; moral values do not become more true. But they can become better by other criteria. For example, some sets of values are more consistent and more conducive to social stability. If moral relativism is true, morality can be regarded as a tool, and we can think about what we’d like that tool to do for us and revise morality accordingly.

One might summarize these points by saying that relativism does not undermine the capacity to criticize others or to improve one’s own values. Relativism does tell us, however, that we are mistaken when we think we are in possession of the one true morality. We can try to pursue moral values that lead to more fulfilling lives, but we must bear in mind that fulfillment is itself relative, so no single set of values can be designated universally fulfilling. The discovery that relativism is true can help each of us individually by revealing that our values are mutable and parochial. We should not assume that others share our views, and we should recognize that our views would differ had we lived in different circumstances. These discoveries may make us more tolerant and more flexible. Relativism does not entail tolerance or any other moral value, but, once we see that there is no single true morality, we lose one incentive for trying to impose our values on others.

Contextual Moral Vegetarianism

By Deane Curtin

'Toward an Ecological Ethic of Care.' Hypatia, vol. 6, Spring 1991, pp. 68–71
 
In this [essay] I provide an example of a distinctively ecofeminist moral concern: our relations to what we are willing to count as food. Vegetarianism has been defended as a moral obligation that results from rights that nonhuman animals have in virtue of being sentient beings (Regan 1983, 330-53). However, a distinctively ecofeminist defense of moral vegetarianism is better expressed as a core concept in an ecofeminist ethic of care. One clear way of distinguishing the two approaches is that whereas the rights approach is not inherently contextual[1] (it is the response to the rights of all sentient beings), the caring-for approach responds to particular contexts and histories. It recognizes that the reasons for moral vegetarianism may differ by locale, by gender, as well as by class.

Moral vegetarianism is a fruitful issue for ecofeminists to explore in developing an ecological ethics because in judging the adequacy of an ethic by reference to its understanding of food one draws attention to precisely those aspects of daily experience that have often been regarded as "beneath" the interest of philosophy. Plato's remark in the Gorgias is typical of the dismissive attitude philosophers have usually had toward food. Pastry cooking, he says, is like rhetoric: both are mere "knacks" or "routines" designed to appeal to our bodily instincts rather than our intellects (Plato 1961, 245).

Plato's dismissive remark also points to something that feminists need to take very seriously, namely, that a distinctively feminist ethic, as Susan Bordo and others argue, should include the body as moral agent. Here too the experiences of women in patriarchal cultures are especially valuable because women, more than men, experience the effects of culturally sanctioned oppressive attitudes toward the appropriate shape of the body. Susan Bordo has argued that anorexia nervosa is a "psychopathology" made possible by Cartesian attitudes toward the body at a popular level. Anorexics typically feel alienation from their bodies and the hunger "it" feels. Bordo quotes one woman as saying she ate because "my stomach wanted it"; another dreamed of being "without a body." Anorexics want to achieve "absolute purity, hyperintellectuality and transcendence of the flesh" (Bordo 1988, 94, 95; also see Chernin 1981). These attitudes toward the body have served to distort the deep sense in which human beings are embodied creatures; they have therefore further distorted our being as animals. To be a person, as distinct from an "animal," is to be disembodied.

This dynamic is vividly exposed by Carol Adams in The Sexual Politics of Meat (Adams 1989, part 1). There are important connections through food between the oppression of women and the oppression of nonhuman animals. Typical of the wealth of evidence she presents are the following: the connection of women and animals through pornographic representations of women as "meat" ready to be carved up, for example in "snuff" films; the fact that language masks our true relationship with animals, making them "absent referents" by giving meat words positive connotations ("That's a meaty question"; "Where's the beef?") while disparaging nonflesh foods ("Don't watch so much TV! You'll turn into a vegetable"); and the fact that men, athletes and soldiers in particular, are associated with red meat and activity ("To have muscle you need to eat muscle"), whereas women are associated with vegetables and passivity ("ladies' luncheons" typically offer dainty sandwiches with no red meat).


As a "contextual moral vegetarian," I cannot refer to an absolute moral rule that prohibits meat eating under all circumstances. There may be some contexts in which another response is appropriate. Though I am committed to moral vegetarianism, I cannot say that I would never kill an animal for food. Would I not kill an animal to provide food for my son if he were starving? Would I not generally prefer the death of a bear to the death of a loved one? I am sure I would. The point of a contextualist ethic is that one need not treat all interests equally as if one had no relationship to any of the parties.

Beyond personal contextual relations, geographical contexts may sometimes be relevant. The Ihalmiut, for example, whose frigid domain makes the growing of food impossible, do not have the option of vegetarian cuisine. The economy of their food practices, however, and their tradition of "thanking" the deer for giving its life are reflective of a serious, focused, compassionate attitude toward the "gift" of a meal.

In some cultures violence against nonhuman life is ritualized in such a way that one is present to the reality of one's food. The Japanese have a Shinto ceremony that pays respect to the insects that are killed during rice planting. Tibetans, who as Buddhists have not generally been drawn to vegetarianism, nevertheless give their own bodies back to the animals in an ultimate act of thanks by having their corpses hacked into pieces as food for the birds.[2] Cultures such as these have ways of expressing spiritually the idea "we are what we eat," even if they are not vegetarian.

If there is any context, on the other hand, in which moral vegetarianism is completely compelling as an expression of an ecological ethic of care, it is for economically well-off persons in technologically advanced countries. First, these are persons who have a choice of what food they want to eat; they have a choice of what they will count as food. Morality and ontology are closely connected here. It is one thing to inflict pain on animals when geography offers no other choice. But in the case of killing animals for human consumption where there is a choice, this practice inflicts pain that is completely unnecessary and avoidable. The injunction to care, considered as an issue of moral and political development, should be understood to include the injunction to eliminate needless suffering wherever possible, and particularly the suffering of those whose suffering is conceptually connected to one's own. It should also be understood as an injunction that connects the imperative to rethink what it means to be a person with the imperative to rethink the status of nonhuman animals. An ecofeminist perspective emphasizes that one's body is oneself, and that by inflicting violence needlessly, one's bodily self becomes a context for violence. One becomes violent by taking part in violent food practices. The ontological implication of a feminist ethic of care is that nonhuman animals should no longer count as food.

Second, most of the meat and dairy products in these countries do not come from mom-and-pop farms with little red barns. Factory farms are responsible for most of the 6 billion animals killed for food every year in the United States (Adams 1989, 6). It is curious that steroids are considered dangerous to athletes, but animals that have been genetically engineered and chemically induced to grow faster and come to market sooner are considered to be an entirely different issue. One would have to be hardened to know the conditions factory-farm animals live in and not feel disgust concerning their treatment.[3]

Third, much of the effect of the eating practices of persons in industrialized countries is felt in oppressed countries. Land owned by the wealthy that was once used to grow inexpensive crops for local people has been converted to the production of expensive products (beef) for export. Increased trade of food products with these countries is consistently the cause of increased starvation. In cultures where food preparation is primarily understood as women's work, starvation is primarily a women's issue. Food expresses who we are politically just as much as bodily. One need not be aware of the fact that one's food practices oppress others in order to be an oppressor.

From a woman's perspective, in particular, it makes sense to ask whether one should become a vegan, a vegetarian who, in addition to refraining from meat and fish, also refrains from eating eggs and dairy products. Since the production of both eggs and milk exploits the reproductive capacities of the female, vegetarianism is not a gender-neutral issue.[4] To choose one's diet in a patriarchal culture is one way of politicizing an ethic of care. It marks a daily, bodily commitment to resist ideological pressures to conform to patriarchal standards, and to establish contexts in which caring for can be nonabusive.

Just as there are gender-specific reasons for women's commitment to vegetarianism, for men in a patriarchal society moral vegetarianism can mark the decision to stand in solidarity with women. It also indicates a determination to resist ideological pressures to become a "real man." Real people do not need to eat "real food," as the American Beef Council would have us believe.

[1] Regan calls the animal's right not to be killed a prima facie right that may be overridden. Nevertheless, his theory is not inherently contextualized.

[2] This practice is also ecologically sound since it saves the enormous expense of firewood for cremation.

[3] See John Robbins (1987). It should be noted that in response to such knowledge some reflective nonvegetarians commit to eating range-grown chickens but not those grown in factory farms.

[4] I owe this point to a conversation with Colman McCarthy. 

The Obligation to Endure


Rachel Louise Carson (1907-1964)

Originally published in Silent Spring (1962)

The history of life on earth has been a history of interaction between living things and their surroundings. To a large extent, the physical form and the habits of the earth's vegetation and its animal life have been molded by the environment. Considering the whole span of earthly time, the opposite effect, in which life actually modifies its surroundings, has been relatively slight. Only within the moment of time represented by the present century has one species—man—acquired significant power to alter the nature of his world.

During the past quarter century this power has not only increased to one of disturbing magnitude but it has changed in character. The most alarming of all man's assaults upon the environment is the contamination of air, earth, rivers, and sea with dangerous and even lethal materials. This pollution is for the most part irrecoverable; the chain of evil it initiates not only in the world that must support life but in living tissues is for the most part irreversible. In this now universal contamination of the environment, chemicals are the sinister and little-recognized partners of radiation in changing the very nature of the world—the very nature of its life. Strontium 90, released through nuclear explosions into the air, comes to the earth in rain or drifts down as fallout, lodges in soil, enters into the grass or corn or wheat grown there, and in time takes up its abode in the bones of a human being, there to remain until his death. Similarly, chemicals sprayed on croplands or forests or gardens lie long in the soil, entering into living organisms, passing from one to another in a chain of poisoning and death. Or they pass mysteriously by underground streams until they emerge and, through the alchemy of air and sunlight, combine into new forms that kill vegetation, sicken cattle, and work unknown harm on those who drink from once pure wells. As Albert Schweitzer has said, "Man can hardly even recognize the devils of his own creation."

It took hundreds of millions of years to produce the life that now inhabits the earth—eons of time in which that developing and evolving and diversifying life reached a state of adjustment and balance with its surroundings. The environment, rigorously shaping and directing the life it supported, contained elements that were hostile as well as supporting. Certain rocks gave out dangerous radiation; even within the light of the sun, from which all life draws its energy, there were short-wave radiations with power to injure. Given time—time not in years but in millennia—life adjusts, and a balance has been reached. For time is the essential ingredient; but in the modern world there is no time.

The rapidity of change and the speed with which new situations are created follow the impetuous and heedless pace of man rather than the deliberate pace of nature. Radiation is no longer merely the background radiation of rocks, the bombardment of cosmic rays, the ultraviolet of the sun that have existed before there was any life on earth; radiation is now the unnatural creation of man's tampering with the atom. The chemicals to which life is asked to make its adjustment are no longer merely the calcium and silica and copper and all the rest of the minerals washed out of the rocks and carried in rivers to the sea; they are the synthetic creations of man's inventive mind, brewed in his laboratories, and having no counterparts in nature.

To adjust to these chemicals would require time on the scale that is nature's; it would require not merely the years of a man's life but the life of generations. And even this, were it by some miracle possible, would be futile, for the new chemicals come from our laboratories in an endless stream; almost five hundred annually find their way into actual use in the United States alone. The figure is staggering and its implications are not easily grasped—500 new chemicals to which the bodies of men and animals are required somehow to adapt each year, chemicals totally outside the limits of biologic experience.

Among them are many that are used in man's war against nature. Since the mid-1940's over 200 basic chemicals have been created for use in killing insects, weeds, rodents, and other organisms described in the modern vernacular as "pests"; and they are sold under several thousand different brand names.

These sprays, dusts, and aerosols are now applied almost universally to farms, gardens, forests, and homes—nonselective chemicals that have the power to kill every insect, the "good" and the "bad," to still the song of birds and the leaping of fish in the streams, to coat the leaves with a deadly film, and to linger on in the soil—all this though the intended target may be only a few weeds or insects. Can anyone believe it is possible to lay down such a barrage of poisons on the surface of the earth without making it unfit for all life? They should not be called "insecticides," but "biocides."

The whole process of spraying seems caught up in an endless spiral. Since DDT was released for civilian use, a process of escalation has been going on in which ever more toxic materials must be found. This has happened because insects, in a triumphant vindication of Darwin's principle of the survival of the fittest, have evolved super races immune to the particular insecticide used, hence a deadlier one has always to be developed—and then a deadlier one than that. It has happened also because, for reasons to be described later, destructive insects often undergo a “flareback,” or resurgence, after spraying, in numbers greater than before. Thus the chemical war is never won, and all life is caught in its violent crossfire.

Along with the possibility of the extinction of mankind by nuclear war, the central problem of our age has therefore become the contamination of man's total environment with such substances of incredible potential for harm—substances that accumulate in the tissues of plants and animals and even penetrate the germ cells to shatter or alter the very material of heredity upon which the shape of the future depends.

Some would-be architects of our future look toward a time when it will be possible to alter the human germ plasm by design. But we may easily be doing so now by inadvertence, for many chemicals, like radiation, bring about gene mutations. It is ironic to think that man might determine his own future by something so seemingly trivial as the choice of an insect spray.

All this has been risked—for what? Future historians may well be amazed by our distorted sense of proportion. How could intelligent beings seek to control a few unwanted species by a method that contaminated the entire environment and brought the threat of disease and death even to their own kind? Yet this is precisely what we have done. We have done it, moreover, for reasons that collapse the moment we examine them. We are told that the enormous and expanding use of pesticides is necessary to maintain farm production. Yet is our real problem not one of overproduction? Our farms, despite measures to remove acreages from production and to pay farmers not to produce, have yielded such a staggering excess of crops that the American taxpayer in 1962 is paying out more than one billion dollars a year as the total carrying cost of the surplus-food storage program. And is the situation helped when one branch of the Agriculture Department tries to reduce production while another states, as it did in 1958, “It is believed generally that reduction of crop acreage under provisions of the Soil Bank will stimulate interest in use of chemicals to obtain maximum production on the land retained in crops”?

All this is not to say there is no insect problem and no need of control. I am saying, rather, that control must be geared to realities, not to mythical situations, and that the methods employed must be such that they do not destroy us along with the insects.

The problem whose attempted solution has brought such a train of disaster in its wake is an accompaniment of our modern way of life. Long before the age of man, insects inhabited the earth—a group of extraordinarily varied and adaptable beings. Over the course of time since man's advent, a small percentage of the more than half a million species of insects have come into conflict with human welfare in two principal ways: as competitors for the food supply and as carriers of human disease.

Disease-carrying insects become important where human beings are crowded together, especially under conditions where sanitation is poor, as in time of natural disaster or war or in situations of extreme poverty and deprivation. Then control of some sort becomes necessary. It is a sobering fact, however, as we shall presently see, that the method of massive chemical control has had only limited success, and also threatens to worsen the very conditions it is intended to curb.

Under primitive agricultural conditions the farmer had few insect problems. These arose with the intensification of agriculture—the devotion of immense acreage to a single crop. Such a system set the stage for explosive increase in specific insect population. Single-crop farming does not take advantage of the principles by which nature works; it is agriculture as an engineer might conceive it to be. Nature has introduced great variety into the landscape, but man has displayed a passion for simplifying it. Thus he undoes the built-in checks and balances by which nature holds the species within bounds. One important natural check is a limit on the amount of suitable habitat for each species. Obviously then, an insect that lives on wheat can build up its population to much higher levels on a farm devoted to wheat than on one in which wheat is intermingled with other crops to which the insect is not adapted.

The same thing happens in other situations. A generation or more ago, the towns of large areas of the United States lined their streets with the noble elm tree. Now the beauty they hopefully created is threatened with complete destruction as disease sweeps through the elms, carried by a beetle that would have only limited chance to build up large populations and to spread from tree to tree if the elms were only occasional trees in a richly diversified planting.

Another factor in the modern insect problem is one that must be viewed against a background of geologic and human history: the spreading of thousands of different kinds of organisms from their native homes to invade new territories. This worldwide migration has been studied and graphically described by British ecologist Charles Elton in his recent book The Ecology of Invasions. During the Cretaceous Period, some hundred million years ago, flooding seas cut many land bridges between continents and living things found themselves confined in what Elton calls “colossal separate nature reserves.” There, isolated from others of their kind, they developed many new species. When some of the land masses were joined again, about 15 million years ago, these species began to move out into new territories—a movement that is not only still in progress but is now receiving considerable assistance from man.

The importation of plants is the primary agent in the modern spread of species, for animals have almost invariably gone along with the plants, quarantine being a comparatively recent and not completely effective innovation. The United States Office of Plant Introduction alone has introduced almost 200,000 species and varieties of plants from all over the world. Nearly half of the 180 or so major insect enemies of plants in the United States are accidental imports from abroad, and most of them have come as hitchhikers on plants.

In new territory, out of reach of the restraining hand of the natural enemies that kept down its numbers in its native land, an invading plant or animal is able to become enormously abundant. Thus it is no accident that our most troublesome insects are introduced species.

The invasions, both the naturally occurring and those dependent on human assistance, are likely to continue indefinitely. Quarantine and massive chemical campaigns are only extremely expensive ways of buying time. We are faced, according to Dr. Elton, “with a life-and-death need not just to find new technological means of suppressing this plant or that animal”; instead we need the basic knowledge of animal populations and their relations to their surroundings that will “promote an even balance and damp down the explosive power of outbreaks and new invasions.”

Much of the necessary knowledge is now available but we do not use it. We train ecologists in our universities and even employ them in our government agencies but we seldom take their advice. We allow the chemical death rain to fall as though there were no alternative, whereas in fact there are many, and our ingenuity could soon discover many more if given opportunity.

Have we fallen into a mesmerized state that makes us accept as inevitable that which is inferior or detrimental, as though having lost the will or the vision to demand that which is good? Such thinking, in the words of the ecologist Paul Shepard, “idealizes life with only its head out of the water, inches above the limits of toleration of the corruption of its own environment…. Why should we tolerate a diet of weak poisons, a home in insipid surroundings, a circle of acquaintances who are not quite our enemies, the noise of motors with just enough relief to prevent insanity? Who would want to live in a world which is just not quite fatal?”

Yet such a world is pressed upon us. The crusade to create a chemically sterile, insect-free world seems to have engendered a fanatic zeal on the part of many specialists and most of the so-called control agencies. On every hand there is evidence that those engaged in spraying operations exercise a ruthless power. “The regulatory entomologists … function as prosecutor, judge and jury, tax assessor and collector and sheriff to enforce their own orders,” said Connecticut entomologist Neely Turner. The most flagrant abuses go unchecked in both state and federal agencies.

It is not my contention that chemical insecticides must never be used. I do contend that we have put poisonous and biologically potent chemicals indiscriminately into the hands of persons largely or wholly ignorant of their potentials for harm. We have subjected enormous numbers of people to contact with these poisons, without their consent and often without their knowledge. If the Bill of Rights contains no guarantee that a citizen shall be secure against lethal poisons distributed either by private individuals or by public officials, it is surely only because our forefathers, despite their considerable wisdom and foresight, could conceive of no such problem.

I contend, furthermore, that we have allowed these chemicals to be used with little or no advance investigation of their effect on soil, water, wildlife, and man himself. Future generations are unlikely to condone our lack of prudent concern for the integrity of the natural world that supports all life.

There is still very limited awareness of the nature of the threat. This is an era of specialists, each of whom sees his own problem and is unaware of or intolerant of the larger frame into which it fits. It is also an era dominated by industry, in which the right to make a dollar at whatever cost is seldom challenged. When the public protests, confronted with some obvious evidence of damaging results of pesticide applications, it is fed little tranquilizing pills of half-truth. We urgently need an end to these false assurances, to the sugar coating of unpalatable facts. It is the public that is being asked to assume the risks that the insect controllers calculate. The public must decide whether it wishes to continue on the present road, and it can do so only when in full possession of the facts. In the words of Jean Rostand, “The obligation to endure gives us the right to know.”

Money can't buy happiness

By Amy Novotney
American Psychological Association, July/August 2012, 
Vol. 43, No. 7, Page 24

Extremely wealthy people have their own set of concerns: anxiety about their children, uncertainty over their relationships and fears of isolation, finds research by Robert Kenny.

Most of what we think we know about people with a lot of money comes from television, movies and beach novels — and a lot of it is inaccurate, says Robert Kenny, EdD.

In an effort to remedy that, Kenny, a developmental psychologist and senior advisor at the Center on Wealth and Philanthropy at Boston College, is co-leading a research project on the aspirations, dilemmas and personal philosophies of people worth $25 million or more. Kenny and his colleagues surveyed approximately 165 households via an anonymous online survey and were surprised to find that while money eased many aspects of these people's lives, it made other aspects more difficult.

The Monitor spoke to Kenny about his findings and about the significance of his research for those of us who don't have a net worth of $25 million or more.

WHAT PROMPTED YOU TO STUDY WEALTHY FAMILIES?

We wanted to try to understand the deeper motivations of people in high net worth households. They are rarely questioned about this, and instead are asked whether they would like a Mercedes or a Lexus. Do they prefer Tiffany's or Cartier? Most surveys of high net worth households are marketing surveys to sell a product, so the questions that are asked are pretty narrow.

We decided to ask three major questions: First, we asked, "What is the greatest aspiration for your life?" As far as we can tell, no one has ever asked this population that question, yet there are assumptions made about this all the time. The second major question was, "What's your greatest aspiration for your children?" Our third question was, "What's your greatest aspiration for the world?" After each of the major questions we asked, "How does your money help you with your greatest aspiration?" and, "How does your money get in the way?"

WHAT DID YOU FIND?

People consistently said that their greatest aspiration in life was to be a good parent — not exactly the stereotype some might expect. When asked whether their money helps with that, they answered with all the obvious: good schools, travel, security, varied experiences. But when we asked how their money gets in the way, that was a payload. We received response after response on how money is not always helpful. They mentioned very specific concerns, such as the way their children would be treated by others and stereotyped as rich kids or trust fund babies. They wondered whether their children would know if people really loved them or their money, and whether they'd know if their achievements were because of their own skills, knowledge and talent or because they have a lot of money.

Some were concerned about motivation. They worried: if their children have enough money and don't have to worry about covering the mortgage, what will motivate them? How will they lead meaningful lives? This is where the money might get in the way and make things confusing, not necessarily better. Very few said they hoped their children made a lot of money, and not many said they were going to give all the money to charity and let their kids fend for themselves. They were, however, really interested in helping their children figure out how they could live a meaningful life. Even though they did not have to "make a living," they did need to make a life.

As for the respondents' aspirations for the world, they focused, once again, on how to help the youth in the world live healthy, meaningful and impactful lives. Their answers were consistently youth-focused: They were concerned about being good parents, they were concerned about their children and they were concerned about the children of the world in general. We found that to be very interesting, and even surprising because it runs contrary to so many of the stereotypes about this population.


WHAT HAD YOU EXPECTED TO HEAR?

One could expect that you might hear things like, "I wanted to make a lot of money and become financially independent and be able to do whatever I wanted to do whenever I wanted to do it." But very few said anything like that, although they appreciated the temporal freedom. It was so non-financially focused. I expected that when we asked them about their greatest aspiration for their children, we'd get a lot more people saying they wanted their children to be world leaders, but that's not what they said at all. People said, "I'd like them to think about how to make their world a better place." Not the world, their world — their community, their neighborhood, their family.

WHAT MIGHT PSYCHOLOGISTS FIND MOST INTERESTING ABOUT THIS WORK?

A net worth of $25 million or more brings temporal freedom, spatial freedom and sometimes psychological freedom, but it's not always easy. Eventually temporal freedom — the freedom to do anything you want — raises dilemmas about what the best way to use all your time might be. There's also spatial freedom: You get to build anything you want — a house, a business, a new nonprofit — and people often get lost or befuddled with all of their options. And you get choice. You can go to this restaurant or that one, this resort or that one, buy this car or that one. People can get overwhelmed by all the choices and possibilities, and the amount of freedom that they have.

Then the overwhelming question becomes: What is the best use of my time and resources? After a while one can actually become stymied and even dispirited. There are plenty of folks who are more than willing to make suggestions, but it takes a lot of individual work to develop the psychological freedom to make decisions. For most people, that's not a problem because time and money are limited, so the choices are limited. For therapists, though, being willing to try to understand the challenges of having an oversupply of time and money can be difficult.

The takeaway from all of this is that there seemed to be a trend that said you can't buy your way out of the human condition. For example, one survey participant told me that he'd sold his business, made a lot of money off that and lived high for a while. He said, "You know, Bob, you can just buy so much stuff, and when you get to the point where you can just buy so much stuff, now what are you going to do?"

WHAT'S THE SIGNIFICANCE OF THIS RESEARCH FOR THE VAST MAJORITY OF US WHO AREN'T WEALTHY?

This research shows the rest of the world, many of whom think that if they just made one more bonus or sold one more item or got one more promotion, then their world and their family's world would be so much better, that this isn't necessarily true. There's another whole level of concerns that parents are going to have about their kids. One of those concerns is this feeling of isolation. That's actually a No. 1 concern for families with a high net worth — this sense of isolation — and the higher the wealth, the worse it gets. We know this is a very powerful feeling when it comes to one's overall sense of well-being, and these people feel very isolated because they have what most of the world thinks they want. Just because you have money doesn't mean you're not going to have a bad day every once in a while. But what you often lose when you have all this money is the friendships that support you through the difficult times.

WHAT HAVE YOU LEARNED THROUGH YOUR YEARS OF WORKING WITH PEOPLE WITH A HIGH NET WORTH?

I think the toughest part about both working with this population and being in this population is that as soon as you say they have a net worth of $25 million, someone will start playing the violin. Like, "Oh, cry me a river, you have all this money and it's causing problems?"

No one is saying, "Poor me, I have a lot of money." In fact, most of them are saying, "I love having a lot of money. But don't get me wrong, there are some downsides."


These people don't have to worry about whether they'll have enough to make the mortgage payment, and they feel very fortunate. But it isn't nirvana either. If their kids have access to a lot of money, and therefore a lot of drugs, that hurts just as much as if they don't have any money and their kids are doing drugs. It doesn't save you from any of that. It's still a parent who has a child who is hurting.

The Omnivore's Dilemma


Michael Pollan

Michael Pollan is the author of “The Omnivore's Dilemma: A Natural History of Four Meals,” which was named one of the ten best books of 2006 by the New York Times and the Washington Post. It also won the California Book Award, the Northern California Book Award, and the James Beard Award for best food writing, and was a finalist for the National Book Critics Circle Award. He is also the author of “In Defense of Food: An Eater's Manifesto,” “The Botany of Desire: A Plant's-Eye View of the World,” “A Place of My Own,” and “Second Nature.”

A contributing writer to the New York Times Magazine, Pollan is the recipient of numerous journalistic awards, including the James Beard Award for best magazine series in 2003 and the Reuters-I.U.C.N. 2000 Global Award for Environmental Journalism. His articles have been anthologized in Best American Science Writing, Best American Essays and the Norton Book of Nature Writing. Pollan served for many years as executive editor of Harper's Magazine and is now the Knight Professor of Science and Environmental Journalism at UC Berkeley.

Smog in our brains


By Kirsten Weir
American Psychological Association 
July/August 2012, Vol 43, No. 7

Researchers are identifying startling connections between air pollution and decreased cognition and well-being.

That yellow haze of smog hovering over the skyline isn't just a stain on the view. It may also leave a mark on your mind.

Researchers have known since the 1970s that high levels of air pollution can harm both cardiovascular and respiratory health, increasing the risk of early death from heart and lung diseases. The effect of air pollution on cognition and mental well-being, however, has been less well understood. Now, evidence is mounting that dirty air is bad for your brain as well.

Over the past decade, researchers have found that high levels of air pollution may damage children's cognitive abilities, increase adults' risk of cognitive decline and possibly even contribute to depression.

"This should be taken seriously," says Paul Mohai, PhD, a professor in the University of Michigan's School of Natural Resources and the Environment who has studied the link between air pollution and academic performance in children. "I don't think the issue has gotten the visibility it deserves."     
      
Cognitive connections

Most research on air pollution has focused on a type of pollutant known as fine particulate matter. These tiny particles — 1/30th the width of a human hair — are spewed by power plants, factories, cars and trucks. Due to its known cardiovascular effects, particulate matter is one of six principal pollutants for which the Environmental Protection Agency (EPA) has established air quality standards.

It now seems likely that the harmful effects of particulate matter go beyond vascular damage. Jennifer Weuve, MPH, ScD, an assistant professor of internal medicine at Rush Medical College, found that older women who had been exposed to high levels of the pollutant experienced greater cognitive decline compared with other women their age (Archives of Internal Medicine, 2012). Weuve's team gathered data from the Nurses' Health Study Cognitive Cohort, a population that included more than 19,000 women across the United States, age 70 to 81. Using the women's address history, Weuve and her colleagues estimated their exposure to particulate matter over the previous seven to 14 years. The researchers found that long-term exposure to high levels of the pollution significantly worsened the women's cognitive decline, as measured by tests of cognitive skill.

Weuve and her colleagues investigated exposure to both fine particulate matter (the smallest particles, less than 2.5 micrometers in diameter) and coarse particulate matter (larger particles ranging from 2.5 to 10 micrometers in size).

"The conventional wisdom is that coarse particles aren't as important as fine particles" when it comes to human health, Weuve says. Previous studies in animals and human cadavers had shown that the smaller particles can more easily penetrate the body's defenses. "They can cross from the lung to the blood and, in some cases, travel up the axon of the olfactory nerve into the brain," she says. But Weuve's study held a surprise. She found that exposure to both fine and coarse particulate was associated with cognitive decline.

Weuve's results square with those of a similar study by Melinda Power, a doctoral candidate in epidemiology and environmental health at the Harvard School of Public Health. Power and her colleagues studied the link between black carbon — a type of particulate matter associated with diesel exhaust, a source of fine particles — and cognition in 680 older men in Boston (Environmental Health Perspectives, 2011). "Black carbon is essentially soot," Power says.

Power's team used black carbon exposure as a proxy for measuring overall traffic-related pollution. They estimated each man's black carbon exposure by cross-referencing their addresses with an established model that provides daily estimates of black carbon concentrations throughout the Boston area. Much like Weuve's results in older women, Power and colleagues found that men exposed to high levels of black carbon had reduced cognitive performance, equivalent to aging by about two years, as compared with men who'd had less black carbon exposure.

But while black carbon is a convenient marker of air pollution, it's too soon to say that it's what's causing the cognitive changes, Power says. "The problem is there are a lot of other things associated with traffic — noise, gases — so we can't say from this study that it's the particulate part of the air pollution that matters."

Still, the cumulative results of these studies suggest that air pollution deserves closer scrutiny as a risk factor for cognitive impairment and perhaps dementia.

"Many dementias are often preceded by a long period of cognitive decline. But we don't know very much about how to prevent or delay dementia," Weuve says. If it turns out that air pollution does contribute to cognitive decline and the onset of dementia, the finding could offer a tantalizing new way to think about preventing disease. "Air pollution is something that we can intervene on as a society at large, through technology, regulation and policy," she says.

Young minds

Research is also finding air-pollution-related harms to children's cognition. Shakira Franco Suglia, ScD, an assistant professor at Boston University's School of Public Health, and colleagues followed more than 200 Boston children from birth to an average age of 10. They found that kids exposed to greater levels of black carbon scored worse on tests of memory and verbal and nonverbal IQ (American Journal of Epidemiology, 2008).

More recently, Frederica Perera, DrPH, at the Columbia University Mailman School of Public Health, and colleagues followed children in New York City from before birth to age 6 or 7. They discovered that children who had been exposed to higher levels of urban air pollutants known as polycyclic aromatic hydrocarbons while in utero were more likely to experience attention problems and symptoms of anxiety and depression (Environmental Health Perspectives, 2012). These widespread chemicals are a byproduct of burning fossil fuels.

Meanwhile Mohai, at the University of Michigan, found that Michigan public schools located in areas with the highest industrial pollution levels had the lowest attendance rates and the greatest percentage of students who failed to meet state testing standards, even after controlling for socioeconomic differences and other confounding factors (Health Affairs, 2011). What's worse, the researchers analyzed the distribution of the state's public schools and found that nearly two-thirds were located in the more-polluted areas of their districts. Only about half of states have environmental quality policies for schools, Mohai says, "and those that do may not go far enough. More attention needs to be given to this issue."

Although Michigan and Massachusetts may experience areas of poor air quality, their pollution problems pale in comparison to those of Mexico City, for example. In a series of studies, Lilian Calderón-Garcidueñas, MD, PhD, a neuropathologist at the University of Montana and the National Institute of Pediatrics in Mexico City, has investigated the neurological effects of the city's infamous smog.

In early investigations, Calderón-Garcidueñas dissected the brains of dogs that had been exposed to the air pollution of Mexico City and compared them with the brains of dogs from a less-polluted city. She found that the Mexico City dogs' brains showed increased inflammation and pathology, including amyloid plaques and neurofibrillary tangles, clumps of proteins that serve as a primary marker for Alzheimer's disease in humans (Toxicologic Pathology, 2003).

In follow-up research, Calderón-Garcidueñas turned her attention to Mexico's children. In one study, she examined 55 kids from Mexico City and 18 from the less-polluted city of Polotitlán. Magnetic resonance imaging scans revealed that the children exposed to urban pollution were significantly more likely to have brain inflammation and damaged tissue in the prefrontal cortex. Neuroinflammation, Calderón-Garcidueñas explains, disrupts the blood-brain barrier and is a key factor in many central nervous system disorders, including Alzheimer's disease and Parkinson's disease. Perhaps more troubling, though, the differences between the two groups of children weren't just anatomical. Compared with kids from cleaner Polotitlán, the Mexico City children scored lower on tests of memory, cognition and intelligence (Brain and Cognition, 2008).

Brain changes

It's becoming clearer that air pollution affects the brain, but plenty of questions remain. Randy Nelson, PhD, a professor of neuroscience at the Ohio State University, is using mouse studies to find some answers. With his doctoral student Laura Fonken and colleagues, he exposed mice to high levels of fine particulate air pollution five times a week, eight hours a day, to mimic the exposure a human commuter might receive if he or she lived in the suburbs and worked in a smoggy city (Molecular Psychiatry, 2011). After 10 months, they found that the mice that had been exposed to polluted air took longer to learn a maze task and made more mistakes than mice that had not breathed in the pollution.

Nelson also found that the pollutant-exposed mice showed signs of the rodent equivalent of depression. Mice said to express depressive-like symptoms give up swimming more quickly in a forced swim test and stop sipping sugar water that they normally find attractive. Both behaviors can be reversed with antidepressants. Nelson found that mice exposed to the polluted air scored higher on tests of depressive-like responses.

To find out more about the underlying cause of those behavioral changes, Nelson compared the brains of mice that had been exposed to dirty air with brains of mice that hadn't. He found a number of striking differences. For starters, mice exposed to particulate matter had increased levels of cytokines in the brain. (Cytokines are cell-signaling molecules that regulate the body's inflammatory response.) That wasn't entirely surprising, since previous studies investigating the cardiovascular effects of air pollution on mice had found widespread bodily inflammation in mice exposed to the pollution.

More surprisingly, Nelson also discovered physical changes to the nerve cells in the mouse hippocampus, a region known to play a role in spatial memory. Exposed mice had fewer spines on the tips of the neurons in this brain region. "Those [spines] form the connections to other cells," Nelson says. "So you have less dendritic complexity, and that's usually correlated with a poorer memory."

The changes are alarming and surprising, he says. "I never thought we'd actually see changes in brain structure."

Nelson's mice experienced quite high levels of pollution, on par with those seen in places such as Mexico City and Beijing, which rank higher on the pollution scale than U.S. cities. It's not yet clear whether the same changes would occur in mice exposed to pollution levels more typical of American cities. Another limitation, he notes, is that the animals in his study were genetically identical. Nelson says he'd like to see similar studies of wild-type mice to help tease out whether genetic differences might make some people more or less vulnerable to the effects of pollution. "I would suspect there are people who are wildly susceptible to this and people who are less so, or not at all," he says.

Few studies have investigated connections between depression and air pollution, but Nelson's wasn't the first. A study by Portuguese researchers explored the relationship between psychological health and living in industrial areas. They found that people who lived in areas associated with greater levels of air pollution scored higher on tests of anxiety and depression (Journal of Environmental Psychology, 2011).

Back in Ohio, Nelson plans to study how much — or how little — pollution is necessary to cause changes in the brain and behavior. He's also beginning to look at the effects of air pollution on pregnant mice and their offspring. Though more research is needed to fully understand how dirty air impairs the brain, he says, the picture that's emerging suggests reason for concern.

In the United States, the Environmental Protection Agency reviews the scientific basis for particulate matter standards every five years or so, and completed its last review in 2009.

To date, the EPA hasn't factored psychological research into its standards assessments, but that could change, according to a statement the EPA provided to the Monitor. "Additional research is necessary to assess the impact of ambient air pollutants on central nervous system function, such as cognitive processes, especially during critical windows of brain development. To this end, as the number of … studies continue to increase and add to the weight of overall evidence, future National Ambient Air Quality Standards assessments will again assess and address the adequacy of existing standards."

In the meantime, says Weuve, there's not much people can do to protect themselves, short of wearing special masks, installing special filtration systems in their homes and offices or moving to cities with less airborne pollution. "Ultimately, we're at the mercy of policy," she says.

The good news, Nelson says, is that the mental and cognitive effects of air pollution are finally beginning to receive attention from the mental health research community. "We sort of forget about these environmental insults," says Nelson. "Maybe we shouldn't."

Aristotle

By Russell M. Lawson
World History: Ancient and Medieval Eras


Aristotle is considered the greatest scientist and one of the greatest philosophers of the ancient world. A student of Plato, Aristotle was the teacher of Alexander the Great and the founder of the Peripatetic school of thought. His vast writings include Metaphysics, Physics, Nicomachean Ethics, Politics, and Poetics. Aristotle was one of the first empirical thinkers, relying on what became the established methods of science: observation, collection and categorization of specimens, analysis of data, induction, and deduction. Aristotle's mastery of the subjects he studied gained him the reputation in subsequent centuries as an infallible guide to natural phenomena and philosophy. After 1500 CE, in light of new discoveries by Nicolaus Copernicus, Galileo, Isaac Newton, and other scientists, many of Aristotle's theories were rejected; nevertheless, his influence on modern science is undeniable.

Aristotle was born in 384 BCE in the small town of Stagira in Thrace, a primitive outpost of Greek culture east of Macedonia. His father was a wealthy court physician to the kings of Macedonia, and Aristotle spent his early years at Pella, the capital of King Amyntas III and his successor King Philip II of Macedon. Aristotle, seeking to follow in his father's footsteps as a scientist and physician, journeyed south to Athens in 366. He studied at the Academy, Plato's school in Athens, where he became that philosopher's most famous student. At the Academy, Aristotle fit in as a wealthy aristocrat, but his Thracian and Macedonian background plagued him among condescending Athenians. In the end, Aristotle's superior intellect silenced all criticism.

From Plato, Aristotle learned of the universal truth, which Socrates termed "the Good." Plato taught his students at the Academy that the best means to approach an understanding of truth was through reason, the study of mathematics and music, intuition, and intense and deep contemplation. Aristotle, less the mystical and more the pragmatic thinker, broke from his teacher by adopting the scientific approach to human behavior, natural philosophy, natural science, ethics, and metaphysics. Aristotle also learned from Plato of being (ousia), the divine essence, from which all things derive. Aristotle did not abandon this religious interpretation of the ultimate reality but brought science to bear to discover and to understand it. For Aristotle, then, science is a pious act to discover the nature of goodness, justice, virtue, and being, and human experience is an essential matter for study, since the better sort of human beings echo being itself.

Upon Plato's death, Aristotle left what was no doubt a competitive situation among Plato's students, each jockeying to take the place of the master. Aristotle journeyed to a small kingdom in Asia Minor (present-day Turkey) where he became court philosopher to King Hermias. Aristotle married the king's daughter but soon fled (with his wife) upon the tragic assassination of the king. Aristotle ended up back in Macedonia in 343, this time as tutor to the royal prince Alexander (Alexander the Great). Legend has it that Philip II of Macedon enticed Aristotle to return to Pella, an intellectual and cultural backwater compared to Athens, with a tempting salary and a promise: Stagira having been destroyed and its population enslaved in one of Philip's campaigns, Philip proposed that in return for Aristotle's services the king would rebuild the town and bring the inhabitants out of slavery. Aristotle agreed to the terms.

Alexander eventually became king of Macedonia in 336 upon his father's assassination and then spent the next 13 years of his life conquering Greece, Asia Minor, Palestine, Egypt, Iran, Iraq, and Afghanistan—all of which made up the Persian Empire. Alexander was a warrior and conqueror who thought himself the heroic son of the king of the gods, Zeus. Nevertheless, Aristotle, who eschewed the life of a warrior, had been Alexander's teacher for three years, from the time the prince was 13 until he was 16, and below the surface of Alexander's actions are hints that he had adopted the life of a philosopher and that he thought of himself as a scientist, even a physician. Alexander, for example, sent Aristotle letters along with samples of plant and animal life that he had gathered for his teacher's collection.


In the meantime, Aristotle had left Macedonia for Athens, where he opened his school, the Lyceum. The philosopher eventually broke with Alexander over the death of Aristotle's grandnephew Callisthenes, a philosopher and historian who accompanied Alexander's expedition. Callisthenes was implicated in a plot to assassinate the king and was executed. Even so, the Athenians associated Aristotle with Alexander, who was very unpopular in Athens. Upon Alexander's death in 323, the Athenians felt free enough to throw off the shackles imposed on them by Alexander—and one shackle was represented by Alexander's former teacher. Aristotle was eventually forced to flee the city and abandon his school. He died soon after, in 322 BCE.

Aristotle is perhaps best known today as a logician. He created a system of thought based on fundamental assumptions that one cannot doubt—the famous a priori truths. Whereas Plato believed that one must accomplish knowledge of truth by means of reason and intuition, Aristotle believed that the philosopher must observe particular phenomena to arrive at an understanding of reality, a scientific technique known as induction. Once truth is known through induction from the particular to the universal, the philosopher can engage in the process of deduction from the basis of the universal to arrive at other particular truths. Aristotle's system of logic is known as the syllogism, in which a conclusion follows necessarily from two premises (the classic example: all men are mortal; Socrates is a man; therefore Socrates is mortal).

Aristotle also made contributions in metaphysics, the study of reality that transcends the physical world. Once again a priori truths are the basis for metaphysical studies. Aristotle assumed that there is a First Cause, an "unmoved mover," that he defined as actuality, in contrast to potency, or the potential, which represents movement. Aristotle argued that all reality can be explained according to cause and effect, act and potential. For example, time is an actual phenomenon—it has existence as a form or essence. Time acts upon human movement, providing a temporal context in which humans are born, live, and die, all the while measuring their lives according to the standard of time. Aristotle further argued in Metaphysics that one must distinguish between art and experience. Art as essence is based on abstract thought—what the Greeks termed the logos—whereas experience is based on a series of particular events occurring in time. In Poetics, Aristotle argued that poetry (art) explores universals and how things ought to be, while history (historia) explains the particulars of human existence and how things are. Wisdom represents the unification of art and experience.

Aristotle's treatise on natural science was Physics. Natural science, he wrote, is concerned with physical movement from the first principles of nature. Aristotle associated nature with the first cause. His unmoved mover was an amorphous divine force of creation which establishes the laws through which movement—plant, animal, and human—occurs. The four causal determinants expressed in nature are: 1) the material substance that forms a physical object; 2) the type or class of phenomenon (genos) to which an object belongs; 3) the cause of change in or movement of an object; and 4) the goal or purpose (telos) of movement.

Aristotle's categorizations had a profound impact on the formation of a vocabulary of science. His notion of type or class is the basis for the idea that each species in nature belongs to a set genus. Aristotle's idea of goal or purpose forms the philosophical concept of teleology, the study of the end of natural phenomena.

In addition, Aristotle was one of the first students of the human psyche. He wrote treatises on dreams, memory, the senses, prophecy, sleep, and the soul. Aristotle believed that the soul is the actuality within the potency of the body and is the unmoved mover within each individual human, while the mind (nous) is an expression of the soul. Aristotle argued that each human soul is part of a universal whole which is a world soul, the ultimate actuality, and the first cause. Aristotle's study of dreams provided a rational explanation of what the ancients often considered a supernatural phenomenon. Aristotle argued that the only thing "divine" about a dream is that it is part of nature, which is itself the creation of God and hence divine. That events turn out according to one's dream is either coincidence or the result of the subtle impact of a dream on an individual's actions.


In zoological studies, Aristotle's contributions included the treatises Description of Animals, Parts of Animals, and Generation of Animals. In Parts of Animals, Aristotle noted that although the study of animals is a less profound subject than the metaphysical, it is nevertheless an inquiry accessible to anyone willing to explore natural history. Consistent with his Platonic background, Aristotle studied animals for the sake of understanding the whole of natural history. He assumed that the source of all good and beauty is the same source of animal and biological phenomena and that hence even animals mirror the divine.

In the study of ethics, Aristotle dealt with the question of how the ultimate basis of behavior, the set of rules that establishes the Good, can be understood according to science. Aristotle believed that the tools of science—observation, categorization, logic, and induction—could be brought to bear on the study of human behavior. The scientist studies human behavior in its incredible variety of contexts to arrive at general laws of how humans act and how they should act: how humans act is the realm of the scientist, while how humans should act is the realm of the philosopher. Once again, Aristotle combined science and philosophy into one organized study. Aristotle believed that the ultimate end of human existence is happiness, which occurs when humans conform to the Good. The Good is accomplished when humans exercise reason in accordance with virtue. Aristotle studied human behavior to arrive at a definition of virtue, finding that it is an action performed for its own sake, that is, an action performed for the sake of the Good or an action performed out of principle. Aristotle believed that vice, the opposite of virtue, derives from actions committed for selfish reasons or for personal motives.

The Greek philosophers before and during Aristotle's time were the first political scientists. Aristotle's contribution, Politics, applied his philosophical methods and assumptions to the understanding of statecraft. He argued that the state is, as it were, the actual, while the citizens are the potential. The latter are the parts (the particulars) that make up the whole, or the universal body politic. Aristotle conceived of a pluralistic society operating according to natural laws based in part on reason and necessity, a social compact among people to promote security and serve the needs of survival. Within this concept of the state (which represents virtue) people move, act, and struggle for power and wealth. Aristotle argued, based on his experience at Athens, that slavery was justified because of the inferior intellect of slaves. Likewise, he assumed that women lacked the cognitive abilities of males and therefore should not participate in democracy. In The Athenian Constitution, Aristotle provided a detailed analysis of Athenian democracy, with insight into the life and political career of the great Athenian lawgiver Solon.

In the study of astronomy, Aristotle explored his ideas in On the Heavens. Based on observation, Aristotle established the spherical nature of the earth. Viewing a lunar eclipse, Aristotle detected a slight curvature of the shadow of the earth on the moon's surface. He also observed that the altitude of stars changes according to changes in latitude. In On the Heavens, Aristotle concluded that the earth's circumference is 400,000 stadia (roughly 40,000–50,000 miles, a substantial overestimate of the actual circumference of about 24,900 miles). He advocated the view that there is more water than land on the earth's surface. Much of Aristotle's thought on astronomy, however, was erroneous, as observation with the naked eye was insufficient for the study of the nature of the stars and planets.
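
A rough check of the conversion, offered only as a sketch (it assumes the commonly cited Attic stadion of about 185 meters; ancient stadion lengths varied, which is why the mileage range above is approximate):

$$400{,}000\ \text{stadia} \times 185\ \text{m} \approx 74{,}000\ \text{km} \approx 46{,}000\ \text{miles},$$

compared with a true circumference of about 40,075 km (24,900 miles), putting Aristotle's figure at nearly double the actual value.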

Aristotle's ideas were advocated and defended for centuries after the philosopher's death. Aristotle's disciples were identified with the master's teaching style of walking about while engaged in discussion or disputation (from which the name "Peripatetic" derives). Theophrastus took over the helm of the Lyceum, Aristotle's school at Athens. He organized Aristotle's papers and writings and pursued Aristotle's theories and investigations in the physical and metaphysical worlds. After Theophrastus's death in 287 BCE, Strato assumed leadership of the Lyceum and the Peripatetic philosophers.



