Saturday, March 08, 2008

Free Will and Determinism: The Contours of Moral Responsibility

Note: This is a paper I wrote for a class on free will in 2002. In looking over my old papers, it turns out I've written quite a bit on free will, which is funny considering how little I'm concerned with it now. This paper, written before I knew much about pragmatism or Rorty, pretty well enunciates why I don't write much about free will anymore--it attempts to dodge the whole question. It also reminds me of "ordinary language philosophy," though this was well before I knew of that, too.

Also, please ignore my arguments, which will prove easy as I myself can't find any. What I do still find useful are my tables.


In discussing the debate between determinism and free will or in simply discussing the dimensions of free will, it is often claimed that free will has a direct, intrinsic link to moral responsibility. Under the heading “The paradoxical consequences of determinism,” Cornman, Lehrer, and Pappas say:
“If, however, it is a consequence of the thesis of determinism that a person’s actions are the inevitable outcome of causal processes that began before he was born and over which he had no control, then, no matter what a person does, he could not have done otherwise. … Consequently, no person may reasonably be held responsible for any of his actions.”[1]

Dennis Stampe claims that an adequate characterization of free will “should fit with the belief that we are morally responsible for those actions we do in the exercise of the freedom of the will, because we were exercising the freedom of the will.”[2] It’s a common enough claim that we should ask whether it is actually true. In the following, I hope to sketch the contours of our “normal” moral reasoning, showing: 1) that our normal sense of moral reasoning begins with causal responsibility, 2) that moral responsibility is dependent upon a moral standard, and 3) where our free will fits into the picture.

Responsibility means “to be the cause or source of.” If A were to throw a baseball, it could be said that A was responsible for throwing a baseball. Very simply, moral responsibility for one’s actions begins earlier than is commonly supposed: before any determination of the freedom of the will. Moral responsibility begins with causal responsibility, or, rather, moral responsibility begins with the differentiation between individuals (ego differentiation), or (to use another vocabulary) with the differentiation between causal chains (the Matt causal chain vs. the Dennis causal chain). This also works in the case of placing responsibility on a group of people, such as placing the responsibility for slavery upon white Americans (if one were so inclined to do). All white Americans would be considered part of the same causal chain if that is how the chain is broken up.[3]

If responsibility is created by the differentiation of the ego from the Other, from me to you, then responsibility for our actions still has meaning in a determinist system. Contrary to the reasoning displayed by Cornman, Lehrer, and Pappas, a person may have no control but would still be responsible. And think of the changes to the legal system. The penalty for murder would be the same across the board. Insanity as a plea would be removed. The mentally handicapped would be tried as equals to those with full mental capabilities. In fact, everyone would be tried on an equal footing, be they age 2 or 100.

Of course, this is not how we reason. Without considering the metaphysical reality of free will, we reason morally with it as a consideration. As Stampe says, we have a belief that free will is linked somehow with moral responsibility. We typically insert “voluntary action” in place of “free will,” but the effect is still the same. We say, “Person X ate that hot dog freely or voluntarily or of his own free will,” without reference to a metaphysical entity called “free will.” The fact is, even in a deterministic system, a consideration of the freedom of the will could still be used in assigning moral responsibility. What I wish to investigate is how, generally, we reason about moral responsibility, to find exactly how the issue of free will plays into it.

Moral responsibility is the degree to which labels of honor or reproach and/or rewards and punishments are adjudicated based upon moral behavior. Moral behavior is desirable behavior. An objection to this very simple definition may be that it does not account for so-called “morally neutral” behavior, such as intelligent, skillful, or beautiful behavior that may also be desirable behavior. A person may have made a shrewd chess move, but we typically do not refer to excellent chess playing as morally sound behavior. However, I think this type of objection begs the question of what moral standard we are using. A moral standard, which tells us what moral behavior is, tells us what is right and wrong. The reason you follow any particular moral standard is because “right action” is behavior you want and “wrong action” is behavior you don’t want, i.e. moral behavior is desirable behavior. The moral standard itself tells us what exactly is morally desirable behavior, such that it can exclude astute chess maneuvering.

It may seem that I’m stretching the definition of moral behavior and morality in general to proportions that aren’t useful. The problem is that it is difficult to talk about moral responsibility in general without immediately begging the question in favor of a particular moral standard. Moral responsibility is assigned according to a system of morality. Break a moral law and the system tells you how you are to be punished. Perform a moral duty and the system tells you how you are to be rewarded. We cannot assign any kind of moral responsibility without recourse to some moral system, even if the system is as simple as “Thou art morally responsible for all that thou art causally responsible for.” The earlier illustration of causal responsibility as the genesis of moral responsibility is, in fact, not a necessary condition, but a contingent one. Theoretically, we could say that A is responsible for B’s problems even if A had nothing causally to do with them. But we don’t reason this way, and so we are already within a moral standard of some kind. What I would like to do is take an example of behavior that would generally be considered immoral: killing person X. This is pretty much universally considered undesirable. I hope to sketch the outlines of how we reason morally by taking four different causes of X’s death and seeing how we assign moral responsibility.

Take these four examples:
1) Dog A (with rabies) kills person X: we hold dog A responsible.
2) Insane person B (with no control over his actions) kills person X: we hold person B responsible (to an extent).
3) Person C with a gun to their head is forced to kill person X: we do not hold person C responsible.
4) Person D kills person X: we hold person D responsible.

Table 1: (CR=causal responsibility, VA=voluntary action, MR=moral responsibility)

The Killer: -- CR? -- VA? -- MR?
A ------------ yes --- no ---- yes
B ------------ yes --- no ---- yes
C ------------ yes --- no ---- no
D ------------ yes --- yes --- yes

An objection might be raised about assigning moral responsibility to a dog or an insane person. However, as far as I can tell, moral responsibility is as my definition holds: the basis of handing out labels of honor or reproach and/or rewards or punishments. Given that, and given what moral behavior is (at root, desirable action), we have the ability to say, “That’s a good dog” or “Bad dog!” And we do say those things. Those are labels of honor and reproach. And we punish the dog for doing bad things and reward it for doing good things, all the while knowing that the dog is not exercising any free will. In the case of the dog with rabies we put the dog down for doing something as morally extreme as killing a person. The same goes for the insane person. If an insane person goes out and kills somebody, we punish him (by sending him to a mental hospital or something of the kind) even though he had no free will. The reasoning is that “he was a detriment or destructive to society and/or harmful towards himself or others,” i.e. he was performing undesirable behavior, a.k.a. immoral behavior.

It can be argued that when we say, “Good dog!” we are not giving the dog a moral label, but merely giving the dog positive reinforcement. This is entirely possible, and is quite probable, but the same thing can be said for saying, “Good job!” when a small child has done something desirable. Without that positive reinforcement, how would the child know if he was doing something wrong or good? We give such positive (and negative) reinforcement to develop a moral standard. However, after such reinforcement is handed out, we still tend to label children and dogs as being good or bad based on their past patterns of behavior.

So what we have in Table 1 is a discrepancy. How can dog A and insane person B be held responsible for killing X, while performing a non-voluntary action, and person C not be held responsible, even though he also performed a non-voluntary action? Why the discrepancy? I would argue that there must be another variable in play. The difference between dog A and insane person B and person C is that person C has the ability to exercise voluntary control over his actions, while A and B do not.

Table 2: (AV=ability to exercise voluntary control)

The Killer: -- CR? -- AV? -- VA? -- MR?
A ------------ yes --- no ---- no ---- yes
B ------------ yes --- no ---- no ---- yes
C ------------ yes --- yes --- no ---- no
D ------------ yes --- yes --- yes --- yes
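The pattern in the table can be read as a simple decision rule. Here is a minimal sketch in Python; the function name and the boolean encoding are my own illustration, not anything from the philosophical literature:

```python
# CR = causal responsibility, AV = ability to exercise voluntary control,
# VA = voluntary action, MR = moral responsibility.
def morally_responsible(cr: bool, av: bool, va: bool) -> bool:
    if not cr:
        return False  # no causal responsibility, no moral responsibility
    if not av:
        return True   # dog A, insane person B: no ability to control, held responsible
    return va         # person C (coerced): excused; person D (voluntary): responsible

# The four killers from Table 2:
assert morally_responsible(True, False, False)      # A: responsible
assert morally_responsible(True, False, False)      # B: responsible
assert not morally_responsible(True, True, False)   # C: not responsible
assert morally_responsible(True, True, True)        # D: responsible
```

The point of the sketch is only that voluntary action (VA) matters to the verdict solely when the ability to exercise voluntary control (AV) is present, which is the discrepancy the table is meant to expose.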

Table 2 bears out a more accurate sketch of moral responsibility than Table 1. We hold dog A and insane person B morally responsible because they had no ability to exercise voluntary control: presumably, since they cannot control their actions, the controlling falls to us. However, if you do have control over your actions, as person C did, it is presumed that you would not have performed the undesirable action had extenuating circumstances not intervened. Person C could have chosen to take a bullet to the head instead of killing person X.

Normally it would seem unreasonable to ask someone to die instead of performing action Y, which is why, in Table 2, we did not hold person C morally responsible. If action Y had been, say, kicking person X, it certainly does seem unreasonable to ask person C to choose to die. However, it is not quite as clear-cut in the case of person C killing person X. For instance, if person C is driving down the road and person X jumps in front of person C’s car out of nowhere and is killed, our legal system wouldn’t convict person C of first-degree murder, but possibly of manslaughter.[4] The point is that we find person C answerable for what he has done, though possibly not as responsible as the person who intentionally runs down person X.

This cleaving of moral responsibility into varying degrees of responsibility based on, say, freedom of action or intentionality, rests on the notion of being “answerable, though not fully responsible.” This notion is based on the fact that we conceive of moral responsibility as beginning first with causal responsibility. All of this, however, rests on some particular conception of a moral standard. A moral standard could, theoretically, have absurd reasoning such as “If thou shalt kill a Bob, thou must kill a Roger,” which would totally ignore causal responsibility in identifying moral responsibility. While possible, this is clearly not how we reason.

In this short investigation, I hope to have sketched out, in very broad terms, the way in which we normally reason morally. The first step is seeing that moral responsibility rests on a moral standard. The second is that, within a “normal” paradigm of moral reasoning, causal responsibility is the first consideration of moral responsibility. Free will or voluntary action fits into moral reasoning first as “the ability to exercise voluntary control” and only secondly as “the exercising of voluntary control.” This displacement of the consideration of the freedom of the will makes moral reasoning possible in deterministic systems and frees moral responsibility from being necessarily connected to free will, making for a more accurate portrayal of how we reason morally.

[1] Cornman, Lehrer, and Pappas

[2] Dennis Stampe in a handout for Philosophy 530 Freedom, Fate, and Choice.

[3] There is one objection to causal responsibility as the beginning of moral responsibility and it runs along these lines: A may be morally responsible for B’s condition even though A had nothing causally to do with it. For instance, the United States is morally responsible for poverty in Africa (assuming for the sake of argument that the United States had nothing to do with poverty in Africa). Our morality says that we should help the poor in Africa. This, though, is best understood as a moral obligation as opposed to a responsibility. We are not responsible for the poverty in Africa, but we are obliged to do something about it. If we do do something about it, then we are morally responsible for what we did, i.e. we can then be rewarded or punished.

[4] Though our legal system may not be a good source of sound moral reasoning in every case, it is a good guide of how we might typically reason in some cases.

Sunday, March 02, 2008

Parmenides, Plato, Aristotle, and Reason

Note: I wrote this in 2000, for a class on the history of science. I've done little editing and I still think it does a decent job in setting a certain palette, a way of perceiving the history of philosophy (that I was already imbibing from Pirsig, but later drank more fully from Rorty).


In the history of Western thought, the possibility (or impossibility) of change was a question that pushed early thinkers to some of the most important philosophical distinctions of their time period and ours. These distinctions would leave their mark on philosophical inquiry and would not be fully taken up until the Modern era. Some of the most important distinctions were defined by the Eleatics (led by Parmenides), Plato, and Aristotle. While none of the three theories of change grew out of a vacuum, Parmenides more or less followed the force of his own logic, Plato kept Parmenides’ conclusions close at hand, and Aristotle’s theory grew in direct response to both of them. All three also used reason as the primary gateway to knowledge. But in their responses to the problem of change, each one integrates the material world more than the last. This integration allows more satisfactory models, but poses significant problems in itself. Parmenides never runs into these problems because he rejects the material world outright as an illusion. So to address these problems in Plato and Aristotle we must first identify Parmenides’ answer to the problem of change and its consequences for both change and the use of reason.

Parmenides’ argument is easily constructed in an analysis of the statement “A becomes B.” In this statement, it can be seen that A is not B because if A=B then there would be no need for change: A is already B. As it is, A is not B and so “A” can be replaced by “not B”. The statement “not B becomes B”, though, does not make sense to Parmenides. A cannot become B unless A has some of B already in it, but for Parmenides this would violate its essential not-Bness: A cannot have any of B in it because A is not B. It is through this kind of argument and logic that Parmenides says that change is impossible and that A does equal B, which equals C ad infinitum. As David C. Lindberg, in his magisterial history of Western science, says,
“What does one do if experience suggests the reality of change, while careful argumentation (with due attention to the rules of logic) unambiguously teaches its impossibility? For Parmenides and Zeno, the answer was clear: the rational process must prevail.”[1]
And so Parmenides declares that all change is merely an elaborate illusion and the underlying reality is a completely stable, unchanging monism.

Enter Plato. The force of Parmenides’ argument, taken to its logical conclusion, is truly stunning, but not quite satisfactory. Plato agrees that the underlying reality must be unchanging, but disagrees that this must be all that reality is. It is here that Plato enunciates the first great metaphysical distinction, between appearance and reality, which Parmenides had used, but not in a systematic way. It is encapsulated in Plato’s divided line, which he elaborates in The Republic. Plato designates the underlying reality as the Realm of Forms. The Realm of Forms is incorporeal, perfect, eternal, insensible, and changeless. In contrast to the Realm of Forms is the material world. It is everything that the Realm of Forms is not: corporeal, imperfect, transitory, sensible, and changing. The divided line separates the two. Plato says that the material world is a reflection of the Realm of Forms and that while the Realm of Forms is completely stable and unchanging, the material world, the reflection, does change as can be plainly seen in everyday life. In this way Plato supplies a rational, fixed underlying reality while still allowing the material world some relevance.

One challenge to this argument would be to question how the Realm of Forms interacts with the material world. How can things that are corporeal interact with things that are insensible? Plato answers this in two ways. First, through a series of arguments, Plato shows that, while we cannot sense the Forms now, our souls were in the Realm of Forms before they were placed in material bodies. We can gain access to the Forms, then, by a process of anamnesis or recollection. Through contemplation we are able to recall the Forms. (Plato does this notably in the Meno by showing how even a slave can be taught basic mathematics: something that, on Plato’s view, could only be done if a Realm of Forms existed.)

Plato also answered this challenge in the form of the Allegory of the Cave. In summary, the Allegory describes people whose only known reality is the back wall of a cave where they can make shadow puppets. One day, one person ventures outside and discovers that the cave and the shadow puppets are not all that exists. In fact, he realizes that the shadows are really only reflections of the sun. It is through this analogy that Plato shows that the Forms (specifically the Form of the Good) represented by the sun, “illuminates” the material world. Without the Forms, the material world would not, could not exist.

It is important to notice here that Plato is continuing with Parmenides’ placement of reason as the pinnacle of knowledge. The process of anamnesis can also be called “reason.” It is through reason, which Plato made into a method called the “dialectic,” that we can reach the Forms. Plato merely adds a material world to account for the change of our senses. The change is real, but not as important as the unchanging reality, reached by reason and begotten by the Forms.

There is one other important distinction to make before we move to Aristotle. It is the distinction between the General and the Particular. The distinction has actually already been made. The General can be equated to the Realm of Forms and the Particular to the material world. This is important because the difference between Plato and Aristotle can be termed a difference in orientation to the General and the Particular. Plato believes and argues that what is real is the Forms or the General, and statements about what is real are found in the material world or the Particular. Aristotle, on the other hand, is just the reverse. What is real is the Particular, and statements about what is real are the General. With this distinction in hand we can turn to Aristotle.

Aristotle believes that what is real is the material world and all that the material world is contingent upon. To Aristotle, the Forms are really just properties possessed by the material Particular, or substance. And like Plato and Parmenides, Aristotle designates that the ultimate reality be unchanging. But how do we account for change now, if the material world, represented as substance and properties, is now unchanging?

To account for change, Aristotle goes back to Parmenides. Parmenides made a distinction between Being and non-Being. Aristotle sees this and says that Being can split into two different types of Being: Potential and Actual Being. So Aristotle agrees with Parmenides that a thing cannot move from non-Being to Being. However, when change occurs, Aristotle says that it is Potential Being changing into Actual Being. Take an acorn and a tree for example. In Parmenides’ model, an acorn (non-tree) cannot change into a tree. In Aristotle’s model, however, an acorn (potential tree) can change into a tree (actual tree) because the acorn houses tree-ness in the form of potentiality. So Aristotle also accounts for change and places an even greater emphasis on the material world than Plato.

What is important about these three philosophers is their use of reason. Each one placed reason on the highest pedestal, but Plato and Aristotle each successively put more and more emphasis on the material world and the senses. The question that arises, then, is why didn’t they question the use of reason? Parmenides used only reason and he found that change was impossible. He stuck to his guns and followed his own argument to its logical conclusion. But instead of looking around himself and seeing the fallacy of his conclusions, he declared that his use of reason had detected a flaw in the senses and that all the “evidence” to the contrary was illusory. But why does the flaw exist in the senses? Nothing in logic dictates that the flaw be placed upon the senses. It's just as logical to think that the flaw exists in the instrument of argumentation used: reason itself. Parmenides doesn’t think so, and neither do Plato or Aristotle. Plato and Aristotle add the senses back into reality to some degree, but the philosophical question that arises out of the problem of change is “How large a role do the senses and reason play in reality?” Once you let a little in, who’s to say how much? And so the philosophical lines between the Rationalists and Empiricists of the Modern era, who would drag “reality” between these two polar points of reason and the senses, were drawn 2,000 years before the culmination point during the 17th century.

[1] David C. Lindberg, The Beginnings of Western Science (University of Chicago Press, 1992), p. 33.