[00:00:01] (laughter)
[00:00:02] MODERATOR:
Attentive readers among you may have noticed that the, uh, moderator today is supposed to be Robert Post, and that I am not Robert Post. Robert unfortunately had to be out of town today, so I’m going to be pinch-hitting. Let me just say a word about the format.
Um, each of the three commentators will be given up to twenty minutes or so to make some additional comments as he or she sees fit about any aspects of these lectures. After each of those comments, Derek will make a brief reply. And after that, um,
(cough)
we will invite questions from the audience. And following that, we hope to have some general discussion. At the very end, Derek will be given some time to make some final remarks about anything that’s come up that he wishes to expand on.
This is supposed to be a seminar and a discussion, and we’ll try to be at least partly faithful to the spirit of that idea. So, without further ado, we should begin. I’ve already introduced all three commentators, so I won’t do it again.
I did just want to say a word of thanks to the three commentators. Uh, for three busy academics to take a week out of their schedules in the middle of the term, uh, and go to another institution to do something like this is extremely difficult and burdensome. And of course, it’s a tribute to the interest and importance of Derek Parfit’s work that they’re willing to do it, and a great stroke of good fortune for us.
So, thank you to Allen, Susan, and Tim very much for helping to make this week so rewarding. And for the first comment, we’re going to go in the same order that has prevailed during the week, so Allen will begin.
[00:02:12] ALLEN WOOD:
In my remarks today, I will take up some of the examples Parfit used in his opening lecture. I admit that my remarks go off at a sort of forty-five degree angle from Parfit’s lectures themselves. Let it be a measure of my general agreement with Parfit’s lectures that in order to find something to controvert, I have to wander off in this way.
But examples of this kind and their use in moral philosophy have long been a pet peeve of mine. I’m not sure myself just why, and so I’m going to try to say why. I’m very uncertain that what I’m going to be saying today is really true or defensible, but it’s something I’ve wanted to discuss for a long time.
In May of 2001, the Tanner lecturer at Stanford was Dorothy Allison, author of the novel, Bastard out of Carolina. Allison didn’t talk much about moral philosophy as such, but she did discuss a lifeboat problem that she had heard from a philosopher. Her reaction was to reject the problem, to refuse to answer it at all, on the ground that she refuses in principle to choose between one life and five lives.
Even to pose the question in those terms, she said, is already immoral. The real moral issue raised by such examples, she thought, is why provision had not been made for more or larger lifeboats. To many philosophers, her remarks will no doubt seem naive, wrong-headed, and unreasonable, but they seem to me the most sensible and right-minded reaction to such problems that I’ve ever heard from anyone.
I’m going to refer to these kinds of examples not as lifeboat problems, but as trolley problems. None of Parfit’s examples are actually about trolleys, though two of them are about trains. They’re all examples where the main point is that you must choose between saving more people from death and saving fewer.
Since a human death is in general bad, given this information alone, it’s natural to think that the option that involves fewer deaths is to be preferred to the one that involves more deaths. The examples gain their poignancy from the fact that this apparently obvious point suddenly begins to seem questionable or even counterintuitive when the fewer deaths are caused in the wrong way. The problem posed is always what we think we should do when faced with such stark choices, and how what we think about this is supposed to bear on various moral principles that philosophers have proposed or might propose.
To take trolley problems in the spirit in which they’re intended, you’re supposed to think that at least some of the issues they pose might be difficult to decide if you had to face them in real life. You are supposed to think that how we decide these difficult issues must have some important implications for the fundamental principles of moral philosophy. But I don’t think either of these things is true.
Like Allison, I think the right way to deal with trolley problems is to refuse in principle to answer them, or rather, as I would modify it, to refuse in principle to take them in the spirit in which they’re intended. I think that to take them in the way philosophers want you to might be not merely immoral, but perhaps even worse, it might be bad moral philosophy. I don’t think our so-called intuitions about any examples are worth much to philosophy unless they are honest and critical moral reactions to the kinds of problems that might actually arise in our lives, hence problems to which we could reasonably think that our moral education and experience might serve as some sort of reliable guide.
But trolley problems, as philosophers pose them, are virtually never of that kind. They casually assume we are certain about things that we could never be certain about in real life, and they cavalierly omit facts about the context that we would know about in real life, facts on which what we ought to do would vitally depend. Virtually all trolley problems are such that just witnessing such a situation in the real world would produce feelings of horror and anguish in us, no matter what we did or didn’t do.
If we place most of these problems in a more realistic context, that feature would remain, but it would usually not be difficult to know what the agent should do.
The decision would very likely not turn on any of the factors to which the trolley problem philosophers try to draw our attention. Suppose a moral philosopher gave you the following example: A group of white people are stranded on one rock, and a group of black people are stranded on another. Before the rising tide covers both rocks, we could use a lifeboat to save either the white people or the black people.
Which should we save? Since the philosopher has told you nothing about how many people are in each group, or anything else about them except their skin color, I would hope that you would resist giving any answer at all to the philosopher’s question. If you did have the intuition that you should save the group of white people, or even the group whose skin color is the same as your own, then I would hope that you would resist answering on the basis of that intuition, and that you would even dislike yourself for having had that intuition at all.
Certainly, you should not think that agreement with such an intuition ought to serve as a test that basic moral principles ought to pass. What is most objectionable here is the conversational implicature of the philosopher’s question itself. The question implies, namely, that you have been given enough information to answer the question as posed, or at least enough to have some intuition worth reflecting on about what the answer should be.
In the case of the example I’ve just described, that implicature is morally offensive all by itself in a very obvious way. Most trolley problems differ from that example, in that in them we have been given information about the situation that is at least prima facie morally relevant. The number of people on each rock is at least not obviously and offensively irrelevant to our decision in a way that their skin color is.
But it may still be true that in trolley problems, we have typically not been given enough information or the right information to have intuitions that are worth much to moral philosophy. In real life, people go to a lot of trouble to arrange things so that no one will ever be placed in the position that, for example, the bystander in the train examples is placed. There are sound moral reasons why this is so, reasons closely connected to Dorothy Allison’s sound observation that it’s already immoral to ask anyone to decide between one person’s life and five people’s lives.
Her point, I believe, is that even if some choice inevitably has the consequence that either one will die or that five will die, it is still immoral to look at the choice only in that way. In light of this point, I want briefly to discuss each of Parfit’s three examples, making, unlike the philosophers who posed the problems, some more realistic assumptions about things that we would know and things we would not know: assumptions about facts whose omission or inclusion is responsible for the philosophical deception I think the examples perpetrate on us.
Lifeboat. When faced with a situation like lifeboat, there is only one morally defensible policy. We must seek to rescue all six people as quickly and efficiently as possible.
Perhaps it is true that following this policy, we should set about rescuing the five before we rescue White, but only because in that way we will go farther, faster, and with greater certainty toward achieving our goal, which is rescuing all six people. If, for any reason, we thought we could go farther, faster, and with greater certainty toward the goal of saving all six by rescuing White first, say, because White’s rock is right on our way to the rock with the other five on it, then we should do that. It is also highly relevant here that in the real world, if both rocks are in imminent danger of being swept under the water, then you could not know for certain that you must choose between saving White and saving the five.
In the real world, if you set out to save all six and took the best means to this end, then there would always be some chance that you would succeed in saving all six. And if both rocks were about to go under, there would also probably be a significant chance that no matter what you did, all six would drown. If a philosopher simply stipulates that we are certain that we can save all and only the inhabitants of exactly one rock, then we should be clear that he’s posing a problem sufficiently different from otherwise similar problems we might face in real life, that any intuitions we have in response to the philosopher’s problem should already be suspect.
What is certainly clear about a situation such as lifeboat is our intuition that if any of the six drown, the result is tragic, probably a ground for grief and traumatic memories for years afterwards. We should regard ourselves as having failed significantly no matter what we did, even if our failure was inevitable and not our fault. Another vivid intuition is that we need to call to account whoever is to blame for the fact that there were not enough lifeboats, find out why this happened, and take steps to minimize the chances of its ever happening again.
If you are a decent and thoughtful person, these intuitions would be at least as strong as any intuition you might have about what you should actually do about White and the five. Yet trolley problem philosophers focus attention short-sightedly on what we should do in the immediate situation, and they would tend to deride us as philosophically dense if we even bothered to express the intuitions that are clearer and more natural. The fact that those intuitions are irrelevant to what interests them ought already to make us distrust both their moral and their philosophical judgment.
Tunnel. Trains and trolley cars are either the responsibility of public agencies or private companies that ought to be, and usually are, carefully regulated by the state with a view to ensuring public safety and avoiding loss of life. I take the following five points to be reasonable guesses about the kinds of rules and policies that would result from their responsible regulation.
One, every precaution should be taken to prevent runaway trains or trolleys. Where trains or trolleys run out of control, a heavy burden of responsibility is borne by those in charge of the rail system and by those who are responsible for regulating it. When any such example is mentioned, the first thing a right-thinking person will be reminded of is the unconscionable privatization of the British railways under the Thatcher regime, which is directly to blame for several derailments and many deaths.
Two, people should not be permitted to be on tracks where they might be endangered by runaway trains or trolleys. There would ideally be provisions for physically preventing anyone from being there. If either the five or White in the tunnel have nevertheless succeeded in disobeying the rules and entered such dangerous areas, they should be regarded as doing so entirely at their own risk.
Three, mere bystanders, too, ought to be physically prevented from getting at the switching points of a train or trolley. They have no business touching such equipment under any circumstances or for any reason. Four, mere bystanders who meddle with the switches are morally to blame for any harm they cause and should be liable to criminal prosecution.
If that meddling causes the death of anyone, whether it is the death of one or the death of five, then they should be held criminally responsible for any death or injury they may have caused. Five, if the person near the switching points is not a mere bystander, but an employee of the railway system whose job it is to deal with emergencies, then that person must be given strict rules for how to deal with cases like Tunnel. What the employee ought to do is then strictly determined by the rules.
His primary aim would be to ensure that no one will ever be injured or killed in such situations. It would be a ground for criminal prosecution of the railway’s management if the employee were given no guidance except to consult their private moral intuitions, or only very general principles like “kill as few people as possible” or “the duty not to kill takes precedence over the duty to save lives.” Such abstract moral principles might play a big role in the justification of the instructions the employee was given, but it would be a hysteron proteron to regard reflections on the employee’s situation as a good way of justifying those principles.
Now let’s apply these points to the tunnel. As mere bystanders, we have no business touching the switching points for any reason. It is relevant that in the real world, we could not be sure we know how to operate the mechanism properly.
For all we know, our attempt to save the five might result in wrecking the runaway train and killing dozens of people on board. Further, in the real world, if we see five people in one tunnel and one person in another tunnel, we have no way of knowing whether farther down the track from the one there are not also many more people we would also be killing by switching the points. In the real world, for all the bystander could know, the five people might be interlopers present on the track illegally and entirely at their own risk, perhaps with some criminal intent, while White is an employee of the railway who is there on the job, of course, with every guarantee that it will be a safe place to work.
Of course, if I were the bystander who correctly did nothing, I might very well second-guess myself in my nightmares for years afterwards, torturing myself with the thought that there might have been something I could have done to save the five. This might be a natural human reaction to the horrible scene I had witnessed, but my feelings of guilt and self-reproach, though perhaps understandable, would be irrational. Far worse would be the truly monstrous state of mind of the bystander who switched the points, killing White but saving the five, and who then thought that he had been treated unjustly when he was sentenced to prison for manslaughter.
Perhaps the five interlopers would show up at his parole hearing to argue on his behalf, but the parole board would be unwise to listen to them. I would also recommend that he not spend his time in prison reading journal articles about trolley problems. It would not be good for his moral rehabilitation.
Bridge. Many of the same points apply here as apply to tunnel, except that here the wrongdoing of the bystander who acts to save the five is obviously far graver. For here, the bystander surely must suppose that White, in walking on the bridge over the train, is walking in a place where people have a perfect right to walk, and to regard themselves as free from the risk of harm from the deeds either of railway employees or meddling bystanders.
The five, however, can be presumed to have entered a forbidden zone at their own risk. To kill White to save the five would be, in this case, not merely manslaughter, but murder. The bystander does have the consolation, as he sits in his cell contemplating the long, dismal life before him, that some of the greatest philosophical minds in the world, holding professorships of moral philosophy at prestigious universities, think it worthwhile to reflect on the moral intuitions that put him where he is.
I hope I will be forgiven for wishing I could deprive him of his one last solace. If cases like Tunnel or Bridge occurred in the real world, there would or should be a big public outcry against the railway and perhaps against the regulating agencies of the government. The question whether one died or five died would be of much less importance to the protesters than the fact that a runaway train had caused death.
If it were further to come to light that the choice of who died had been at the mercy of some bystander acting on their private moral intuitions, this would be a further ground for public outrage, and it would or should make little difference whether the bystander had chosen the death of one or the death of five. Examples like the lifeboat, the tunnel, and the bridge seem theory-driven to the extent that they appear to assume that the basic subject matter of normative ethics consists solely in reckoning up the goodness or badness of states of affairs, also taking into account the various causal relations human actions may have to those states of affairs. Trolley problems are often little more than vehicles for representing abstract principles for deciding these matters.
And I suppose the examples are natural ones to reflect on for anyone who has made that assumption. But those of us who do not accept it should not be expected to take the examples at face value. It has occurred to me that perhaps my animus against trolley problems comes from my rejection of consequentialism.
But part of my thought is that it is a bad consequence when someone has to make these kinds of decisions on these grounds. And I would think it a good feature of some moral principles that they would lead to the prevention of situations where people find themselves asking these questions and trying to answer them on these grounds. On the other hand, one reason trolley problems appeal to some people is that at times it seems to them that the only honest way to confront many social policy decisions is to see them as frank trade-offs between the deepest interests of different people.
Surely it is true of many social policy decisions that if they are made one way, then these people will be hurt, and if they are made the other way, then those other people will be hurt. In some moods, I think evil moods, or under the influence of some moral theories, bad theories, it may seem that the only honest way to look at any moral problem is simply to see it starkly in these terms. I do agree that there are indeed desperate situations in real life, in conditions of war or anarchy, pestilence, famine, or natural disaster, where it can look as if the only way to think rationally about them is simply to consider coldly and grimly the numbers of people, the amount of benefit and harm, and the kind of actions available to you that will produce the ben-benefit and harm.
(breathing)
So I’m not saying that trolley problems can’t ever help us to think about what we ought to do in these extreme situations. It’s rather that such situations really are much rarer than trolley problems may lead us to believe, and that we don’t necessarily have such clear intuitions about how to deal with these extreme situations as reflections on trolley problems may lead you to think we have. In real-life situations of this kind, in war for instance, what happens is that we have been deprived of humanizing social institutions, like those that should provide enough lifeboats, or prevent runaway trains and keep interlopers off the tracks and away from switches: institutions that make it possible to look at the world in better ways, or unnecessary to look at it as trolley problems might lead you to.
We think of war as a morally unacceptable condition, in part because in war it can sometimes seem rational to look at the world in monstrous ways. Our first task as moral beings, I think, is to view things in better ways than this, and even to change the world so far as we can to bring it about that there are other ways of viewing it rationally. But if you take situations in life that are not as barbarous as war, or even take human life as a whole and choose to view it as trolley problems lead you to, then that amounts to a voluntary choice on your part to turn that situation, or even human life as a whole, into something barbarous like war.
Some people may think these sentiments must be motivated by something like a principled belief in the sanctity of human life. I’ve thought about this a good deal, and I think this interpretation of my objections would be wrong.
For one thing, I don’t even believe in the inherent sanctity of human life. I think that a lot of humbug and pernicious superstition are involved in the popular moral use of this idea. For another, I think I would have much the same objections to trolley problems even if the examples involved lesser goods and evils than life and death.
I think my objections have more to do with something like the Kantian ideal of a realm of ends. The rough idea is that we should not think about moral problems in terms of trade-offs between competing human ends, but should try to understand the answer to every problem as one that treats all people as ends and leaves out no human ends except those that exclude themselves from the harmonious system or realm of rational ends. I don’t think this line of thinking is really opposed to the line that Parfit has presented in his lectures.
If Kant’s contractualist formula involves trying to create a realm of ends, then everyone has reason to accept the principle that we should want no one to be in the position of the bystander or the lifeboat operator, or to have to think about situations in the way these people seem forced to think about them in the trolley problems. If this principle is also more important and more basic than any principle about what a person in that situation should do, then I think that shows that the trolley problems involve misleading implications about what kind of problems and ways of thinking are most basic and essential for moral philosophy.
(mm-hmm)
Thank you.
[00:22:44] MODERATOR:
Derek, can I reply?
[00:22:46] DEREK PARFIT:
Yes. Well, I’ll just say, I agree that it makes a difference what kind of examples we’re appealing to, and I think the simplest distinction is among three kinds of imagined cases. There are some cases that involve deeply implausible assumptions.
There are others that could happen, but only rarely. And then there are cases that are very common. Now, I think the cases you’ve been discussing, none of them are in the first category, but I think that Tunnel and Bridge are pretty rare.
So is Kant’s example: should you lie to the would-be murderer? And some utilitarians answer the objection that their view implies it would be right to kill one person as a means by saying we needn’t consider that kind of case. Similarly, we could say it’s no objection to Kant’s view that you should never lie, that we needn’t consider whether it’s a would-be murderer who asks you where the victim is.
We could just say, ignore cases that don’t often occur. Now, but lifeboat, I think, is a case that occurs all the time. I think we’re in lifeboat all the time.
But there’s an assumption that I think we always bring to bear on these cases. You assume there’s no other relevant difference. You can save one person or five, but there’s no other relevant difference.
Now, perhaps you can’t be certain that the chances of saving one are about as good as the chances of saving five. And I think cases in which we have to decide how many people to save are extremely common. There are basically four kinds of answers.
Some people think you should always try and save the larger number. Others, some Kantians think, well, you certainly ought to save at least one person, but you don’t have to maximize and try and save more if you can. Some people think you don’t have any obligation to save these people’s lives, even if you fairly easily could.
That’s most people’s view. And then some people think it’s immoral even to ask the question. Now, I think when you said that you were rewarding us for a rather strenuous week.
But, uh, you have very strong intuitions about all of these examples. And I’m somewhat puzzled why-
[00:24:54] ALLEN WOOD:
Mm-hmm.
[00:24:55] DEREK PARFIT:
-you think that we shouldn’t discuss them. But we’ll come back to that.
[00:25:01] MODERATOR:
Okay. Um, we will defer further discussion of that puzzle for later. Susan.
[00:25:09] SUSAN:
Thank you. Uh, can you hear me? I don’t know if this is on or… Can you hear me?
[00:25:14] MODERATOR:
It’s supposed to be on. Move in a little closer.
[00:25:16] SUSAN:
Okay. Uh, first of all, before I get distracted and forget, since Sam started by thanking us, I would like, on behalf of us, to thank you, the Philosophy Department, the Tanner Committee, the Tanner Fund, Ellen Gobler, if she’s here, and Derek Parfit for an incredibly stimulating week. I’m sure it’s been a pleasure for all of us, certainly for me.
Uh, I’ve discovered about myself that whenever I’m giving lectures, in class or at home to my children or anywhere else, I always need to make three points. And so, for no other reason, I have three comments to offer right now. The first, you’ll be happy to hear, is very small.
It’s just this: it’s natural given the content of these lectures, and perhaps even more natural given the philosophical dispositions of my other distinguished commentators, that the discussion has been among the various versions of contractualism, Kantianism, and consequentialism. You know, which one’s right, which one’s the best, how do they relate to each other, and so on. But I think it is useful at some point to step back and remember that that doesn’t exhaust the options of moral thought, that there are excellent philosophers whose response to both Kantianism slash contractualism and consequentialism is a curse on both your houses.
Or, less emotively, to say: look, there are some assumptions made by theorists in both these camps that they have in common and that there might be some reason to question. And maybe I wasn’t holding up my end of the bargain of your inviting me here by not mentioning that before. There’s only one aspect of that that I wanted specifically to bring out.
And that is that what Kantianism and contractualism, in probably any form but certainly in the Kantian forms, and consequentialism have in common is the, maybe not assumption, but at least the operating principle, that we’re looking for one supreme moral law. That there’s supposed to be one fundamental criterion of rightness and wrongness, and that’s what we’re looking for. That certainly gets mentioned in the written version of the lectures.
You know: we’re looking for the supreme moral law; this can’t be it; maybe this is it.
And there’s at least one way in which I think that has substantive consequences, which I will bring out in a minute because it connects to the second point I want to make. The second point basically takes off from and concerns what went on in the first lecture, so I hope you all remember that. It began with Derek’s mentioning Kant’s formula of humanity, the formula of ends in themselves, which says, “Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.”
Now, Parfit began by focusing on the idea that you shouldn’t treat someone as a mere means. That aspect of the formula is what he started with, and he understood it to be equivalent to the idea that, in using someone for some purposes, you don’t treat him as a mere means if you restrict the way you’re willing to treat him by some morally significant criterion, presumably some morally significant criterion having to do with their value or their good. Now, I didn’t go back and look at the exact wording.
That was the way I remembered it, and I’m not sure whether your use of “morally significant” there actually fits with your later use of moral versus non-moral grounds here, because, as I understood it, to treat someone as not a mere means didn’t mean that you restricted yourself on grounds of what was morally wrong, but on some other grounds, since this was supposed to be a way of figuring out what was morally wrong.
However that may be, I wanted to say something about that condition. That is, that interpretation of what it is to not treat someone as a mere means. It seems to me a perfectly reasonable way to use the words to have them mean what you do, but that strikes me as a very broad and very weak criterion. So much so that I suspect that only a psychopath would actually be someone who doesn’t
(coughing)
obey the principle of not treating people as a mere means. I mean, even your ordinary gangster doesn’t kill someone unless they have some reason to. And mere inconvenience doesn’t usually constitute a reason.
Of course, there’s always the issue of whether they get caught; it would be hard to tell, perhaps. But I think there is a very strong human reaction not to treat people as mere means in that very broad sense of: well, I won’t just do anything to them in order to meet my aims as efficiently and conveniently as possible.
As Derek himself mentioned, the animal researcher doesn’t treat her animals as a mere means if there are some ways in which she won’t perform experiments because they cause certain kinds of undue pain or mutilation. The Chinese robbers didn’t treat your mother as a mere means because they only took half of her belongings. Actually, when I started thinking about it, it seemed to me that, except for used tissues and actual garbage, there’s almost nothing that I treat as a mere means.
I mean, I don’t think I treat my houseplants as a mere means. Even though I often just would like to be rid of them, I just can’t throw them out.
You know, they’re living things. I try to bring them in.
(breath)
And, um, you know, as some of you, uh, know, I've just recently moved house. I have these huge boxes and bags of stuff that I shipped from storage, and I have to go through it all and try, you know, try to sort it out and cut down. And there are things there that really are literally mere means.
I mean, they are tools. But if they're in perfectly good condition, I can't just throw them out or smash them to bits. I, you know, I have to give them to Goodwill or do something. All right?
So it seems to me that I’m not even treating mere means as mere means. And that is, I think, just a way of showing that that as a condition is really very weak. Um,
(laughter)
but, uh, Derek then goes on to take, uh, what he takes to be an interpretation of what Kant has in mind about how we treat persons, and that's his principle of rational consent: don't treat someone in a way
(clears throat)
to which they could not rationally consent. Now, that is a more significant proposal. I don't place that restriction on my houseplants, uh, on my treatment of houseplants and so on.
Um, but it’s very far from what I had at least previously understood Kant to be interested in, uh, um, or saying with the formula of ends in themselves. Um, I don’t really, uh, I wanna try to defend my interpretation of Kant, but what I had previously understood at least as Kant’s, uh, concern is one that, whether it was Kant’s concern or not is, I think, a recognizable moral concern. Um, which is, as I say, I think very far from the constraint that you don’t do things to them or with them to which they could not rationally consent.
Uh, what I had understood, um, well, it was closer to what Allen Wood said in his reply to that lecture. Uh, you know, when Kant says treat humanity as an end, uh, oddly enough, what he means by humanity is rational nature, which is very odd, because that doesn't seem to be in any way equivalent to humanity. But the idea was to treat humans qua rational beings as ends, and in particular to respect them as rational beings. At least this is what I thought, and what in any case seems to me a worthwhile moral aspiration: to respect people as themselves possessors and centers of reason, including practical reason, as beings who give the law to themselves, who make their own value judgments and can make their own choices about what to do and what to allow to be done to them.
And so, as I understood it, the idea that we should treat people as ends in themselves, at least in its fullest, um, realization, would involve allowing them, as far as possible, to express and pursue their own values, to exercise their own reason, insofar as that's compatible with other rational beings being allowed to exercise theirs and pursue their ends. Um, but that understanding of what's going on in treating people as ends in themselves would naturally urge us to restrict ourselves according to what they actually consent to, um, or at least what we have reason to think they would consent to if we were able to check with them, rather than restricting ourselves, as Parfit's condition of rational consent says, to what they could rationally consent to, in the sense merely that there is sufficient reason, so that if they did consent to it, it wouldn't be, uh, irrational.
Um, it’s interesting that– I mean, the story of the Chinese robbers can actually illustrate the difference between Parfit’s condition of rational consent and this other condition that, of actual consent, in that the robbers, uh, not only restricted themselves by saying, “I’ll only steal half of your mother’s stuff,” but she– they asked her, did, did she want the engagement ring or did you want to keep the wedding ring? Um, That is, they asked for her actual consent, her actual choice.
And so to that extent, they respected as a
[00:36:57] TIM SCANLON:
Her?
[00:36:58] SUSAN:
center of practical reason, whereas it would have been compatible with the principle, with Parfit’s principle of rational consent to look at the two rings to say, “Well, this one’s more valuable.
[00:37:08] TIM SCANLON:
Okay.
[00:37:09] SUSAN:
You know, it’s worth more money or something, so we’ll leave you that one.” Right? I mean, that would be certainly, um, something they could
[00:37:16] TIM SCANLON:
Okay.
[00:37:17] SUSAN:
say she could rationally, you know, agree to. But she might not, and the fact that they asked her actually showed that they were, at least to that degree, um, honoring the principle that I regarded as more Kantian or, in any case, um, a better thing. It was better that they asked her and let her decide which ring to keep.
Uh, to take another example, uh, let us suppose that opera is a good thing, um, that it would therefore be good for you to go to operas, to learn about operas, to cultivate an interest in operas, and so on. Um, now, if that's right, as I understand Parfit's way of thinking, then you could rationally consent to going to an opera, since it would be good for you if you did. Um, and that therefore, if I were just to force you to go to the opera, I would be meeting–
I mean, other things being equal, uh, I would be meeting the requirement of rational consent. I would be doing something to you that you could rationally consent to. Um, but I would not be respecting your rational nature in the way in which I understood that, um, if I just unilaterally decide that you have to go and watch opera whether you want to or not.
Uh, if my kids had heard, you know, had thought of this, this would have been their reply when I made them go to the opera. But, um, in any case, it was all right when they were my kids. I mean, they’re still my kids, but they’re older now.
Um, but most of us think there is something wrong with this, with just making people do things because they could rationally consent, when in fact they don't, and when we know that they wouldn't, given the choice. Uh, now, Parfit did consider the idea that we restrict ourselves, uh, not to what people could rationally consent to, but to what they actually did consent to or would actually consent to. Um, or even to the subset of things that they would rationally actually consent to.
Uh-
[00:39:33] TIM SCANLON:
Well, well, yeah,
[00:39:34] SUSAN:
that is, leave out the irrational things that they want, right? Um, he referred to that as the veto principle, or something with "veto" in the title, and points out that that's too strong.
(mm-hmm)
Um, White, for example, might not actually consent to being passed over so that you can save the five others, uh, even in the lifeboat case, much less any of the others. And of course, the others might not consent to our passing over them to save White. So it's unacceptable, obviously, to, uh, you know, accept the veto principle as a condition of wrongness; that would lead to unacceptable conclusions.
I agree that it leads to an unacceptable conclusion that, you know, there are times when you cannot, uh, you know, get somebody’s actual consent, uh, or,
(Thank you,)
or act only according to, or restrict yourself according to, what they would actually consent to. Sometimes you just have to, uh, bypass that. And therefore, I agree with Parfit that the veto principle is too strong as a criterion of wrongness, as an absolute criterion, as a condition of what's morally permissible.
Um, but this is how my first point actually connects to the second point. That is, the fact that it's too strong as an absolute requirement of moral permissibility leads Parfit to say, well, that's too strong, so let's go on and find some other, different principle that might not be too strong. Which, if you're searching exclusively for the principle that will settle all the questions, is an appropriate thing to do.
But if you’re looking to understand the moral world by saying, well, you know, if you’re just looking to figure out, well, what things are morally important considerations? What aspirations should we have? What kinds of things should we morally aim for?
What’s a wrong-making feature? Then to discard what seems to me to be a very plausible, um, moral desideratum, namely that, that you get people’s actual consent or something sufficiently close, uh, to that, uh, because it’s too strong for a final criterion is to then lose something, uh, you know, very important. And so it makes a, a very big, uh, it’s a very big loss to the moral picture, I think.
(breathing)
Um, one last remark in connection with this. It's often seemed, uh, that a difference between consequentialist thinking on the one hand, and Kantian and contractualist outlooks on the other, is that consequentialists tend to make, or, uh, don't have a problem making, unilateral decisions about how to deal with the world. Um, and it sometimes seems like a certain kind of arrogance that they're going to figure out what's best for everyone or for the world as a whole. Whereas a Kantian or contractualist perspective is gonna take more seriously, as, uh, you know, a real restriction on what they can do, what other people think, want, and consent to.
And, um, so in that respect, the difference between restricting yourself according to the principle of what they could rationally consent to, and restricting yourself, as much as possible, according to a principle of what they actually consent to, tracks this difference. If I've gone too long, I will… I, I have, uh, another point, but I won't…
I can just forget it.
[00:43:27] MODERATOR:
Another minute or two, perhaps?
[00:43:29] SUSAN:
Yeah, okay. I can do… I can make a…
The last point is very sketchy, and I could actually just make it very short. Um, it has to do with the theory of reasons that Parfit wants to supplement, uh, his contractualist formula with. Right, um, the idea is that if we're gonna restrict ourselves to what everyone could rationally consent to, then of course it's important to know what they could rationally consent to, which is to say, what it is that we think there is sufficient reason to approve of or, uh, prefer, and so on.
Um, I mean, the shortest way to make my comment is just to say it would be very nice to hear a lot more about that, because, other than saying it's value-based rather than desire-based, we haven't heard very much more of it. Um, I guess the one worry I had, that I would specifically like addressed, is this. In comments, you had said at one point that it struck you that a lot of moral philosophy seemed to think there were only two kinds of reasons, self-interested or desire-based reasons and moral reasons, and that was it, and you thought there are lots more. I certainly agree with that claim.
Um, but there was something in the way you talked that suggested that you thought there are only two kinds of reasons: self-interested reasons and impartial reasons. And that's it. And I guess the one little clue that made me nervous about this was that the one case I recall in which you felt pretty confident, uh, something might be irrational was the case of someone, uh, consenting to the idea that we save Black's leg rather than his life.
And I thought, well, why? You know, unless the only two options are doing something that's in your self-interest or doing something that's best from an impartial point of view, why would that be irrational?
[00:45:38] DEREK PARFIT:
Well, a large number of very good points here. I don't have a very good memory, and this is the first I've heard of them. I'll try and go backwards just over the main ones very quickly, and then we'll go on.
Um, you need to assume other things are relevantly equal. Um, if Black is 90 and you are a ballet dancer aged 20, I think Black could kill himself to save your leg, because he could plausibly think that what he had to lose is not very much, set against saving you from a great burden. But I think if you don't assume there are these other morally relevant differences, it would be in one way admirable if I sacrificed my life to save your leg.
But I don’t think you have a sufficient reason to do it, and I think people who tried to persuade you not to do it would be doing so for that reason. Um, with respect to the rational consent principle, I never intended it to be put forward as the supreme principle of morality. I suggested that you might have three principles.
Uh, one says, when your acts would affect only one person, then you might say he should have a veto. It’s up to him to decide what you do. I’d have a slight rider, unless his decision is very irrational.
Two, there may be some things that you should never do to people without their actual consent, even when it affects other people. Those would be the veto. But then the question is with respect to all other matters.
Now, um, I was working from the phrase that the man whom I deceive cannot possibly consent to my treatment of him. And of course, Korsgaard, O’Neill, and others take that to support the veto principle that you should never treat anyone in any way to which they don’t actually consent or they would not actually consent. You agree that’s much too strong, and I was simply trying to see whether there is some plausible principle that you could add, having given the veto to certain restricted cases and said, “Well, you have to choose it, you’re the only person affected.”
Can we do anything else in terms of predicting what people could rationally consent to? But it wasn’t meant to be a single principle. I mean, the supreme principle.
And the case of you being forced to go to the opera sounds rather like my case of rape. I don’t think you could rationally consent in advance to being forced to go to an opera if at the time you didn’t consent. So I think that’s okay.
That’s not an objection to the principle.
(laughter)
Um, uh, turning then to merely as a means. Um, I said, uh, significantly and relevantly affected because if there are certain trivial things that you wouldn’t do to someone, even if it would serve your purposes, then you’re very close to treating the person merely as a means. Uh, so it is a matter of degree.
Um, I think very often people do treat other people merely as a means, and I think that's certainly wrong. Um, but now it raises a wider question, which is: should we use simple terms and phrases in anything other than their ordinary sense? I'm assuming that when Kant claims you shouldn't treat someone merely as a means, he means that. And I think my account is the only thing that it could mean.
Now, some Kantians say, “No, no, he’s using it in a special sense.” For example, some of them say, if you see someone in great distress and you ignore them, you just walk on, you’re treating them merely as a means. Now, that’s just false.
You’re not treating them as a means at all, let alone merely as a means. The thing to say there is you’re treating them as a mere thing, as if their well-being doesn’t matter. Similarly, Korsgaard and O’Neill say that if you treat someone in some way to which they don’t actually consent, or you use any force to stop them acting in some way, you’re treating them merely as a means.
Suppose you and I are in a desperate situation from which only one of us can escape alive. I stop you from sacrificing your life to save me, so that I can sacrifice mine to save you. According to Korsgaard and O'Neill, I'm treating you merely as a means.
Well, I don’t admire G.E. Moore nearly as much as other people do, but I do like his literal-mindedness. And if we say what Kant says, for example, you should never lie.
(coughing)
But Kant doesn’t mean never by never, he means often, sort of.
[00:50:21] AUDIENCE MEMBER:
So then you shouldn’t say this.
(laughter)
[00:50:23] DEREK PARFIT:
He says we should never lie.
(laughter)
And merely as a means has a perfectly clear sense.
[00:50:27] AUDIENCE MEMBER:
That’s what happens.
[00:50:29] DEREK PARFIT:
Uh, then the map. Well, I quite agree. I agree.
Uh, I mean, I would broaden the map perhaps by saying most of us hold ordinary common sense pluralist morality. Then you might say, well, there are four kinds of systematic theory, not just two. There’s consequentialist utilitarianism.
Then there’s Kant, then there’s contractualism, and then there’s virtue ethics. Now, I’m inclined to think that those four systematic theories actually are all going up the same mountain, and we’ll find they reach the same place at the top. Uh, but they then– they may then disagree very strongly with ordinary common sense morality.
And I’m in some ways deeply suspicious of these systematic theories. And the main respect I am is this, and this cuts across the map. It’s the question of the method.
Sidgwick was an act utilitarian, but he thought the method of moral thinking is to examine your deepest moral convictions and submit them to the greatest scrutiny. Whereas Kant and contractualism have a method according to which you shouldn’t appeal to your moral intuitions, and I think that’s deeply questionable. Why shouldn’t you appeal to your moral intuitions?
And so I wasn’t intending to scorn these systematic theories. I was trying to discuss what their most striking features were, and people say that’s the wrong way to do it. I just think that would be wrong.
I, I treat that with tremendous respect.
[00:51:57] SUSAN:
Um.
[00:52:00] MODERATOR:
Thank you. Tim?
[00:52:02] TIM SCANLON:
Well, here we are back in the realm of moral theory with me, but I'll try to describe, uh, moral theory in a way that incorporates some of the pluralism that, uh, Susan and Derek have spoken out for. So, although it may not seem that way to you at first, uh, I hope it'll look more like moral theory with a human face.
[00:52:24] STAFF MEMBER:
Can you take the microphone?
[00:52:25] TIM SCANLON:
Oh, sorry. Yeah. Yeah. Okay. Is that better?
[00:52:30] TIM SCANLON:
K-can– Am I audible now? Maybe I’ll move it over here.
(microphone shuffling)
Good. Thank you. Moral theory is generally understood, I think, or at least by theorists, as, as having two sorts of aims.
Uh, one aim is to clarify the content of morality as we ordinarily understand it. And on the other hand, it aims to explain why these requirements, as we ordinarily understand them, should have the distinctive importance that they seem to have. Morality seems to identify a set of principles, requirements, or standards of behavior that are distinctively significant.
We should not disobey them or violate them trivially. So why not? What's so important about them?
So I’ll refer to these two tasks as attempts to answer the question of moral content and to answer the question of normative ground. That is why, why care about these things? Now, to answer the latter question of normative ground, we need to have some general characterization of, of the morality that we’re talking about, right?
So you have to characterize the, the subject in some general way. We have to describe the kind of claims that these various moral requirements
(clears throat)
all express, or you might say characterize what it is that they’re all about.
(clears throat)
And then we need to explain why claims of that kind, or claims about that subject matter, whatever it is, if correct, um, have the distinctive kind of importance that they claim, uh, to have for us. So, following Kant, we might call a general characterization of what morality is about, or of what kind of principles there are, uh, a statement of the fundamental principle of morality. It's the general omnibus description of what kind of claims we're making when we're, uh, making moral claims.
Now, there’s a problem here, and this is my first, uh, gesture in the direction of pluralism. Uh, I think it’s more than a gesture. There, there’s a problem here, I think, about what morality in– that’s the object of this, uh, supposed investigation, uh, should be taken to include.
Because I think the term morality is used by different people to cover really quite different things, and in fact is probably used by each of us, uh, to apply to requirements and standards which don't all have the same kind of normative ground, since they aren't all important to us for the same kind of reason. I mean, looking first at the interpersonal case: uh, some people, I think at least people who write newspaper editorials, um, think that sexual conduct is a central concern of morality. If anything has to do with sex, uh, it's a moral issue.
Others, most people who write in philosophy journals, uh, think that that's a rather peripheral question, if it really is an issue, uh, at all. And I don't think this represents, uh, on the part of these two parties, a fundamental disagreement about the same subject matter, as if they were talking about the same thing and disagreeing about what its content is.
I think, I think it’s pl- more plausible to understand this as two groups of people who are using the term morality in somewhat different ways. They have different ideas about what, what, what the nature of the importance of moral requirements is, what, what they’re, what they’re about, uh, rather than disagreeing about the truth about the same subject that they’re, that they’re talking about. And, and I think there are many other cases of, of this kind.
I won’t, I won’t go through a lot of them, but I, I think that, as you say, the term morality is used in lots, uh, lots of different ways to refer to standards that have many different kinds of, um, claims of importance. So I think that one of the things that theoretical reflection about morality can do, and that is the investigation of this first question of what is the normative ground of morality? Why is it something we should take seriously?
I think that kind of investigation can make an important contribution simply by bringing to light differences of this kind. We reflect on what we think morally, and we ask, "Why should that matter?" And it turns out that we're actually not talking about just one thing, but a lot of different things, with different kinds of significance.
Um, and of course, I don’t think that all these different things are equally worthy of a-adoption and adherence. I’m, I’m not a relativist in that sense. But still, I think that one, one important kind of clarity we should seek is, is to identify different conceptions of morality and try to investigate the different reasons that people might have or that we may think we have, um, to give one or another of these, uh, some distinctive kind of force in deciding how to live.
Now, I started off saying there were two questions that moral theory investigates. Uh, one is the question of what the normative grounds of morality, or of its various different parts and kinds, are, and the other was the question of moral content. Um, now, in attempting to answer each of these questions, we may face, uh, a problem of circularity, or a threat of triviality, in the answer we give.
In explaining why we should take some particular set of moral requirements as authoritative, we can't simply say, "Well, they're authoritative because it would be morally wrong not to take them seriously." That just repeats the same issue again at a higher level. But on the other hand, uh, an answer to the question of normative ground, of why morality is something we should take seriously, can't be entirely unrelated to the idea of morality that we're talking about.
Uh, it’s got to deliver the kind of significance, uh, that morality seems to have, whatever that kind of significance, uh, is. Second, as to the question of the content of morality, I think it’s natural to suppose that an adequate specification of a fundamental principle of morality, a way of characterizing it for purposes of asking the question why care about it, should also, I think certainly this is a theoretical ambition, should also pri-provide some kind of basis for deciding what the content of morality is. There ought to be a connection between the characterization that explains why I care about it and what we’re gonna– the conclusions we’re gonna be led to if we start taking that, uh, start taking that, um, seriously.
Now, in an extreme form, we might think that this characterization of, of, that we give of morality for purposes of asking the question, of answering the question of why we should care about it, should itself provide a basis from which we can decide all the questions of right or wrong without any further appeal to our moral, uh, intuitions about what’s right and what’s wrong. And I actually think that’s an unrealistic, uh, claim. I don’t–
This is my second kind of pluralism. I don't think that's very plausible either. Um, but one of the reasons for thinking that we're gonna have to look to our ideas about the content of morality in order to get from any principle to the conclusions, that we can't just go directly, is that the intervening steps, the considerations that we appeal to, uh, are going to have to be ones that strike us not simply as following from the fundamental principle we've stated.
They also have to be ones that seem more or less significant. So two examples which have come up in our discussion might illustrate this point. In Parfit's second lecture, he pointed out that, insofar as Kant's fundamental principle, in the various forms in which he was understanding it, is taken to determine the content of morality via the notion of a contradiction.
So, suppose we’re thinking here of the universal law form of the categorical imperative, insofar as the universal law form is supposed to determine the, the, the answers to questions of right and wrong, because we’re gonna ask whether, uh, uh, the maxim of that action would be one that could be conceived, uh, without contradiction in, in, in various ways Parfit considered. And one of the things that Parfit brought out in that, in that lecture was that although there may be some cases where this might, would, might give the right answer, it doesn’t seem to give the right answer for the right reason. The fact that your maxim might be contradictory doesn’t seem to be importantly connected with, with what would actually make it, make it wrong.
So it seems like we, we don’t want a principle, so to speak, that leads to conclusions, uh, about what’s right and wrong without going via intervening steps that have independent, it seemed to have indep– seemed to us independently to have moral significance. Similarly, picking up a point from the third lecture, the fact, if it were a fact, that it would be rational for someone in Rawls’ original position to reason as if he or she had an equal chance of being in any social position wouldn’t seem to me to show that social institutions that maximize average utility are morally justified, no matter how that utility is distributed. Rather, what it would show is that a thought experiment that simply involves people gambling, uh, is, isn’t very morally relevant.
So you, you, you don’t, you don’t, you, you don’t want a theory, again, a theoretical structure that just leads to conclusions. You want a theory that leads to conclusions through intervening steps that match what seems independently to be morally, um, morally relevant. Now, Kant’s various specifications of the fundamental principle of morality, um, have attracted enormous attention over the years and have attracted our attention in this discussion.
Um, in part because they have the merit of seeming to deal particularly well with the first of the problems I mentioned, that is, the circularity problem as it faces an explanation of the normative significance of morality, of why we should care about it. Um, he locates the normative authority of morality in a view we must take of ourselves insofar as we think of ourselves as agents at all.
He thinks these various forms of the principle are ones that we've got to accept for that reason. Um, and insofar as this is the ground that he finds, the reason why we should take, uh, morality so seriously, um, he's found a foundation for it, um, which is free of any charge of begging the question by simply repeating a moral idea, by saying, "Well, you gotta do it because it would be morally wrong," right? You gotta do it because otherwise you won't be acting.
So when you’re thinking of yourself as acting, you have to think of yourself as acting, uh, according to this, uh, formula. This is clearest perhaps in the argument Kant offers for the universal law form, for example, in the third section of the groundwork, but I think it applies as well to the various independent or seemingly independent arguments he offers for his formula of, of humanity. So on the one hand, he seems to have provided, uh, his answer to the question of the ground of, of, of these fundamental principles in a way that seems, doesn’t seem to be open to a charge that, that it’s not morally that– sorry, that it’s, that it’s morally question-begging because there’s already moral content built in.
But on the other hand, the formulae themselves, particularly the formula of humanity and the formula of the realm of ends, but also the formula of universal law insofar as it's read in a way that connects it with these other formulae, seem themselves to express ideas, ideas about what it is to respect other people and what it is to form part of a moral community with other people, which have very deep moral resonance. So these principles have this great place in the history of the subject, I think, because Kant was both giving what seemed like a non-question-begging answer to why we should care about them, and the things that he was identifying as having this non-question-begging ground were things that seemed morally relevant. They seemed to tie up with our deep moral feelings, and this seems a wonderful achievement, uh, on Kant's part.
Marrying the abstract idea of rational agency with substantively appealing moral ideas in this way was a brilliant move. Uh, unfortunately, it seems to me it doesn't work. Uh, his attempt to show that insofar as we think of ourselves as agents at all, we must accept these moral ideals as fundamental constraints, uh, on our practical reasoning is brilliant, daring, a monument of philosophical ambition and ingenuity, and entirely unconvincing.
But still, it’s, it’s a great thing because it, it, it seems to do both of these things. That is, to derive from non-moral starting points something that seems to have great moral content. Now, one response to this failure would be to cleave to the appealing ideas of community that are expressed in Kant’s formulae, and to take them as themselves offering an explanation of, of morality’s normative basis.
So this is to sail closer to the rock of circularity. Say: why should you care about morality? Well, because it's only in thinking in the terms that morality describes that you will be treating others as ends in themselves, or behaving toward them in the way, uh, that's part of a realm of ends.
And those things are just appealing on the face of it. And I think modern contractualism, in the form that I espouse, goes in that direction. It gives up a little bit by way of the worry about circularity, and tries to identify at least part of what we think of as morality as being important for us because it's a way of having the right kind of relations with other people.
Now, you can debate whether that kind of contractualism is really Kantian, having given up this fundamental aspiration of Kant’s, but we already talked about that last time. So some of what I’ve said so far might be summed up as follows, in terms of a variety of answers that we might give to the question of normative ground.
In particular, answers we might give to the question: why should we take what other people have reason to want, or what they might have reason to agree to, into account in our own decisions about what to do? I’m going to consider four possible answers to this question. First, we might have reason to give weight in our own decisions to what other people have reason to want, or what they have reason to accept, simply because things go better for these other people.
Things go better in the world when these other people’s lives go better. That is, we have reason to promote states of affairs in which their interests are better fulfilled. Their interests are, as Parfit might say, intrinsically reason-providing for us.
(clears throat)
So this you might think of as a purely teleological answer to the question of normative ground. We want to take their interests seriously because we have reason to make the world go better, and that makes the world go better. This brings up a question that emerged in yesterday’s discussion.
I had suggested in my comments after the lecture yesterday that what Parfit calls the impartial point of view was defined in his lecture merely by subtraction. That is, we define the impartial view by starting with all the reasons that a person has and removing those that are personally based: the reasons a person has only because of the way he or she is affected, or only because of the way his or her children are affected, or only because of the way his or her football team is affected, or whatever it might be. Parfit replied, as I understood him, that taking the impartial view also involved adding reasons that one wouldn’t have simply from one’s own personal point of view.
At least, I took him to be making this suggestion, and he cited here Thomas Nagel’s description of the move from a personal or subjective to an impersonal or objective point of view, where we look at things setting aside our own particular standpoint. And on that view, it seems that when we move from the personal point of view to the objective one, new reasons come into view, namely the reasons that we have in virtue of other people’s pains or other people’s aspirations, reasons that we didn’t have when we were looking at things merely from a personal point of view.
Now, perhaps I simply misunderstood Parfit on this point, but I was led to my remarks about arriving at the impartial point of view by subtraction by my reliance on his remarks that, for example, someone else’s pain, or pain in general, is not only something that the person who is in pain has reason to avoid; it’s something that others have reason to avoid or prevent if they can, as well. And I took this to be a perfectly general claim he was making about the reasons we have, not a special claim about the reasons we have if we take a certain special point of view. If we were to begin from a desire-based or a self-interest-based conception of the reasons people have, then obviously some move, such as a move to an impersonal outlook, would be required in order for us to come to have reasons to prevent other people’s pain.
But Parfit was rejecting self-interest-based and desire-based conceptions of what reasons we have, and starting off with a much broader conception. And it seems to me that Nagel’s idea of the personal point of view is a rather narrow conception of what reasons we have, and that’s why, for him, the move to the impartial point of view adds reasons. Whereas, although I’m not here interested in fighting for one side or the other of this, it seems to me that given the broader conception of reasons that Parfit started with, reasons to prevent other people’s pain are already included, so to speak, in the basic package of reasons that we get if we just sign up.
You don’t have to buy extra channels
(laughter)
That’s it. You don’t have to move to the impartial point of view, to get the impartial supplement, in order to get those reasons; they come with the package. Okay, so that was a bit of a digression, or a reference back to last time, following on my discussion of one way in which we might have reason to take into account, in our decisions about what to do, what other people have reason to want: namely, because their reasons just are also reasons for us, right?
No contractualism there. That’s just straight objectivity of reasons. So now let me consider three other answers.
We might have reason to take into account what other people have reason to want, or what they have reason to agree to, because we will be better able to pursue our own aims if we secure the cooperation of others. But no system of cooperation would be stable unless other people have a reason to accept it. And this means that any such system of cooperation must require us to take other people’s interests into account to some extent in deciding what to do.
This, crudely put, is David Gauthier’s strategy, which Parfit mentioned. It might be seen as attempting to found morality on self-interest, although I think this is a little bit of a misnomer, since all that’s assumed is that we have reason to accept what will advance our aims, and it needn’t be assumed that these aims are purely self-interested.
Okay, that’s the first quasi-contractualist reason. We need to take other people’s reasons into account because we need their cooperation, and we need it in a stable form, and we’ve got to buy them off in order to get them to go along.
Second possibility: we might have reason to be concerned with what others have reason to want, because if we don’t think in this way, we will be failing to understand correctly our own status as rational agents. If we understand what it is for us to be rational agents, and for our reasons to count for us in the way that they do, we have to see that that applies to other people as well as it does to us. This is Kant’s position as I understand it.
Third, we may be said to have reason to be concerned with the justifiability of our actions to others, because otherwise our relations with them will be based, as Rawls says in his early writings, simply on force and circumstance. They will be asked to accept the way we treat them because they have no choice other than to accept it. And we will be asking them to live with us on those terms.
This is an unattractive way to live with them, you might say. We want to live with them on better terms. We want to live with them on terms of equality or terms that, that acknowledge their, their standing.
So this is contractualism as I would understand it, and I think as Rawls describes it in his earlier articles and in A Theory of Justice. Now, any of the last three alternatives I’ve described, that is, the Gauthier version, the Kant version, and my last, Rawlsian or Scanlonian, contractualism, might be called contractualism, although the label seems to me not so clearly appropriate for Kant’s alternative, because it doesn’t start from an idea of the value of others.
Rather, it deduces that from the requirements of practical rationality. The cartographical conclusion that Parfit reached at the end of his third lecture might be put as follows, in terms of the two questions that I’ve been considering. A moral theory might be called consequentialist because of the answer that it gives to the question of normative ground.
That is, it gives the first answer: that we should take into account the interests of others because those interests count toward making the world better. Or, second, a theory might be called consequentialist because of the answer it gives to the question of moral content. Namely, it says the way we ought to decide what to do is to ask which consequences would be better.
And it might seem that a view that is consequentialist in the first of these senses, that is, one that takes promotion of the best consequences to be what morality is all about and what gives it its rationale, point, and claim on our attention, would provide the best foundation for a consequentialist account of the content of moral requirements. But in fact, and I take this to be Parfit’s cartographical claim at the end of the third lecture, a contractualist answer to the question of normative authority need not exclude a consequentialist answer to the question of moral content.
We start off with contractualism of the Kantian variety that he describes. That can perfectly well lead to consequentialism as an answer about what to do, and this is what he argues is likely to happen in that particular version of contractualism.
Now, it remains true that a teleological answer to the question of normative authority might make consequentialism about moral content more inescapable. Once you start off with the idea that promoting the best state of affairs is what morality is about, it’s hard to see how you’re ever going to get any other answer about its content.
And if you start off with a contractualist starting point about the rationale for morality, it may not necessarily lead to a consequentialist conclusion. But I think Parfit’s idea is that if it does, it provides consequentialism with a more secure and perhaps less controversial starting point. So the cartographical question had to do with the idea that views could be called contractualist or consequentialist because of their answer to the question of why care about morality, or because of their answer to the question of what the content of morality is.
And the two don’t necessarily march together. Now let me conclude here. I think I’m doing all right.
Uh, a couple minutes. All right. Let me just say quickly my other point. I have two pages.
My other, pluralist point is about the question of moral content. How, and to what degree, can a characterization of the nature of morality provide us with a basis for an answer to the question of its content? If we characterize a fundamental principle for the purpose of explaining why we care about morality, should that also tell us exactly what we ought to do?
Now, I think not. It may be reasonable to expect there to be a single fundamental principle of morality, in the sense of a single characterization of what morality is about, which explains its normative authority. But how reasonable is it to expect there to be a single principle from which all of the content of morality could be derived, without intervening appeals to moral judgment?
Utilitarianism famously offers such a principle, and some may regard this as one of its attractions. But this monistic character is, from another point of view, one that I would take, a source of its great implausibility.
You can’t get all the answers out of that one way of thinking. Now, Parfit’s version of consequentialism, because it allows a much wider range of reasons to count toward determining the best states of affairs, and these aren’t all about well-being, yields a doctrine that’s much more able to accommodate the apparent diversity of moral phenomena. It still offers a fundamental principle, but applying that principle now requires a great deal of normative judgment about the relative importance of all these diverse kinds of reasons, not just a comparative judgment of what will produce the most happiness.
So consider this example. In my comments yesterday, I suggested that in some cases the best state of affairs might be produced by following a principle that required great sacrifices by a few people for the sake of very small gains to a great many others. And I said that if one of the few would have sufficient reason to make such a sacrifice, but also sufficient reason to decline to do so, it seemed questionable whether this sacrifice is one that we could say was morally required.
But it might be replied that I was here being unfair to Parfit, or unfair to this kind of consequentialism, since I failed to take into account the diversity of considerations that this kind of consequentialism includes. In particular, the state of affairs resulting from a principle of the kind I mentioned, even if it would involve a greater sum of well-being, might be objectionable on distributive grounds. It was unfair, the objection goes: it produced a bad distribution, taking a lot from a few and spreading the benefits over many.
This reply seems to me to have force, and I think I should have taken it into account better in what I said yesterday. Distributive considerations may indeed rule out the most threatening principles of the kind I was referring to. But to decide whether a principle would fail to yield the best state of affairs because of the inequalities it would involve, even though it produced a greater sum of well-being, we would need to decide how objectionable these particular inequalities are, and we would need to balance that against the gains in aggregate well-being.
This brings us back to the question which we considered yesterday of whether the reasons we have to avoid an inequality are or are not at base moral reasons. I suggested yesterday that there are several grounds for objecting to inequality that do not seem to me to be essentially moral. For example, the disadvantage of losing one’s liberty and the costs of stigmatization.
But distributive reasons of the kind one would need to appeal to in order to resist a principle of the kind I’ve just been discussing, a principle that imposed large costs on some in order to bring small benefits to others, might not be of either of these types. It might not be loss of liberty or stigmatization, but simply inequality in benefits, that would be the objection. And it isn’t clear to me that that objection wouldn’t basically be an objection of moral character.
So this would raise the question of how much difference there would be, from the point of view of the charge of circularity, between a consequentialism that brought in distributive considerations of this kind and asked us to weigh these moral objections to inequality against other benefits, and a kind of contractualism that involved an admittedly moral idea of whether it was or wasn’t reasonable for someone to reject the demand that he or she shoulder this burden. From the point of view of underlying moral content, it didn’t seem to me that there was much to choose between the two.
[01:19:10] DEREK PARFIT:
Over here. Uh, thanks very much. Um, I…
There’s rather little I have to disagree with there. Perhaps I should say that… No, I wouldn’t say that.
First, I think you perhaps understated Kant’s account of the reason to care about morality, when you say that if we don’t do so, we’ll be failing to understand correctly our own status as rational agents. I don’t think you’d think that it’s your failure to understand your status that gives you the reason.
That just seems much too slender.
[01:19:49] TIM SCANLON:
Um, I feel-
[01:19:51] DEREK PARFIT:
Now, you may-
[01:19:52] TIM SCANLON:
If you fail to recognize it as a reason, you will be showing that you haven’t taken it on board.
[01:19:55] DEREK PARFIT:
Yes, but that isn’t why you should do it: that if you don’t act rightly, you’ll be failing to understand your own status. That just seems to me-
[01:20:04] TIM SCANLON:
No, no. So choose one.
[01:20:06] DEREK PARFIT:
Now, you made some excellent points about the impartial point of view, and I’m inclined wholly to agree with the claim that even from your personal point of view, you have reasons to care about all these things that we have reasons to care about. So I think what I might say here is this: what happens when you move to an impartial point of view is a question of the strength of these various reasons. I think on the view that Tom Nagel and I hold, and I think Sidgwick held, I have some reason to care about everyone’s well-being, but I have much stronger reason to care about my own and the well-being of those I love.
And so, as Sidgwick thought, on the extreme version of this,
(background chatter)
from a personal point of view, you have reason to do what’s best for you or for those you love, but from an impartial point of view, you have reason to do what’s best, all things considered. That’s a question of the strength of the reasons. From the impartial point of view, each person’s well-being matters equally.
Mine matters no more than anyone else’s. Now, as Sam very well argued in his first book, you can’t live entirely from the impartial point of view. You have to recognize that we must, and can justifiably, care much more about what happens to us and to those we love than to those we don’t.
So is that an acceptable amendment: that the difference between the two points of view, personal and impartial, is a matter of the strength of the reasons?
[01:21:42] TIM SCANLON:
Well, I would say that the change in strength is achieved by subtraction. That is, you subtract the personal reasons you have to care about your own well-being, and what you’re left with is their impartial strength. So I don’t see a difference between that and my point about subtraction.
[01:21:56] DEREK PARFIT:
Um, well, you subtract the strength by saying I’m to ignore the fact that it’s my well-being, that I’m the person who will die. Well, the natural way to express the fact that you’re ignoring that is to say you’re thinking of it from an impartial point of view.
[01:22:18] TIM SCANLON:
Well, I was defining the… I suggested that you define the impartial point of view by starting with all the reasons you’ve got and factoring the personal ones out. That’s what I’d offered: a definition of what you meant by the impartial point of view.
This is, I think, perhaps becoming-
[01:22:31] DEREK PARFIT:
Yeah. Okay.
[01:22:32] TIM SCANLON:
Academic in a not altogether favorable sense.
[01:22:34] DEREK PARFIT:
Okay. Now, the main questions are the questions at the end about consequentialism and contractualism. There’s a huge difference, which I alluded to but didn’t say much about, between principles that apply directly to acts and principles that apply only indirectly, because they apply to the dispositions to act in certain ways, or to the principles or rules on which we act.
Now, act consequentialism is monistic: there’s just this one principle that tells you how every act is to be assessed. Rule consequentialism isn’t monistic, and it has a structure that’s actually pretty similar to contractualism and to some forms of Kant’s view.
In each case, you ought to act on the principles whose universal acceptance everyone could rationally will, or on the principles that no one could reasonably reject, or on the principles whose general acceptance would on the whole make things go best. So there’s no difference with respect to the degree of monism there.
[01:23:46] TIM SCANLON:
And indeed, as-
[01:23:47] DEREK PARFIT:
As you said and I said, a lot of these can come together. You could plausibly think that you ought to act on the principles whose general acceptance would make things go best, because those are the principles that everyone could rationally will to be universal, and those are the principles that no one could reasonably reject. So those three views are, as far as I can see, exactly the same with respect to the monistic-pluralistic question.
They have a single higher-level criterion for which principles are the relevant ones, but then the principles can be as complicated as common-sense morality.
[01:24:25] DEREK PARFIT:
Now, you said some other things that we could discuss; perhaps I’ll just discuss one, but then I think we might open it more widely. When you talk about the distributive reasons, I think there are many things going on, because at one point you said one might resist a principle of the kind I’ve been considering, one that imposed large costs on some in order to bring very small gains to a number of others. Well, I think the question whether very trivial benefits and burdens can outweigh major ones on a few people isn’t itself a distributive question, and there’s an independent reason for thinking that some gains are too trivial to count when other things are at stake.
Uh, but that’s a nitpicky point of detail. So, Sam, uh-
[01:25:15] SAMUEL SCHEFFLER:
Okay. Yeah. Okay.
[01:25:19] DEREK PARFIT:
Um- Oh, no, I didn’t want to end with… just one question to you.
Um, I’d like to ask you whether you thought that the main argument that I end lecture three with, and that you mainly discussed, that’s the argument for the view that Kant’s contractualist formula leads to rule consequentialism. Whether that argument is, first, valid, and then whether the second and third premises are true. The first premise is Kant’s contractualist formula.
My claim is that it leads to rule consequentialism.
[01:25:57] DEREK PARFIT:
Uh, and if you thought it’s valid and those two premises are true, so that it’s relevantly sound, um, then I’d be pleased
(laughter)
.
[01:26:04] TIM SCANLON:
But I conceded, and I agreed, that what I thought your argument showed was not that it led to rule consequentialism, but that it didn’t exclude it. That’s one of the things that it could lead to, since it’s permissive. Yeah.
I was reading it as a permissive principle. So yes. Yeah.
In that sense, I agree, yes.
[01:26:22] DEREK PARFIT:
To save five other people. Now, that’s wholly compatible with thinking that you have many other personal reasons: reasons not to sacrifice your life but to save your child, reasons to help the projects you’re committed to, and so on. So I think at that point, the fact that I was taking simple cases, my life or the lives of five, didn’t affect the argument.
Now, with respect to premise B, in a way I’d quite like to turn to that, because it bears on Tim’s point, that you think the argument at least permits the rule-consequentialist conclusion rather than requiring it. Firstly, B may look like the claim that we could always rationally choose whatever we would have most reason to choose from an impartial point of view. And what I meant to be saying is that I’m inclined to think that’s so, but it’s not required by that argument, because that premise is about what someone could rationally choose in the thought experiment to which this contractualist formula appeals.
She’s supposing that she can choose which principles everyone accepts, and therefore which principles most people act on. And in that context, the point of my referendum examples is that as long as she thinks she has some reason to care about the well-being of each other person, even if it’s only one millionth as much as her reason to care about her own well-being, it’s fairly plausible to think that she ought to choose the principles that would, on the whole, make things go best.
And that’s because it’s like the case in which my giving up ten thousand, which is only one tenth of my income, let’s say, could bring it about that a hundred million in benefits went to the poorest people in the world. So that’s the point. In that argument, if you imagine that you’re affecting the whole of human history, and you compare the principles that would make things go best with principles that are significantly different because they’d make things go quite a lot worse, well then, my claim just was, even if your reasons to care about the well-being of others were extremely small, the scale is going to-
[01:28:57] TIM SCANLON:
But how can you feel- I mean, why should we feel confident of that conclusion without attending at all to the variety of other kinds of reasons that you’re officially committed to allowing on the wide view?
[01:29:12] DEREK PARFIT:
Oh, I can include, I can include all that.
[01:29:13] TIM SCANLON:
That is, suppose her life is bound up with the life of a certain community or tradition that makes sense for her of her aspirations and so on, and that principles that would make things go best in your special sense wouldn’t make things go well for that community or that tradition, say. Is it unimaginable that there could be someone for whom it wouldn’t actually be rational to agree to those principles, the ones that are best in that very special sense? Or at any rate, don’t we need to think a little bit about the force of the actual other reasons that might be in play?
Do you want to get it all out of these questions about the size of contributions and so on?
[01:29:59] DEREK PARFIT:
Well, look, I agree that we do. But wait, I’m making two points. One is, and I think this would be quite widely accepted by non-academics, but perhaps I’m wrong. I mean, it’s probably correct to say that a quarter of non-academics are rational egoists.
[01:30:22] TIM SCANLON:
Right.
[01:30:23] DEREK PARFIT:
Um, but I think many non-
[01:30:25] TIM SCANLON:
No, no, academics.
[01:30:26] DEREK PARFIT:
-non-academics would think that you could rationally sacrifice your life to save the lives of five others. You have sufficient reason to do that. Now, that is an enormous sacrifice. And if they think you could rationally give up your life to save five other people,
[01:30:47] TIM SCANLON:
Okay.
[01:30:48] DEREK PARFIT:
And what we’re now asking is whether you could rationally choose principles that would be somewhat worse for you and those you love, perhaps. But that isn’t to save five other people. It’s to make the whole history of mankind go enormously better.
[01:31:04] TIM SCANLON:
Yeah, but you’re putting it all in terms of what would be best for you and what would be a sacrifice for you. I mean, maybe what you’re doing—
[01:31:09] DEREK PARFIT:
No, no, no. I’m including you and your children and those you love.
[01:31:12] TIM SCANLON:
In a certain way you are. You’re including their welfare. But I mean, suppose that as I see it—
[01:31:16] DEREK PARFIT:
Mm-hmm.
[01:31:17] TIM SCANLON:
Um, what I would be doing would be consigning my community to a dismal future, or acquiescing in its dying out, or in the fading away of a tradition with which my life is deeply bound up. So I might say: I’d be perfectly happy to sacrifice my life to save millions of people, but it doesn’t follow that I’m willing to let my tradition or my community die out for the sake of what’s best in this very special sense that you have defined. Maybe that’s right, and maybe that’s wrong.
But you somehow want to get to the conclusion without even looking at any of these other reasons.
[01:31:50] DEREK PARFIT:
No, no, no, no. Look, I’m prepared to look at it. But if somebody says, my reason to care that my community continues is actually much greater than my reason to care about my own life and the lives of my children and those I love, then I think that person is very likely to be making a huge mistake. Um, there were some fellows of male-only Oxford colleges who cared enormously about preserving male-only colleges.
Um, and one of them made this eloquent speech: you know, we go to great lengths to preserve Victorian architecture, even if we think that it’s ugly, because we think it’s important to preserve these distinctive units. So why not male-only colleges? Well, I think that has some weight, but I don’t think it has more weight than the lives of himself, his children, and those he loves.
Um, so I’m not meaning to exclude those, but I’d be surprised if bringing in the survival of my community is going to make enough difference, unless you’re introducing moral beliefs of a different kind, something like that. Clearly, if my community is the monastic community or the order which is the one-
[01:33:05] TIM SCANLON:
You’d just be bringing in other values. I mean, it’s supposed to be a value-based view. Yeah. And if the view is that it would undermine values that I care deeply about-
[01:33:20] DEREK PARFIT:
Yeah, yeah. I mean, you might get an example of someone who cares greatly about the natural wilderness, something like that. Now, you might say: it would make the whole history of humanity go enormously better, but the natural wildernesses would be interfered with, and I think that outweighs it.
I don’t want to regard it with, um-
[01:33:45] TIM SCANLON:
The question is whether someone’s life couldn’t be devoted to a cause or a tradition or a value in such a way that for them it wouldn’t be rational to consent to the principles that would make things go best in your sense.
[01:33:58] DEREK PARFIT:
Well, I think you’re there appealing to a version of the desire-based view. You’re appealing to the strength of that person’s commitment to this project.
[01:34:09] TIM SCANLON:
Why does it have to be to the strength of their desire as opposed to the role of the value in structuring their sense of the, of their own life and its significance? I don’t see why it has to be strength of desire. I mean, if you really take the value-based view seriously, then it seems to me you really have to look at the role that these different values can play in people’s lives
[01:34:28] DEREK PARFIT:
Right, but, but-
[01:34:29] TIM SCANLON:
and the way they can give rise to reasons.
[01:34:30] DEREK PARFIT:
Right. But the point is, it’s got to be a value that they can plausibly give greater weight to than their own death at the age of twenty. I mean, if a twenty-year-old could rationally give up her life to save five people, let alone a million people, then how is this project that gives meaning to this person’s life going to outweigh that?
[01:34:55] TIM SCANLON:
Well, some twenty-year-olds may. I mean, well, okay.
[01:34:59] DEREK PARFIT:
Well, I don’t think there are many twenty-year-olds who don’t have sufficient reason, but okay.
[01:35:05] TIM SCANLON:
Well, that’s somebody else. Yeah. I have a question about, uh, a-about picking up on this pluralism stuff, and maybe it’s just a request to hear more about what you think is at stake in, um, in engaging with questions about the normative structure
(cough)
of rightness and wrongness. Yeah. But you’ve got the wide value-based theory of reasons, which I’m sympathetic to in some sense, and which involves lots of what one in an earlier idiom might have referred to as fine-grained intuitionistic judgments about a diverse range of values, and hard-to-quantify attempts to weigh these different values and think about what would be appropriate responses to them, and all of that.
A lot of which subsumes, or coincides in various ways with, some of the traditional territory of moral reflection. One natural thought is that if you’re prepared to go so far, why not just be pluralistic, as it were, all the way down, and say: there’s the tremendously diverse landscape of reasons, and sometimes we talk about things as being right or wrong, or morally right or wrong, but that’s just a summary way of talking about the balance of reasons and the actions that would be appropriate responses to this incredibly multifarious, interesting, ramified landscape of reasons. And, as Tim was saying, classifying responses as moral responses isn’t something that forms a natural kind anyway, whether as a description of a set of human responses or, for that matter, of a set of reasons.
So maybe, in the broadest sense, questions of moral rightness or wrongness are questions of how appropriately to respond to the complicated landscape of reasons in situations where, say, human well-being is at stake. In other cases, it might just be a question of how appropriately to respond to values. But if you were to think about it that way, there would be no particular importance attached to giving a theoretical account of some subset of reasons that are the reasons constituted by moral rightness or wrongness itself.
These considerations would be, as it were, transparent from the standpoint of practical reflection, and the real reasons would be the concrete
(coughs)
considerations that you would be correctly responding to when properly weighing the diverse range of considerations. Now, one response to that might be that, as Tim was saying, you need to give a unified account of the morality of right and wrong in some sense in order to make sense of the claims of morality on us, as a set of reasons that we ought to respond to. Yeah.
It wasn’t clear to me that that was an important concern of yours, at least in these lectures. It seemed to me in some sense that the reason-giving considerations were all being provided by the background wide value-based theory. But if you do think it’s important to give a unified account of the normative structure of morality, in part to illuminate the normative significance of a certain subset of reasons, I’d like to hear more about how you think that story would go.
[01:38:31] DEREK PARFIT:
Well, that really does ask the central questions of this entire branch of philosophy. Uh, I am inclined to think that it would be a very bad mistake to replace our thinking about right and wrong with thinking about what we have most reason to do, all things considered.
I think right and wrong are highly distinctive and separate and should be looked at on their own terms, although in the end we need to return to the question of what we have most reason to do, all things considered. Now, I think there’s been great oversimplification with respect to the question about wrongness.
I mean, to put it in its very simplest form, I think the position is this. Some people believe that the concept of moral wrongness, like what Tim and I believe about the concept of a reason, cannot be explained in other terms. You just mustn’t do it.
[01:39:35] TIM SCANLON:
Okay.
[01:39:37] DEREK PARFIT:
There are also various senses we can give to the word wrong, which give us distinctive ways of understanding morality. One that I think is very attractive, which I take from Scanlon’s book, is this: an act is wrong in what I call the moral reason-involving sense if it is, and this is quoting from Scanlon, an act that violates standards of conduct that we all have strong reason to accept and follow, an act that gives the agent reasons to feel guilty and gives others reasons for indignation and resentment. Now, there’s also a justifiabilist sense.
An act is wrong if it couldn’t be justified to others. There are many reason-involving, analyzable senses of wrong, and I think, actually, when you look at people, they’re not talking about what’s wrong in the same sense. That doesn’t mean that they can’t be disagreeing, but it means that when you look at the disagreement, you have to take into account the different senses in which they’re using the word.
Um, so I mean, I’ve also had the thought, and I was very struck by this in Scanlon’s original article, that you can think that the well-being of others in a way matters as much as yours, and that you have reason, from your point of view, to relieve the suffering of others. But the thought that it would be wrong not to seems sufficiently distinct from the thought that, from an impartial point of view, you have reason to do it.
Um, and I think Scanlon’s quite right to think that that is a distinctive thought, and one of the most compelling ways of bringing out what’s distinctive about it is by appealing to justifiability to others, which is closely connected to reasons to feel guilty and reasons for others to feel indignation and resentment. These ideas are all rather similar, and I certainly think it’s a great mistake to try and ram all these thoughts into one package: what do I have most reason to do, all things considered?
On the other hand, I do think that that’s the last question. The people who say no, and Kant would have said this, would have gone on to say that the last answer is: you mustn’t do it.
Okay, it’s just wrong. You mustn’t do it. Now, the question is, does the fact that I mustn’t do it give me a reason not to do it? If I said, oh no, it doesn’t give you any reason not to do it, it’s just that you mustn’t do it, well, that would be a disaster. Um, now there’s a similar question.
It does give me a reason not to do it, but is it what I have most reason to do? And suppose someone says, well, no, actually, you have most reason to act wrongly. I think that’s the ultimate question, but I don’t think we should bypass moral thinking to get to the question about what I have most reason to do.
I think we should let morality be fully displayed, work out the implications, and then return to the ultimate question.
[01:42:37] SPEAKER 1:
Very well, but that still leaves somewhat obscure to me your particular enterprise here, because you display the ways in which Kant’s various formulations don’t capture what’s right and what’s wrong. And you have the rule consequentialism. What’s obscure is to what extent you are actually trying to do the thing that Tim was talking about: establishing the content of morality, finding something that will tell you what’s right and what’s wrong. Presumably that’s the role of your rule consequentialism.
So why, and this connects also with Jay’s question, why don’t we see right through those apparently moral considerations that you say we should keep alive, to their rule-consequentialist equivalents, which, as it seems to me, your enterprise is supposed to be establishing? Or is that not your enterprise?
[01:43:51] DEREK PARFIT:
Well, no, my enterprise was in a way more theoretical. I mean, I said, on my picture of the moral map, you have common-sense pluralistic morality, and then you have these systematic theories. Now, I’ve just been struck by the fact that, as far as I can see, there’s a kind of strange mismatch, because Sidgwick, the act utilitarian, thought about morality in the way that best supports common-sense pluralist morality, deontological restrictions, you know.
It’s just: you mustn’t kill someone as a means. Then we have Kant’s principles and Kantian contractualism, and I was struck by the fact that, as far as I can see, that most strongly supports consequentialism of a rule-consequentialist kind. I wasn’t, as I said at the end, arguing for rule consequentialism.
Um, I was trying to see where I thought these different methods of thinking about morality lead. So in that sense, it was cartographical or theoretical. Um, does that leave you with a puzzle?
[01:44:58] TIM SCANLON:
Yes. Where are you on the map? That is, where now do you put yourself on the map as a result of this enterprise? That’s what’s obscure to me. Now, you’re saying you’re not on it; you’re just holding up the map for the rest of us.
[01:45:17] DEREK PARFIT:
Well, when you say, where am I on it? I can use the word wrong in several different senses, and I’ll tell you what I think to be wrong in this sense or that sense and so on. And then if you ask me questions about what I think I have most reason to do, I can start giving you answers to that.
But I don’t think I should come out and say, “Right, what I am is an ideal rule consequentialist of the following kind.” I mean, I-
[01:45:43] TIM SCANLON:
Uh-huh.
[01:45:44] DEREK PARFIT:
I think that’s-
[01:45:44] TIM SCANLON:
Why not?
[01:45:47] DEREK PARFIT:
Because I think the truth is more complicated than that. I mean, my position is somewhat like that of a physicist who knows that there are various competing theories about how you explain some phenomenon. He alters his view from time to time about the plausibility of this one and how well it’s working out.
But, um, I think that’s how we ought to regard these things. Now, that may seem a bit detached and clinical, but it’s, as it were, the most striking difference between Sidgwick and all of his predecessors. For the first time, he was viewing it in what many people would regard as this academic, dry way, and that’s part of the reason why it suddenly became very boring.
Whereas Kant is gripping, okay?
[01:46:31] AUDIENCE MEMBER:
Oh,
[01:46:32] DEREK PARFIT:
but I think this subject is in very early days, and I think we need to do a great deal of just clearing up what the views are, what the concepts are, how they support one another, how they’re related to one another. And to say, “Oh, but where do you think we ought to end up?
What do you think the ultimate truth is?” It’s too soon. Too soon for me.
[01:46:55] TIM SCANLON:
But you said, uh, I’ve heard you say a number of different things that might count as answers to Barry. One is that it’s too soon to say. Another is that you think the truth is more complicated, in ways that I took it-
[01:47:10] DEREK PARFIT:
Yeah.
[01:47:11] TIM SCANLON:
You could in principle articulate. And the third was that you had some reservations about systematic theories at all, and some attraction to sort of-
[01:47:19] DEREK PARFIT:
Well– common-sense pluralism.
[01:47:20] TIM SCANLON:
And I wonder-
[01:47:21] DEREK PARFIT:
Yeah.
[01:47:22] TIM SCANLON:
Right, right. which of those three things lies behind your reticence?
[01:47:25] DEREK PARFIT:
Well, here’s one reason. I’ve said there’s this big distinction between two-level theories, like contractualism and rule consequentialism, and single-principle theories. But in fact, that’s a huge oversimplification, because there are enormous differences within those categories.
I mean, in the case of rule consequentialism, there’s an enormous difference between the version that appeals to ideal rules and the version that appeals to how you should assess actual rules. We can’t judge whether a theory of this kind is going to work until we’ve looked at different versions of it, because there are very strong objections to this version and that version; we need to look and see. So I just think it’s hasty to say. I mean, I don’t know enough about the ways in which these different views can best be developed for me to be confident in advance which I should end up with. Now, with respect to the systematic theories, here I think the position is this.
Um, the question is whether we ought to think about morality and try to answer moral questions in a way that excludes as irrelevant our moral convictions, our moral intuitions. And what’s striking about Kant’s various formulas and all the contractualist
(clears throat)
theories known to me is that they take that form. Uh, in deciding what you ought to do, your moral beliefs never come in, okay? Now, most contractualists will allow that if the implications of this contractualist theory for what you ought to do conflict too strongly with your moral beliefs, then it’s reasonable for you to reject contractualism.
But in applying the theory, you don’t appeal to your moral beliefs. And there’s a basic question: why, in our moral thinking, should we not appeal to our moral beliefs? And one defense would be the skeptical defense.
Um, Scanlon said in his first article that contractualism rests on a limited form of skepticism. Well, some contractualists are more outspoken. Gauthier rejects all moral intuitions as just irrelevant because he thinks they’re all mistaken.
There’s nothing for them to be about. People believe there are independent moral truths, but there aren’t. The whole thing is a mistake.
And then, well, can we, out of the ruins of traditional moral thinking, erect a kind of substitute by appealing to means-end instrumental rationality? He thinks we can. But if a contractualist accepts, to put it in Tim’s terms, that there are conceptions of what’s right and wrong which aren’t themselves contractualist, which don’t appeal to the idea of what no one could reasonably reject, and if there are independent truths about what’s right and wrong in those senses, then we need to be told why, in our moral thinking, we should bracket all of that.
And that’s my main, simplest question. Unless we think our moral intuitions are deeply untrustworthy, why shouldn’t we appeal to them in our moral thinking? And that’s what many of the systematic theories tell us not to do.
[01:50:50] AUDIENCE MEMBER:
Uh, Tim, did you wanna-
[01:50:52] TIM SCANLON:
I had two thoughts about that. One, just in relation to Barry’s question. Perhaps your position in relation to Barry’s question might be put as follows. Barry, I took you to be asking, “Derek, what do you think really is right and wrong?”
Yeah. I mean, which things are right and wrong according to you? What view do you take on that?
[01:51:11] DEREK PARFIT:
Yeah.
[01:51:11] TIM SCANLON:
And your response was: “I think that right and wrong are used in a number of different senses, which can be clarified and so on, but they’re distinct, and the answer to what’s right and wrong may depend on which of those one is using.” I would put it, I think, not quite the same way.
That you don’t think there’s a single answer to the question of which one of those is the correct account of right and wrong, because they’re talking about, to some degree, different things. Although you do think there is the question: which one of those, in which circumstances, do I have most reason to take seriously? Yeah.
And, and that’s not a question about what’s the right answer about right and wrong. It’s rather a question about what I have most reason to do. Is that, is that a fair description of your point?
[01:51:55] DEREK PARFIT:
It’s not just that. I mean, I think there are different senses of wrong. Some of them I think are not worth worrying about, but there are several that I think give you something plausible.
And then, if you’re using right and wrong in these different senses, there are different kinds of theory, and there are very different ways of developing these theories. And I do think that in the end we’ve got to try and decide which are the most important ways in which an act can be wrong, right? How strong are the reasons that are given to us by that?
And within each of these, we have to decide: should we go for an ideal version, or should we go for something that brings in relativity to the community? I really do think it’s extremely early days, and I’m struck by the fact that, I mean, I once used to be interested in chess, and there are some questions in chess, like the Sicilian variation; there’s a three-volume work on the Sicilian variation.
Far more attention has been given to that than to some of the most fundamental questions about normativity, reason, and morality. And I don’t think I can predict which way it’s going to go. But I think a lot of people think, “Oh, goodness, people have been thinking about morality for three thousand years, so surely everything that there is to say has been said and thought.”
Absolutely not.
[01:53:18] MODERATOR:
Um.
[01:53:23] AUDIENCE MEMBER:
Let me ask this question.
[01:53:24] MODERATOR:
John?
[01:53:26] JOHN:
Yeah, um, Susan Wolf asked you the question whether you thought all reasons were either, uh, impartial or personal, and you talked about her example, but I didn’t quite hear what your answer to that question was, as in yes or no.
[01:53:39] DEREK PARFIT:
Uh, right. Well, I just did.
[01:53:41] JOHN:
I want to follow this up, but I don’t—
[01:53:42] TIM SCANLON:
Oh, sorry. Why, why don’t you—
[01:53:44] JOHN:
Yeah. Well— Well, was it yes or no for the former? I mean, was the answer yes?
[01:53:49] DEREK PARFIT:
What is the alternative? There are two related questions. There’s the distinction between agent-relative reasons and agent-neutral impersonal reasons.
Right. And then there’s the distinction between the personal point of view and the impartial point of view, and those are different. They’re both complicated, and one of the complexities is that some moral reasons, which I would want to class as impersonal, are nonetheless agent-relative.
I mean, they don’t say, “I should minimize the incidence of deceiving or stealing”; they say, “I mustn’t do it, I shouldn’t do it.” That’s an agent-relative restriction, but it’s impersonal in some of my senses.
So, um, they don’t fall neatly into either of the two categories. There are clear cases, but then there are much more complicated ones which are much harder to classify.
[01:54:46] TIM SCANLON:
Yeah. I mean, the kind of cases I was wondering what you can say about were cases where, suppose, I’m considering my department.
I’m not picking myself out as special in that group, yet I’m picking the group out by its being my department. Impartial or personal: it doesn’t really seem that that classification is apt for cases like that.
[01:55:14] DEREK PARFIT:
Well, it would become an impersonal but agent-relative reason if you thought that everyone had an obligation to have that special relation with those with whom they work, their institution, and so on. But if it’s true of you that you care greatly about these things, not your own well-being but this institution to which you belong, and you didn’t think that that was a requirement on anyone in such institutions, then it wouldn’t be in that way impersonal.
Um, that’s only one way of drawing that distinction. But I think part of the reason why Sidgwick despaired, thought that the cosmos of duty was reduced to a chaos and so on, is that his conception of the personal and impersonal was much too simple. If all you’ve got is what’s best for me and what’s best for everyone impartially considered, it can seem very difficult to bring them together.
Once you throw in a much wider range of reasons that we have, you may be able to, sort of, break the ice. Sidgwick, I think, just got into a rut which he couldn’t get out of.
[01:56:38] TIM SCANLON:
Alan, Susan, do you wanna say anything at this point, before we start to run out of time, either in response to what Derek said to you or, um– no?
[01:56:47] ALAN:
No, I might wanna make a remark about a couple of things that he said. I’m not sure quite what he meant by saying that the lifeboat case is extremely common. I mean, I think it is extremely common, and I said this, that there are decisions where if you do it one way, you will hurt these people, and if you do it the other way, you’ll hurt these other people.
Those are very common. But that there are cases in which you have to think about it in those terms, or really ought to think about it in the terms that you do in the lifeboat example, I’m not so sure those are common.
[01:57:30] DEREK PARFIT:
Well, if we think that it makes an enormous difference whether the people whose lives you could save are right in front of you, or whether or not other people could also save their lives, then there may be reason for thinking that the fact that each of us could easily save many people’s lives doesn’t make this like the lifeboat case, ’cause in the lifeboat case they’re on rocks which are within sight, or something like that. So there would be ways of saying that this is not a good model for the wider picture. On the other hand, you were talking about war, and so on.
It’s extremely common for people running a health service, whether it be the medical people or the Minister of Health and the government, to have to make decisions about whether it matters how many lives they save and what their relative importance is. So those are the two senses in which I thought that it was very common.
Um, but I agree, Tunnel and Bridge are not very common. And I wondered what you think about the claim that we can just ignore those cases: that it isn’t a good objection to utilitarianism that it might tell you to kill one person as a means, because that case is so unusual; and similarly, that it isn’t a good objection to Kant’s view about lying that it would imply you shouldn’t lie even to a would-be murderer, ’cause you can say, “Well, you know, that’s one of those unusual cases.”
[01:59:10] TIM SCANLON:
That’s the direction I wanted to take it, though. That is, I don’t know that we should trust our intuitions about these cases. It’s not just that they’re uncommon, but that they’re artificial.
[01:59:24] DEREK PARFIT:
Well, you had, you had very strong intuitions about all of them.
[01:59:28] TIM SCANLON:
Well, uh, that was the other point I wanted to address. I have strong intuitions when I add a bunch of assumptions that I don’t think you’re supposed to add. Now, maybe you are supposed to add them, but-
[01:59:37] DEREK PARFIT:
Well, let’s take a hypothetical case. A person says: there are people on these two rocks, and some of them are Black, others are white. Blacks on this rock, whites on that.
Now, I quite agree with you. We should say, well, the fact that some are Black, some are white, that’s irrelevant. All I know is that there are some people on these rocks and some on others.
Some people would say, “That’s all I need to know. I should give everyone an equal chance.” Other people would say, “Well, what are the numbers?”
And that’s a straightforward question. Does it matter how many people you can save, or should you give everyone an equal chance? Now, as I say, in interpreting these cases, you’ve got to assume that there are no other relevant differences.
If you can save the lives of, say, five ninety-year-olds or one twenty-year-old, perhaps you should save the twenty-year-old. So you just have to assume, whenever people produce simply described cases, that there are no other morally relevant differences.
[02:00:34] TIM SCANLON:
Yeah. The problem– See, then what I would say is that I’d no longer have any clear intuition. Or I may have intuitions, but I don’t trust them.
[02:00:43] DEREK PARFIT:
I wouldn’t say that. So in other words, your view is that–
[02:00:45] TIM SCANLON:
I think they’d be bad data for moral philosophy.
[02:00:47] DEREK PARFIT:
Okay. But then, I mean, take a case in which you could either save the lives of five twenty-year-olds or one twenty-year-old, and the chances of saving the five are the same as the chances of saving the one, and there’s no hope of saving all six. I mean, I think you could easily know there’s no hope of saving all six.
Um, then I think you might think, “Well, I should save five rather than one.” Or you might think, “I should toss the coin.” But I don’t think you’re missing any relevant information.
That’s a fully described case. I mean, the-
[02:01:22] TIM SCANLON:
No, I think you are, because I think you oughtn’t to make decisions on that kind of basis, and moreover, if human life goes as we should want it to, you wouldn’t have to. That’s the line I want to take.
[02:01:40] DEREK PARFIT:
Well, that’s like saying you should ignore the question whether you should resist aggression by force, because if human life goes as we should want it to, nobody will commit aggression. I mean, we have to respond to the actual world. And what’s wrong with trying to decide whether you ought to save the one or the five?
[02:02:05] TIM SCANLON:
Well, there may be a real disagreement here about how necessary it is to the human condition that we think about things in the way that trolley problems lead us to. And you may think that it is necessary, and I may doubt that.
[02:02:22] DEREK PARFIT:
Well, Kant thought that it was necessary.
[02:02:24] TIM SCANLON:
Well, he may have.
[02:02:26] SUSAN WOLF:
Um.
[02:02:28] DEREK PARFIT:
Well, I mean, then you’re saying that there are many cases in which we just needn’t ask what we ought to do?
[02:02:38] TIM SCANLON:
No. There are many examples where, whether we have strong intuitions or not, we shouldn’t necessarily trust those intuitions as good data for morality. That’s the kind of conclusion I would draw.
[02:02:55] DEREK PARFIT:
Okay. Well then, I think you should withdraw some of the firm intuitions that you have here. Yeah. Because, you know, you very firmly said, um, that you shouldn’t switch the trolley from the five to the one.
[02:03:12] TIM SCANLON:
Well, I was making a bunch of assumptions. I was assuming, for instance, that if the trolleys were well-regulated, that would be an illegal act.
[02:03:20] DEREK PARFIT:
Well, I, I must say, what you said there-
[02:03:22] TIM SCANLON:
And, and I, you know, I made a bunch of other… Now, so if you don’t make any of those assumptions, then-
[02:03:26] DEREK PARFIT:
Yes, but-
[02:03:27] TIM SCANLON:
Okay. Then I don’t have the strong intuition.
[02:03:28] DEREK PARFIT:
Yeah, well, you did. You said, “You have no business touching such equipment for any reason, and if you’re an employee, you should strictly follow the rules.” Now, I think that is what produced the difference between Germany and Italy in the Second World War.
Many Germans accepted that kind of view: we should follow the rules, and we shouldn’t interfere for any reason. And as Eichmann, you know, famously said to Straus, “I’ve read the second Critique. I know about the categorical imperative.” No, I’m not claiming that Kant led- But this view, that you have no business interfering for any reason and you should follow the rules, I think that’s an extremely dangerous view for people to hold.
And Italians-
[02:04:15] TIM SCANLON:
Oh, I give you-
[02:04:16] DEREK PARFIT:
Their humanity broke through, and they behaved much better.
[02:04:18] TIM SCANLON:
You wouldn’t know that you wouldn’t wreck the train, or that there weren’t people farther down on the track. If you remove a lot of those assumptions-
[02:04:25] DEREK PARFIT:
Well, but, well, but that-
[02:04:26] TIM SCANLON:
You know, you can doctor these cases so that what I said isn’t any longer reasonable. And then I don’t think it’s reasonable either.
[02:04:34] DEREK PARFIT:
Well, but that’s like, I mean, what you said about that case is like what Kant said when, you know, the murderer asks where his victim is: for all you know, if you tell him where the victim is, that may save the victim’s life.
(sighs)
Well… We have to go on the balance of probabilities, and very often, although you’re not certain, you will know that it’s much more likely that this will save them or that this will lead to their death. Um.
[02:05:06] TIM SCANLON:
Susan, did you want to weigh in maybe?
[02:05:09] SUSAN WOLF:
Uh, well, I think rather than reintroducing another subject, I should stop. But I would make one comment about the, uh, value of trolley problems and so on.
(laughter)
I, um, even though I don’t go in for doing moral philosophy that way, it’s actually my impression that almost anything that moral philosophers have created happens in the world in some form. So I was actually surprised that Derek conceded so much. It’s not only Lifeboat; it seems to me they all happen.
And if I recall, the trolley problems were introduced as a way of dealing with abortion issues. I mean, real live issues about what doctors may and may not do. And so I guess I think, though it’s very healthy to recognize the ways in which it’s, um, you know,
[02:06:07] DEREK PARFIT:
Well, it can be-
[02:06:08] SUSAN WOLF:
even if it gets highly fictionalized, it’s also not so unconnected with real problems, and it may be a lack of imagination to think that it’s so stuck in that-
[02:06:20] TIM SCANLON:
Well, Bernard Williams, who’s not particularly a friend of transportation ethics, made the point in his “Critique of Utilitarianism” that one of the reasons for resorting to hypothetical examples is that they’re an important resource in thinking about actual cases. Uncommon cases actually occur all the time, and one of the things we do when we confront them is to try to think what we’d say if the cases were a little different in a certain relevant respect. So it seems that part of the danger of the line of criticism that you’re suggesting is that it sort of cuts off conversations that we need to be able to engage in if we’re going to say anything about the cases we do encounter.
[02:06:59] ALAN:
Well, I snuck in remarks that said essentially that. They certainly do give you resources for thinking about extreme cases, right? Right. I didn’t deny that.
[02:07:14] TIM SCANLON:
Well, I mean, the point is they may give you resources for thinking about actual cases, the actual cases you are trying-
[02:07:19] ALAN:
Well, I know. But I consider these cases extreme. Yeah.
[02:07:24] TIM SCANLON:
Okay.
[02:07:24] DEREK PARFIT:
I don’t think it’s very often true that you can only save some people by means of killing others. I think those cases are pretty rare. I don’t think it’s a good defense of utilitarianism to say that they’re rare, but I think they are rare anyway.
But saving one or many, and that’s just an instance of saving one or many from burdens, it needn’t be life: those cases are extraordinarily common, I think. That’s why I wanted to distinguish so sharply between that question and killing as a means or as a side effect.
[02:07:56] ALAN:
Well, I didn't deny that cases in which this is true are common. What I'm denying is that there need be no other way of thinking about them.
[02:08:08] DEREK PARFIT:
Well, the most striking thing you said is that it's immoral even to ask whether you should save the one or the five. Now, you might say two things. One is that it's immoral to ask that question if you're not confronted with that actual dilemma:
you shouldn't waste your time thinking about it. The other is that even if you are in that position, it's immoral to ask, what should I do? Now, why is it immoral to ask that question?
[02:08:35] ALAN:
Yeah. No, I didn't mean to imply it in the extreme case. So what's immoral– Dorothy Allison was saying that, and I quoted her with some approval.
[02:08:46] DEREK PARFIT:
The point is that if you don’t–
[02:08:47] ALAN:
But why is it immoral to ask? What I said was, if you don't have to think that way, it is probably immoral to think that way voluntarily. That's what I would say.
[02:09:01] DEREK PARFIT:
So it’s immoral to ask the question, does it matter how many people your acts would benefit or burden? Is that an immoral question to ask?
[02:09:11] ALAN:
Not necessarily, no.
[02:09:12] DEREK PARFIT:
But that is the question.
[02:09:13] ALAN:
No. I mean,
[02:09:14] TIM SCANLON:
But that should be the… The trolley problems force you… I mean, they give you only this information, really, and you're not supposed to add a bunch of other information.
And so they suggest that you're dealing with a case in which that's the only relevant information. Well, but look, can I say that? Yeah.
Well, but those kinds of cases, where that's the only relevant information, are very rare.
[02:09:42] DEREK PARFIT:
No, no, no. What they’re doing is they’re saying it isn’t the only relevant information, how many people you are benefiting or burdening, but the question is, is that one of the relevant considerations? Does it matter whether you’re saving one person’s life or a hundred people’s lives?
Now, some people think, no, you should give everyone an equal chance.
[02:10:04] TIM SCANLON:
But I don't think that if you say, well, suppose that other things are relevantly equal, and the only difference is that you're either curing one person's blindness or ten people's blindness, that it's immoral to ask whether the numbers count. Though when you say all other things are relevantly equal, I find it very hard to have intuitions about cases in which I'm trying to take that stipulation seriously.
[02:10:38] DEREK PARFIT:
Well, one easy way of doing it is one in which you don't know whether one of these ten people is young Adolf Hitler, or is a mass murderer, or whatever. All you know is there are ten thirty-year-olds over here and there's one over there. If you don't know the further information, then all you know is you can save one life or ten, and it would matter if the ten were age ninety and the one was twenty, which is why I'm saying, other things equal, they're about the same age.
Um.
[02:11:18] MODERATOR:
Um. The hour is getting late. Yesterday, in posing one of his questions, I think Jay Wallace gave us an example of a quiz show in which you have to pick what's behind door number one, two, or three.
We have a real-life example. Behind this door, there is actually a reception. And your only choice is whether you would like to attend it.
But to give you a little time to think about that, I thought we might first ask Derek if he'd like to have a last word, or give us any last thoughts in response to any of the issues that have been raised, or any issues that haven't been raised. Any final words? You don't have to have them.
[02:12:06] DEREK PARFIT:
I don't think so. I just want to repeat that these are early days. In preparing these lectures, I was actually rather staggered to realize that the questions I was asking and the views I was considering are ones that weren't even asked by the philosophers I was writing about, Hume, Kant, Sir David Ross, and Rawls, because they didn't even understand them in the same sense.
Now, if we haven't even been understanding one another, it's very early days. So no, I don't think I have more to say.
[02:12:48] MODERATOR:
Well, in that case, please join me in thanking Derek and Pete.
(applause)