[00:00:00] (laughter)
[00:00:01] MODERATOR:
All right. Welcome to the third of our Tanner Lecture events this week. Today’s format will involve a kind of seminar and discussion.
The three commentators will be presenting further remarks about the two lectures that Allan Gibbard has given us this week. We’ll follow that with a break, and then there will be a response to the three new sets of comments from Allan Gibbard, which will then segue into an open discussion. And I want to alert you in advance to the fact that after the discussion is over, around 6:30, there will be a reception with some refreshments that will magically appear behind this wooden wall.
The wall will open. So that’s something to think about and keep in mind. We’ll be starting with Michael Bratman.
[00:00:55] MICHAEL BRATMAN:
Okay, I should come up there, I suggest.
[00:00:57] MODERATOR:
I think that’s fine. Is that comfortable for you?
[00:00:59] MICHAEL BRATMAN:
It’s fine. So let me thank the Tanner Committee again for the privilege of participating in this wonderful event. Central to Allan Gibbard’s planning theory of normative thought is the idea that plans are subject to constraints of consistency and coherence that to some extent parallel such constraints on beliefs.
This idea is an important element in the argument that planning to act or to prefer can be a kind of judgment, in particular a normative judgment. I agree with Gibbard that attitudes of planning are subject to constraints of consistency and coherence, but I want to examine Gibbard’s story about why this is so.
Gibbard’s thinking about plan consistency and coherence is influenced by another feature of his view, namely that the plans we need to appeal to in understanding normative thinking include plans for contingencies one knows one will never face.
(coughing)
If I judge that Caesar ought not to have crossed the Rubicon, then I in some sense plan not to cross were I Caesar. But I know I will not be Caesar. This lends special force to Gibbard’s question, quote: “Why does coherence in plans for action matter, especially when they are plans for wild contingencies that we will never face?”
Gibbard notes that a problem with inconsistent beliefs is that they cannot all be true. But he does not think we can simply extend this to incoherent plans. He does, in the end, think we can say of plan-laden judgments that they are true or false.
This is the quasi-realism of his 2003 book. But this talk of true plan-laden judgment must come at the end of the story when we have earned our right to it. We are now, however, at the beginning of the story, and we need a different kind of answer to the question why plans are to be coherent.
A word about terminology. Gibbard does talk of consistency of plans and of plans with beliefs, but his preferred terminology in talking about plans is incoherence. So with an exception noted below, I’ll follow him there.
Gibbard notes that, quote, “Incoherent plans can’t all be carried out.” But Gibbard does not think this gets to the bottom of things. As noted, it’s part of his theory that many of the plans that constitute normative judgment are wild contingency plans that one will never be in a position to carry out anyway.
So he suggests we need to ask, why should it matter if these wild contingency plans are ones that one will never be in a position to carry out? I think Gibbard’s right to reject an easy extension from an account of belief consistency and coherence by appeal to truth to an account of plan coherence. And I think he’s right to wonder why the coherence of wild contingency plans should matter, since it’s not clear why it would matter if all of them could not be carried out.
The issue we face is how to proceed once this is granted. First, though, let me say a bit more about how Gibbard is thinking about constraints of coherence on plans. As he explains in his second lecture, he’s thinking primarily of what he calls the standard conditions appealed to in decision theory as together ensuring that a person’s choices between options can be seen as maximizing an expected value.
These standard conditions are conditions on preferences concerning options. One such condition is that one not strictly prefer A to B and also strictly prefer B to A. Another is transitivity of preference. Yet another is a completeness condition.
As between two options, one is to have some preference or other. And these preferences are conceived as ranging over lotteries as well as over simple options. Gibbard supposes that the relevant constraints of coherence on plans are in effect constraints on preferences, constraints of the sort captured in these standard conditions.
He’s also assuming, I suppose, that to plan to A is to prefer A to its alternatives, and he’s assuming, and this seems problematic to me, that to prefer A to its alternatives is to plan to A. I’ll briefly come back to this at the end. What about plans for weights, which figured in his picture, as we discussed at the first lecture?
In this context of coherence and consistency, he seems to be thinking of such plans for weights as implicit in a complete contingency plan over actions. Coherence applies to plans for weights indirectly, is the view, I think, by way of coherence constraints on preferences over options.
I myself would prefer a model of our practical thinking that sees plans for weights as elements in their own right, but I’ll mostly put this aside here. So Gibbard’s appeal to coherence of plans is in a way better seen as a story about coherence of preference. Still, it remains true that one central kind of incoherence of plans that’s his focus is inconsistency of the different things you plan to do with your beliefs about the world.
This is the kind of inconsistency I’ll focus on below. So what is Gibbard’s answer to his question about why coherence in plans for action matters? This is what he says, and I put a lot of weight on this passage, so I’m just going to read it to you.
It’s easy to miss this in the lectures, I think, but I believe I’ve got it right. Okay, but we can find out.
Here’s the quote: “The problem with inconsistent plans is that there’s no way they can be realized in a complete contingency plan for living.” For each full contingency plan one might have, something in the set will rule it out.
Or more completely, we’d have to talk about an inconsistent combination of beliefs, plans, and constraints. If a set of these is inconsistent, there’s no combination of a full contingency plan for living and a full way that the world might be that fits, and judgments get their content from what they are consistent with and what not. So that’s his answer to the question.
So his idea, I take it, is this: we assign content to the agent’s overall set of beliefs and plans by asking what overall possibilities are consistent with those plans and beliefs. So inconsistency of plans given beliefs, even inconsistency involving contingency plans that, in the theory, are associated with normative thinking about scenarios one will never be in, like that of Caesar, will tend to baffle the holistic assignment of content, and so baffle the treatment of those plans as judgments with precise content. Call this a content theory of why plan coherence matters.
We can express this in terms of the idea of interpretation, which Allan introduces in the first lecture. Coherence of planning attitudes, at least coherence for the most part, and that’s actually an expression from his 1990 book, is needed in interpretation to ascribe content.
If you seem to plan to A, and you seem to plan to B, and you seem to believe that A is contingently incompatible with B, well then, there’s significant interpretive pressure against saying that you really do plan to A and really do plan to B while believing they’re not co-possible. Our interpretation of you tends to be to that extent thwarted.
Perhaps one can be interpretable while violating a strict constraint of coherence, but such violations must be the exception. To be interpretable, one must conform at least for the most part. So the constraints of coherence on plans and preferences are an aspect of the second of the three areas of inquiry Gibbard identifies in the first lecture, namely interpretation, in which we, as he says, understand some of these natural goings-on as beliefs, assertions, plans, and the like with which we can agree or disagree.
And this area of inquiry is, according to Gibbard, to be distinguished from normative inquiry. So constraints of coherence on plans and beliefs enter once we do interpreted psychology. We can then try to say that certain aspects of that interpreted psychology, the planning aspects on the view, constitute the ought judgments essential to normative inquiry.
In normative inquiry, we can go on to consider whether one ought to be coherent in plan. But the judgment that one ought to be coherent in plan, if that’s what we do judge, is not what is at the root of why coherence in plan matters, according to Gibbard, as I understand it. According to the theory, what is at the root is the connection between coherence and interpretability.
And this seems a virtue of the theory. If we were instead to say that coherence in plan matters because we ought to be coherent, where this is an ought judgment within normative inquiry, it seems we might be threatened with a circularity. To explain what that ought thought consists in, we need to appeal, according to the theory, to plans as judgments.
But to explain the nature of those plan judgments, we would need that ought thought. We avoid this circularity if, to explain why plans are judgments, we appeal not to the thought that we ought to have coherent plans, but rather to the thought that coherence, for the most part, is constitutive of planning in the sense that it’s a constraint on interpretability. I do have a worry.
A minimally realistic psychology will include, in addition to attitudes of planning to act, something like non-instrumental desire. Suppose then that I desire, non-instrumentally, though I won’t keep saying that, to be fabulously wealthy, and also desire to lead the life of a scholar. In the world as I know it, however, these desires can’t both be fulfilled.
That certainly does pose a problem for me. But it doesn’t show that I’m failing to satisfy a relevant constraint on desires. Such known contingent conflicts of desire seem an inevitable feature of our lives.
What we reasonably aspire to is not eliminating these conflicts, but negotiating a life in light of them. This contrasts with desiring to be wealthy and desiring to be poor. Desiring things that are of necessity in conflict does seem to violate a relevant constraint on desire.
But that’s different from desiring things that are known only to be contingently incompatible. Now consider planning. Suppose I plan to be fabulously wealthy and also plan to lead the life of a scholar despite my belief that these are contingently incompatible.
Here I agree with Gibbard that there is a violation of a coherence constraint on planning attitudes. Since it’s here the talk of consistency seems most natural, let’s call this a constraint of strong consistency. That is, consistency of plans both among themselves and with beliefs.
But now we need to ask, why are planning attitudes subject to this strong constraint in a way in which desires are not? To answer that question, we cannot just note that incoherent plans can’t all be carried out. After all, my desires to be wealthy and to be a scholar can’t all be carried out in the world as I know it.
But that’s just the human condition. The problem is to say what’s special about planning attitudes that explains this difference between planning and desiring. And my worry is that the content theory, as so far developed, does not by itself answer the question of why in particular planning attitudes, in contrast, say, with desires, are subject to the constraints of strong consistency.
After all, we’re able, in interpretation, to assign contents to desires. And we can agree or disagree with what others or our past selves desire. None of this seems to require that desires are subject to a constraint of strong consistency.
So we need to know why, in contrast, inconsistency of plans with beliefs baffles interpretation and content. If you’re one of those people who thinks of desires as judgments, you probably want to stop there, but we’re not going to go there. Perhaps the idea that plans are judgments does depend on this appeal to strong consistency, but at this point in the argument, we’re looking for reasons to agree that plans are judgments, so we can’t appeal to that idea to defend the role of strong consistency in plan content.
I think then that we need at least to supplement the content theory. And the natural strategy here is to say more about the roles planning attitudes play in our lives. As I see it, and I think Allan would agree, the basic roles of plans and planning are coordinating roles, both within the life of the agent and socially.
Plans of action coordinate our actions over time and socially in the pursuit of temporally extended and, in some cases, socially shared ends. Plans for weights in deliberation coordinate practical thinking at a time and over time, and potentially socially. Ordinary desires, in contrast, don’t have this coordinating role.
Rather, and to put it crudely, we seek coordinating plans in part as a way of satisfying relevant and potentially conflicting desires. It seems that it’s these coordinating roles of plans that lie behind the idea that plans are subject to a constraint of strong consistency in a way in which ordinary desires are not. One way to express this idea is to draw on an apparent parallel with belief.
Belief, many say, famously Bernard Williams, a former colleague of mine here at Berkeley, aims at truth. What this means, we can say, is quite roughly that an attitude does not count as belief unless it’s embedded in a psychology that shapes that attitude in a way that tends to track truth. Analogously, we can say that for an attitude to be one of planning, it is necessary that the attitude be located in a psychology that shapes that attitude in a way that tends to track compatibility with other planning attitudes and beliefs.
In this sense, attitudes of planning aim at coordination with other attitudes of planning, given one’s beliefs. Systematic violations of strong consistency manifest an underlying psychology of the wrong sort for the attitudes to be ones of planning, and so undermine interpretation of the subject as a planning agent. How would this help with Gibbard’s problem about wild contingency plans?
Well, if they really are plans, they need to be embedded in a psychology that adjusts in a way that tracks coordination with all the agent’s other plans, including other wild contingency plans. This isn’t a defense of the idea that my judgment that Caesar ought not to have crossed the Rubicon is my plan not to cross under certain conditions. But if it is a plan, that is, if my ought judgment is, as Gibbard’s theory says, my planning so to act, then it will be subject to pressures for coherence with the rest of the edifice of plans.
None of this shows, it’s important to see, that we have reason to make our plans coherent. The judgment about reasons for coherence is a normative judgment, one that goes beyond the thought that coherence for the most part is a constraint on interpretability. That’s the difference between interpretive inquiry and normative inquiry in Gibbard’s views.
It might well be that we do have reason to make our plans coherent, since, after all, we have reason to be effective agents, and coherent plans contribute to that. Indeed, that we have reason to make our plans coherent is the view towards which the initial observation that incoherent plans can’t all be carried out seems to point.
But these are judgments within normative inquiry, albeit judgments Gibbard might well go on to make. On the theory, however, these ought judgments are not the basic answer, though they are an answer, to why coherence of plans matters.
You might say we’ve uncovered two senses in which plan
(cough)
coherence matters. One that appeals to a condition of being interpretable as a planning agent. A second that appeals to reasons to be coherent, and it’s the former that in the theory is primary.
John Broome, in conversation, worried that there remains a problem for the theory in its understanding of my thought that I ought to be coherent in plan. This was at breakfast this morning; it’s amazing how fast it goes from breakfast to a talk. It’s the nature of this kind of working event. So let’s make sure we’ve got the view right.
The idea is that the basic answer to the question is now that you need coherence for interpretability as a planning agent. But to support that, we need to get clear about the coordinating role of plans and the sense in which they aim at coordination, the way belief aims at truth.
So far, we just have the idea that coherence is a condition of interpretability as a planning agent. We don’t have the idea, in normative inquiry, that we ought to be coherent, though of course we might have that idea, and at the end I’m going to say we need it.
Okay, so that’s where we are. But then Broome’s worry this morning at breakfast was that there seemed to be a problem for the theory in its understanding of, say, my thought that I ought to be coherent in plan.
My thought is for Gibbard, remember, something like my plan to be coherent in plan. So I think I ought to be coherent in plan, and that’s a plan to be coherent in plan, right? But do I then need yet a further thought that that plan to be coherent in plan ought to be coherent with my other plans?
Uh-oh, does regress loom?
[00:18:07] AUDIENCE MEMBER:
Right.
[00:18:09] MICHAEL BRATMAN:
Well, I think Allan can avoid a regress here by saying that my thought that my plans ought to be coherent, which thought I might have in normative inquiry, is my plan that all of my plans, including this very plan, together be coherent. Reflexivity staves off Lewis Carroll’s tortoise. Let me note another virtue of the appeal to coordinating roles.
The idea I have is that it may help a theory like Allan’s solve a problem about the metaphysics of agency, a problem that needs to be solved by an adequate metaethics.
Here’s the problem. Thoughts about oughts and reasons normally are central elements of what we can call the agent’s practical standpoint. When these elements of this standpoint guide thought and action, then, at least normally, the agent guides thought and action.
The agent governs. Thoughts about oughts and reasons normally have what we can call agential authority. When these thoughts guide, the agent governs.
Appeal to ought thoughts, then, will be one part of a story of how agential guidance, guidance by the agent, can be constituted within a psychic economy of events, states, and processes. Or so it seems. For this to work, we need to know what it is about thoughts about oughts and reasons that, at least normally, gives these thoughts agential authority.
That is to say, when they guide, the agent governs. If thoughts about oughts and reasons are responsive to independent normative facts of the sort posited by the normative realists, then we could try to explain this agential authority by appeal to such responsiveness to these normative facts. This would be a kind of Platonic theory of agential authority.
I mean, think about why reason has authority in the Republic, right? In the tripartite story of the soul in the Republic. Okay.
But if thoughts about oughts and reasons are planning attitudes, as they are in Allan’s theory, why do they at least normally have such authority to speak for the agent? See, I think this is a problem for any metaethical theory, and it’s important to see that. Well, I think a theory like Allan’s can say, in light of the discussion I’ve just been having, that these planning attitudes speak for the agent because their roles are to organize and coordinate the agent’s thought and action over time.
When functioning properly, these planning attitudes help create the unity of agency at a time and over time that’s presupposed in talk of agential governance. And this has the further advantage that this account of agential authority can be extended to other planning attitudes that are not strictly speaking normative judgments, because it’s generally applicable.
So a Gibbardian view developed along these lines has much to recommend it, I think. I do suspect that this emphasis on cross-temporal coordinating roles exerts pressure against Gibbard’s apparent identification of planning and preference, since preference seems, to me anyway, less tightly tied to such cross-temporal coordinating roles.
I mean, one way to see this is that the standard conditions on preference are not about cross-temporality at all. They’re time-slice conditions. But in closing, I’m going to focus on a different issue.
And this is in closing. Okay. Gibbard’s basic account of why plan coherence matters appeals to interpretation, not to reasons to be coherent.
That’s what we’ve seen. But I suspect that some of the uses to which he puts the idea of coherence in– especially in the second lecture, need the view that we have reason to be coherent. I take it, for example, that Alan would say you have reason to avoid incoherence in your plans concerning Zeckhauser’s Russian roulette example.
After all, if the only problem about incoherent plans were that they tended to undermine your interpretability, you might think, “So what?” But this takes us back to the question, why do we have reason to avoid incoherence in plans, given that many of the plans at issue are wild contingency plans? That was the original question that started this.
And Gibbard’s content theory, even when supplemented by the appeal to coordinating plans, doesn’t yet provide an answer to that question. Okay.
(crowd applauding)
[00:22:27] MODERATOR:
You have to come to a philosophy event to experience pronouncements like “reflexivity staves off Lewis Carroll’s tortoise.”
[00:22:33] MICHAEL BRATMAN:
Right. That was an in-joke.
[00:22:35] MODERATOR:
Right. It conjures an image of Michael walking along and about to be attacked by a tortoise, pointing to himself. Anyway, um, our next commentator today will be, uh, John Broome.
[00:22:46] JOHN BROOME:
Thank you. Um, I’ve got a handout which will be…
[00:22:55] MODERATOR:
Is that all right?
[00:22:56] JOHN BROOME:
Yeah. Um, a few copies, yeah. Um, shall I fix them here?
(paper rustling)
[00:23:02] MODERATOR:
Of course. Oh, I’ll pass them out.
[00:23:04] JOHN BROOME:
Thank you. It’s not meant to be comprehensive, I’m sorry. It was originally planned as an overhead projection, but it’ll be useful for you to have it.
It might be useful in discussion. I’m also going to talk about Allan’s first lecture. I hope what I’ll have to say is complementary to what Michael said, and doesn’t overlap much, but I think it fits quite well with what he’s just been talking about.
Take any normative sentence, such as “Caesar ought not to cross the Rubicon,” or “Nobody ought to have conflicting intentions.” Philosophers used to worry a lot about whether sentences like that, normative sentences, could be true or false; many of them denied that they were the sort of thing that could be true or false. These days, we worry less about that.
Most of us think it’s not so hard for sentences in a particular class to be true or false, and for some sentences in that class to be true, because we think that all that’s required for them to be true or false is that they participate in our thinking, our lives, and our discourse in characteristic ways. For instance, we need to do truth-functional logic with them.
We need sometimes to disagree with each other about them. We need to think that just because somebody utters one of these sentences in an assertive fashion, that doesn’t mean it’s true, even if the person is justified in doing what she’s just done. And so on.
So there are ways, which you can think of as characteristic of truth, in which a class of sentences might participate. And so long as it does, it earns the right to be treated as the sort of thing that can be true or false. And it’s pretty clear that normative sentences meet those requirements.
They do meet the standards that are characteristic of truth and falsity. So we take it that they can be true or false, and if one of them is false, then its negation is true. So we don’t worry so much about that these days.
And of course, if these sentences can be true or false, it goes along with that that we can have attitudes of belief or disbelief towards them, or towards their contents, towards what they say. So we can have cognitive attitudes towards sentences of this sort. And I don’t think that’s any longer particularly controversial.
On the grounds of what we do with these sentences, we can accept that they can be true or false. But that does leave us with the question of why that is so. They participate in our thinking and discourse in ways that are characteristic of truth, but we can wonder how come.
How come they participate in our lives in those ways? If we were dealing with sentences about natural matters, then we could give an explanation of that, which would have something to do with the way they correspond to facts in the natural world. We would say that they participate in our discourse in the way they do because of their correspondence with the facts.
We could give a parallel answer for normative sentences, but many people, including Allan, would find that fantastic. He just finds it incredible that there are normative facts, roughly corresponding to natural facts, that would explain why our normative sentences work in the ways our natural sentences do. So he offers to provide an alternative explanation of why our discourse with these sentences works in this truth-characteristic way.
And his explanation is that these sentences help us to plan our lives, plan our lives in general, and plan what to do on particular occasions. It’s a natural fact about us, he points out, that we are the sort of creatures who do make plans. So Allan’s explanation of why these sentences work in the way they do doesn’t appeal to anything apart from our nature, how we are in the natural world.
But what we’re doing when we utter a normative sentence is expressing a partial plan, or at least to a first approximation that’s what we’re doing. And on that basis, Allan explains why, as our thinking and discourse uses those sentences, it’s going to endow them with the characteristics that go along with truth. For one thing, he provides a semantic theory that explains how they participate in truth-functional logic in the way they need to.
So he’s got an explanation, then, of why we can have the attitude of belief or disbelief towards sentences of these sorts. And the explanation is that those cognitive attitudes of belief and disbelief arise from non-cognitive attitudes, which are planning attitudes. In fact, our cognitive attitudes are really simply those planning attitudes in another guise.
They’re nothing other than the planning attitudes. So for instance, to believe you ought not to feel resentment in particular circumstances is to do nothing else than plan not to feel resentment in those circumstances. And Allan argues that these planning attitudes are woven together in a structure that mimics the structure of believing attitudes, and that’s how we can treat them as believing attitudes indeed.
That explains why their contents have the logical structure of truth. So our attitudes towards normative sentences are basically non-cognitive, but, as Michael just said, they earn the right to be treated as cognitive attitudes because of the way they fit together, and that arises from the way they participate in our thinking and discourse. So on Allan’s account, each attitude of normative belief is going to be accounted for by a non-cognitive planning attitude that’s the same as it, equivalent to it, in fact.
So to believe you ought not to feel resentment, as I say, is just to plan not to feel resentment, or, as he puts it, to reject feeling resentment. And to believe that Caesar ought not to cross the Rubicon is just to plan, if in the position of Caesar before the Rubicon, not to cross it. Now, that, though, is only an approximation, as Allan says, and there are at least two reasons why it’s only an approximation.
And I’m going to mention both of them. The first is actually one that Allan himself describes. Even if it were true that all ought beliefs are plans, it would not be true that every plan is an ought belief.
Buridan’s ass is the illustration of that. Buridan’s ass stood between two equally attractive bales of hay. As it happened, it couldn’t decide to go to one or the other, and it died.
But if it had been more sensible, it would have formed the plan of eating one or the other bale. But it didn’t think that it ought to eat one or the other bale. They were both equally desirable.
It didn’t think it ought to go left, and it didn’t think it ought to go right. But nevertheless, had it been more intelligent, it would have planned to go one way or the other. So then it would have had a plan that was not an ought belief.
So that must happen. And because of that, Allan doesn’t found normative belief strictly on attitudes of planning, but on what he calls valenced attitudes, ultimately on one valenced attitude, which might be called the attitude of okaying, of attaching an okay to a thing. So if Buridan’s ass had been more sensible, it would have assigned an okay attitude towards eating the left bale and towards eating the right bale, but then it would have planned to go to one or the other.
For Allan’s purposes, this attitude of okaying is not to be initially identified as an attitude of thinking it’s okay. So the attitude of okaying the left bale that the ass would have had is not the attitude of believing that the left bale is okay. It’s something other than that.
It’s a primitive, non-cognitive attitude, but it will, in due course, in the working out of the theory, earn the right to be treated as the cognitive attitude of believing that the left bale is okay. So that’s one way in which to say that normative attitudes are plans is an approximation. Actually, they’re not really plans; they’re these okaying sorts of attitudes.
But there’s a second way in which the attitudes he’s using are not really planning attitudes, and this is one that he doesn’t mention. It’s that the attitude of believing you ought to do something simply is not an attitude of planning to do it. Not only can you plan to do something without believing you ought to do it; you can also believe you ought to do something without planning to do it.
If Buridan’s ass had been more sensible, then it certainly would have had the attitude of planning to do something without believing it ought to do it. But it’s also quite possible for an ass, or anybody else, to believe it ought to do something but not plan to do it. If that’s your situation, you’re in the state that’s universally called in philosophy the state of akrasia.
Most philosophers think the state of akrasia, that’s to say believing you ought to do something without planning to do it, is irrational, but most of us think it’s possible, though, uh, irrational. However, many non-cognitivists, including Allan, find themselves in a position effectively of denying that akrasia, uh, exists. Richard Hare is another example.
What Hare said is that you couldn’t sincerely believe you ought to do something without intending to do it. I must say, I find that unconvincing, and a lot of other people I know find it unconvincing. It seems to me quite possible sincerely to believe you ought to do something without intending to do it.
Allan doesn’t actually explicitly deny that, but on the other hand, he does use language that elides the distinction between, on the one hand, thinking you ought to do something and, on the other hand, planning to do it. So at one point, he says that thinking what I ought to do is thinking what to do. Now, I suppose it’s true that in a way you could say that thinking what to do is the same as thinking what you ought to do.
But that’s not actually a very idiomatic usage, and anyway, there are certainly two different questions that you can ask yourself. There’s the question of what to do, and there’s the question of what you ought to do. I’ve put an example of those two quite different questions on one side of the handout that I’ve given you.
There’s the question of what to do. It’s an unusual sort of question because, um, the answer to it calls for a decision. If you ask yourself what to do, and you answer yourself, then in answering yourself, you’ve decided, uh, something or other.
So you might decide to clean your teeth. And so when you ask yourself, “What shall I do?” or, “What to do now?”, the answer will be, “I’ll clean my teeth.” But that answer expresses an intention to clean your teeth, and it results from a decision that you’ve made.
So it’s an unusual sort of question, a what to do, uh, type of question. And it certainly doesn’t express a normative view that you ought to clean your teeth. You might also ask yourself the question, “What ought I to do?”
And the answer you might give yourself if you do that is quite different. It might be, “Oh, I ought to stay in bed.” Those two things are perfectly consistent with each other, and then you will end up believing you ought to stay in bed, but on the other hand, with the plan of cleaning your teeth.
That’s an example then of akrasia, and I take it for granted that akrasia, uh, exists. And this is another way in which planning attitudes are separated from normative attitudes. It doesn’t mean that Alan is wrong to treat normative beliefs as fundamentally non-cognitive attitudes.
They may still be fundamentally non-cognitive attitudes, but they’re not ordinary planning attitudes, and they’re even further from ordinary planning attitudes than he tells us. It’s not merely that we’ve gotta deal with Buridan’s ass; there is this other thing we’ve gotta deal with, namely akrasia: sometimes what we believe we ought to do is not at all what we plan to do. Allan doesn’t say they are planning attitudes; he says that to a first approximation they are. And we could still accept, as a possibility, that they are, as he says, like planning attitudes.
We’ve just got to recognize that they’re removed from ordinary planning attitudes. They’re not actually planning to do something, although they may be like planning to do something in a way. For instance, they might be some sort of ideal plans.
It may be some higher, superior sort of plan that we have, distinct from the actual plans that we make. Now, to be honest, as it happens, I doubt that normative beliefs really could be much like planning attitudes at all, because despite what Allan says, I think the logic of normative beliefs is not really much like the logic of planning attitudes. And on the back of that handout, the other side of the handout that I’ve given you, there is an example of a difference between the logic of planning attitudes and the logic of normative beliefs. You’ve got it now, and if it comes up in discussion it’s available, but I’ll leave that aside for now.
So that’s an “as it happens” remark: I do not think that normative beliefs really could be much like planning attitudes. But let me leave that aside, because I want to get on to what I think is another difficulty for Allan’s argument that arises from recognizing this.
These attitudes which we’re talking about, the non-cognitive attitudes, are the ones that are at the foundation of his system, and the foundation of what turns into the cognitive attitudes of normative belief. They earn the right to be treated as cognitive, but fundamentally they’re non-cognitive attitudes that are like planning attitudes. But how do we identify what these non-cognitive attitudes are?
If they were ordinary planning attitudes, we could do that pretty easily, because we know what it’s like to intend to clean your teeth. We form plans, we work with plans. We’re very familiar with ordinary plans.
But the ones that Allan is talking about are not ordinary plans. If anything, they’re ideal sorts of plans, something of that sort. And we can only really describe what these attitudes are through cognitive descriptions of them.
So take this attitude of okaying something, the primitive attitude of okaying something. Now, I told you you weren’t to think of this initially as the attitude of believing the thing was okay. But nevertheless, in order to describe what that okaying attitude is, I’ve got to point to it by saying it’s like the attitude of believing the thing is okay, or indeed that it is the attitude of believing it’s okay. I’m pretty sure that you’re only going to be able to recognize this okaying attitude because you understand what it is to think that a thing is okay.
Because you know what the cognitive attitude is, you can go through that to pick out the non-cognitive one, but we can’t point to the non-cognitive one directly. And what about the attitude of planning ideally to stay in bed, when actually what you plan to do is to get up and clean your teeth? Your mundane planning attitude is to brush your teeth, but you’ve got an ideal planning attitude of staying in bed.
Well, how do we point to that? Can you recognize this ideal planning attitude? Well, it seems to me that the only way you’re going to be able to pick out what that ideal planning attitude is, is to realize that it’s an attitude of believing that you ought to stay in bed.
So these non-cognitive attitudes, if you want to know what they are, we’re going to have to pick them out for you through the cognitive attitudes that they underlie. Now, that doesn’t mean that what Allan says is wrong. So far as Allan is concerned, these underlying non-cognitive attitudes are indeed the same as the cognitive attitudes by means of which, I’m saying, we need to identify them.
They are indeed beliefs, because they earn the right to be beliefs through their structure. But they’re supposed to be fundamentally non-cognitive, and still they are things that we can identify only as the beliefs. So I’m not contradicting Allan in saying what I just said, but I do think that this raises a serious difficulty for Allan’s project of explaining how these attitudes earn that right.
[00:43:43] MICHAEL BRATMAN:
Okay, okay. Uh-huh.
[00:43:48] JOHN BROOME:
Those underlying non-cognitive attitudes are supposed to explain how we have the cognitive attitudes, but we can only recognize the underlying non-cognitive attitudes through recognizing the cognitive ones. Or I should say, perhaps more accurately, by means of their cognitive-attitude aspect. These attitudes are Janus-faced.
On the one hand, they’re non-cognitive, but they earn the right to count as cognitive, and it’s that face through which we have to recognize them. So Allan is, in effect, saying to us that the belief that you ought to stay in bed is explained by that non-cognitive attitude, whatever it is, that corresponds to the belief that you ought to stay in bed, and we are only going to be able to identify that attitude through the very thing that it’s explaining. And that makes me worry that he hasn’t really got very much of an explanation.
When the explanans is identified through the explanandum, and when this happens so extensively throughout the explanatory story, I doubt that we’ve got much of an explanation at all. I think that for a proper explanation of the thing that needs to be explained, namely how we have these cognitive beliefs in the first place, we need to be able to identify in some independent way at least some of the underlying non-cognitive attitudes that do the explaining. And here’s a more specific point that goes along with that.
It’s supposed to be the logical relations among the non-cognitive attitudes that explain how they can be treated as cognitive, how they earn the right to be treated as attitudes that have contents which are true or false. So it’s supposed to be the relations among the non-cognitive attitudes which explain the way in which what we do with these attitudes allows us to treat them as beliefs having contents that are true or false. Those relations between the non-cognitive attitudes are supposed to generate a logic that mirrors the logic of truth, and thereby to explain why the contents can be treated as actually being true or false.
But now, we identify those underlying non-cognitive attitudes through cognitive attitudes in the first place, and that means that they just can’t help having the structure that’s characteristic of truth, because we’re picking them out as the cognitive attitudes to begin with. So they’ve inevitably got the structure that Allan is supposed to be explaining, not because of anything to do with their internal coherence, but because we identify them through attitudes that inevitably have that structure in the first place. So we’re not succeeding in explaining why they have that structure.
So that’s a difficulty that I think underlies Allan’s way of going about things. To summarize: he hopes to explain the truth and falsity of normative sentences, and the fact that we can have normative beliefs, by a structure that appears within the non-cognitive attitudes that underlie those things. But those non-cognitive attitudes are not truly planning attitudes.
They’re like planning attitudes, perhaps, but they’re not truly planning attitudes. And we can only identify them through their corresponding cognitive attitudes, and that makes it rather doubtful that they can play the explanatory role that Allan wants them to play.
(applause)
You want me to go up there? All right. Or can I sit here?
[00:48:07] MODERATOR:
Uh, yes. I– you can sit there. Um, I think if you move the, um, microphone over maybe.
[00:48:11] FRANCES KAMM:
Yeah. I’ll just do that. Yeah. One second.
[00:48:16] MODERATOR:
Right. We’ll have, uh, one more set of, uh-
[00:48:19] FRANCES KAMM:
Get closer to my notes
[00:48:20] MODERATOR:
comments before, before taking a break, um, and they will be from, uh, Frances Kamm.
[00:48:25] FRANCES KAMM:
Yeah, you can hand them out, I guess. So, um, the other speakers have, uh, done a great job, uh,
(paper rustle)
commentators, sorry, have done a great job in, uh, sort of helping Alan, and I’m only gonna
(paper rustle)
ask Allan to help me. I just have a bunch of questions, really, that I’m hoping can be put on the right track in some ways. Is this… something like that? Can I have the watch?
[00:48:49] JOHN BROOME:
Okay. Is there a handout?
[00:48:50] FRANCES KAMM:
I have some questions. Yeah, there’s just a little brief handout. Often it’s not gonna do anybody any good. But, um, I, um, I have some questions related both to the first lecture and to the second lecture.
(paper rustle)
Um, I’ll, I’ll do the first lecture
(paper rustle)
first, just for continuity. One question I had was this: unless I’m misunderstanding, the theory here uses planning, or something like planning, to account for something like my belief that I ought to do something. I plan to have a preference for a certain sort of act above all others, and to have certain feelings, and to do certain things.
That this involves planning seems to be an empirical claim, or at least subject to empirical confirmation or disconfirmation. I mean, John has been discussing it from the point of view of the logical concept, okay? Whether all ought statements involve something like planning statements or ideal planning statements.
But I was just wondering whether you think of your theory as something that is subject to at least empirical disconfirmation. It seems like you’re saying that creatures who can’t plan can’t have or make moral judgments. I mean, they can be subject to morality, but they wouldn’t be capable of making moral judgments.
And so I was just wondering… I don’t know very much about brain science, but I hear that there are individuals who, through some sort of accident or defect of the brain, are incapable of planning in the ordinary sense: planning their lives, planning their actions, projecting their thoughts into the future about what they’ll do. And I was just wondering whether you intend your theory to be of the sort that would be disconfirmed if it turned out that people who can’t plan in this ordinary sense can nevertheless engage in judgments about what Caesar ought to do, or what they ought to do if they confront, you know, a drowning child or something of that sort.
And if it were the case that they could, would that show that there was some separate idea of planning, other than the ordinary notion of planning your life, which they obviously lack the capacity for, that is undergirding their moral judgments? Or would you say, “Well, okay, planning’s not involved here”? The other thing, of course, is that there are some moral judgments these people perhaps couldn’t in fact engage in, like the thought, “Well, if I saw the drowning child, I ought to do such and such,” because they lacked the ability to plan, and this was connected to moral judgment.
But when they’re actually in a situation, they could still make a moral judgment in responding to the drowning child, and in that case decide, “Well, this is what I ought to do.” And so I’m wondering whether, if an individual who in general lacked these planning capacities could do that, that would indicate that making some sorts of moral judgments doesn’t involve a planning capacity, even if the long-term ones, or the hypothetical-case-imagining judgments, do.
And I was just wondering whether this is really a theory that, on your view, is subject to this sort of confirmation or disconfirmation. It’s really just a question. Now, the other question that I had about your first lecture came as a further reflection on your response to a question that I asked at the first session, and I’m not sure that I understood your answer.
I raised the issue of supererogation because I wanted to know how you dealt with it on your view. Because when you say that you think that you ought to rescue the child, it means something like planning to prefer it to alternatives (let’s forget about the case where there are ties), and planning to feel guilt if you don’t actually carry out the action. And I raised the question of supererogation: cases where what you ought to do, you decide, if you see the drowning child, is, you know, call the police.
But you recognize that it might be morally more commendable, and a great thing, if you were to jump in the water and rescue the child yourself, even though there’s real danger to you. And the way I keep on thinking of it is not the way you recommended I think of it. So far as I understood your answer, you were talking about an attitude of approbation of some sort towards the supererogatory action.
But I was just wondering, I mean, I have this little handout. I don’t have the neater version, which you have. Unbelievably, somebody had to actually write this neat version out for me, because the threat was that if I started writing it, um…
So it’s thanks to Ellen Gobler that I have this. The way I keep on thinking of this, and you’ll correct me where I go wrong, is that I think of a sort of scale of degrees of preference. At the top, I’m imagining someone who really would prefer to be the person who would jump into the water. They would very much prefer to be that sort of person, and they would regret it if they found that they didn’t in fact jump into the water.
So on my view of this, I’ve got the strongest preference for the supererogatory act, and regret if one doesn’t engage in it, through weakness of will or whatever. And then, of course, on this little chart, I’ve got under that, not as the thing for which I have the strongest preference or a plan to prefer to the strongest degree, the action of which someone might say, “Well, you ought to do that; you ought to call the police.”
Okay, that’s something that I really ought to do. And of course, if instead I did something Z, like sit around on the beach and enjoy the sight of the child drowning, I recognize that I would plan to have the feeling of guilt, or something like a plan to have guilt. And it seems to me, the way I keep thinking of this, that the person I’m imagining, who really does want to engage in the supererogatory act, would have regret if they didn’t do it, but they wouldn’t have guilt, or they wouldn’t plan on having guilt.
They’d plan on having regret. And then, of course, they can do something that’s still okay, namely call the police, and it’s really what people would say you ought to do. In the ordinary sense, if you asked yourself what it is that you ought to do in this circumstance, the answer is: call the police.
And of course, if they did the wrong thing, they would plan to feel the guilt. Now, as I see this, then, to say that I ought to call the police is not a plan to prefer it to alternative actions, because I do prefer the supererogatory act. And it’s not a plan to feel guilt if I don’t do what I ought to do, because of course, if I do the supererogatory act,
I don’t plan to feel guilty at all, right? I mean, I avoid guilt completely. To me, it’s sort of interesting that if I do the supererogatory act, I can avoid guilt that way.
But if I fail to do the supererogatory act, then for the person I’m imagining there’s no way I can avoid regret, even if I wind up doing what I ought to do and I don’t feel guilty. So the regret is somehow hanging there, right? As some sort of more inescapable fact of this person’s life than the guilt.
Now, of course, there are these tie situations, where there are multiple things that are equally good, the okay things as you see them. My view, and I guess this chart shows it, is that okay is not just “I prefer any of these things, amongst the things that are okay, to any other alternatives,” because I do prefer the supererogatory action to just doing what I standardly think I ought to do. So I’m not saying that something is okay because there’s no alternative that I prefer to it.
I mean, I do think it’s okay if I just do my duty, but it’s not because there isn’t some alternative that I would prefer. So one of the things
I was thinking about, on my model here, is this: given that I think that either supererogation or doing what I ought to do is okay, when I think of something as being okay, I’ve had this attitude towards it: I plan to prefer to do what I never plan to feel guilt about if I don’t do it. I’m just reading that off from the chart, right?
And furthermore, the way I think of this is that it’s not merely that I have these plans to prefer to do some things, and a plan to feel guilt if I don’t do this other thing, as if planning to feel the guilt were some sort of backup, a second-best plan in case I fail in my best plan to do what’s okay or what’s supererogatory. Because there’s a demand on me that I not go below that line, right?
I’ve got this line here, and below it is a negative: don’t go into the guilt area. Plan not to do what I would then plan to feel guilty about doing. There’s some sort of a demand, not just a preference, that I stay in that okay area.
And it’s not just because I would fail to satisfy the thing that I prefer most, or that I would plan to prefer the most, because when I plan to prefer the supererogatory action, I don’t necessarily think it’s a demand on me that I do it. So it’s not just failing to do what I plan to prefer the most that leads to this sense of demand, okay. So anyway, I couldn’t stop thinking about this.
Um, it sort of became obsessive, and I just was wondering what you thought about this model, and can you help me? Uh, so those are my comments about the, the thoughts that occurred to me with respect to the first, um, section. I guess I still have some time and can go on to the second.
I was struck by the fact that you said at one point, in the second lecture, that you thought that the morality of respect, and people’s attempts to build up a morality on the basis of respect, have been, in your view, a failure. And yet you have this view about encapsulating, or somehow deriving out of Scanlon, the idea that one ought to plan to have a preference to engage in relationships of fair reciprocity with people, on the basis of agreements that no one could reject on their own behalf.
You think that that is what it amounts to, to respect persons. You adopt the language and the format and the framework of respect. And so I was wondering why you felt it necessary to speak in those terms.
I mean, rather than just directly speaking in favor of the utilitarian solution, you put it in the context of: if you live according to those utilitarian terms, you will be engaging in respect for persons.
You know, Scanlon doesn’t just think he has a preference for doing what he thinks is required by respect for persons. He thinks there’s a good reason to have it. And you may, of course, think that too, though I think that in your lectures you simply say, “Well, you know, this is something that appeals to me.”
But I was struck by the fact that you yourself now find it attractive, or maybe necessary, as if there’s some sort of demand, to fit other theories under the respect-for-persons umbrella. Okay. So that’s one question related to respect.
My other questions related to respect, if I still have time, I don’t know how much time I have, pick up the tail end of what I said last time, because I had been discussing the parent cases. I don’t know if those of you who are here today know what I’m referring to.
If a parent has a child that’s in a canoe that’s in danger, he has to decide whether to save his own child or the two children of another parent who can’t reach his own children. And the solution seemed to be, on Allan’s thought, to do whatever you would agree to behind a veil of ignorance, quite independent of knowing which of the parents you would have been, assuming that you had an equal probability of being either in the position of parent A, who could help, or of the parent who couldn’t help. Now, John raised questions about this equal-probability argument yesterday, but what I’m interested in today is another case of yours where the same sort of issues seem to come up, namely the case of Ida and Jay.
The assignments of utilities, I think, were that in one state of the world, Ida would have a nine, and Jay, who would be in that world too, would have a one. This would be a state in which Ida would get this very expensive funeral.
And another state of the world might be one in which Ida only has what’s good for her to the degree five, and Jay has it to the degree three. So Jay is better off, and Ida is not so great.
And you raise problems for differential views about the good of a person: Jay may think that Ida is mistaken about her good. Even if he turned out to be Ida, given this equal chance of being himself in that world or someone with Ida’s preferences, he would think that it didn’t really matter whether the preference for having a funeral got satisfied.
And you focused on that: the difference in different people’s views about what is really the good of Ida or Jay. But I want to go back to the cases where there isn’t any disagreement about the good, where you agree with Ida that if you want to have a big funeral, it’s really good for you to have that big funeral.
You know, there are those of us who think that the maximin solution, namely bringing the position of the person who’ll be worst off in an outcome up to the highest degree, even if it means lowering the outcome for the other person, you know, the solution of five and three, is preferable, right? How do they think? Why are they getting this result?
And I wanted to say: my understanding has always been that there’s a different view about how to conceive of the point of the veil of ignorance. Those who get the maximin solution think that the way to think of this device is not as being there in order to allow a person in ignorance, assigning equal probability to being in every possible outcome position, to think how they would maximize their expected utility.
On that first view, they think of each of these positions in life as something that they might possibly fall into. But the contrasting view, the one that’s supposed to get you maximin, is that the veil of ignorance forces you to identify with, or take seriously, the lives of the actual people who will fall into those different slots, the different separate persons. They’re not just possible slots that you might occupy; they’re positions that real people will occupy.
And the device is there to get you to take that seriously. I think that’s the way Scanlon describes it, at least when he’s describing Rawls’s use of the veil of ignorance (he doesn’t make use of the veil of ignorance himself). And it seems to me that, in terms of respect, when people think of showing respect for persons as ends, it’s that focus on what it will actually be like for each particular person in those positions that’s driving the desire to use the veil of ignorance, right?
And of course, this is a completely different way in which you’re using it. And it connects up, of course, with this idea of how we flesh out the idea of respect for persons. So just to make this clearer: suppose Ida would have a nine, right? Suppose it’s because she wants a slave. That’s what would give real meaning to her life.
And it’s in her interest to have all these things done for her. And Jay would be the slave. That’s why she’s got a nine, and he’s got a one, in one world. In the other world, she doesn’t get Jay to be her slave; she gets a five, and Jay’s better off, he gets a three.
And, you know, I suppose that the person who takes the Rawlsian view will say, “Well, look, that desire, the desire to have a slave, for example, is intrinsically disrespectful of persons, right?” And the view that you take about how to use the veil of ignorance says nothing about that, right? There is this desire, okay, and of course Jay might wind up being Ida and agree with her that, when you want a slave, having a slave is a great thing, and he winds up with a nine, right? And he’d increase his expected utility by going for an arrangement in which he might, of course, turn out to be Ida or turn out to be Jay, and, on your view, have no complaint if he turns out to be Jay the slave. And I also think there’s this inability to evaluate these sorts of desires. You yourself said desires might change in different contexts, but you’re not able to say that there is something intrinsically disrespectful about this, because after all, respect on your view only amounts to playing your part in fair reciprocity in that utilitarian outcome. Again, the idea of… Am I over time?
A little bit, yes. Okay.
Okay. Treating people as means: on your view, Jay would not be treated as a mere means, because he's being used as a slave in just the way demanded by the system under which he would have had the highest expected utility. And recall the debates I have with Derek Parfit. Parfit has the view that you're not treating someone as a mere means when you would in any way limit your use of them out of concern for their own good. I've always thought that was too weak. But this view here, of what it is to commit the wrong of treating people as mere means,
(gasp)
of course, is even weaker. Ida need have absolutely no concern for the welfare of Jay; she need not modify her conduct in light of him to any degree at all. He's just a slave, a piece of property.
(gasp)
And yet she's not treating him as a mere means on this view, because it's all in accord with the system under which he would have had the highest expected utility. I just want to point out that it seems all these differences are arising from a different view about what the veil of ignorance is supposed to be there for, what it's supposed to make you do. And I guess I'll end at that point.
I had wanted to discuss cases where that problem arises even when it's not a question of deciding between those who are going to be worse off, but of differences in the expected outcomes of people who would be equally badly off if they were not helped. But I'm over time, so I'll stop. Thank you.
I hope… yeah, I hope I get a lot of help.
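[Editorial note: the two-world arithmetic in Kamm's example can be made explicit. A minimal sketch, using only the utility numbers from the example above; the function names and code structure are my own illustrative choices. Under Harsanyi-style equal-probability reasoning the slavery world comes out ahead, while under a maximin reading the no-slavery world does.]

```python
# Two possible worlds from the Ida/Jay example. Behind the veil, you are
# assumed to have an equal chance of being Ida or Jay.
worlds = {
    "slavery":    {"Ida": 9, "Jay": 1},   # Ida has Jay as her slave
    "no_slavery": {"Ida": 5, "Jay": 3},
}

def expected_utility(world):
    """Average utility: equal probability of occupying each position."""
    return sum(world.values()) / len(world)

def maximin(world):
    """Utility of the worst-off position, as on a Rawlsian maximin reading."""
    return min(world.values())

for name, world in worlds.items():
    print(name, expected_utility(world), maximin(world))

# Equal-probability reasoning ranks "slavery" higher (5.0 vs 4.0), while
# maximin ranks "no_slavery" higher (worst-off gets 3 rather than 1) --
# exactly the divergence being pressed here.
```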
[01:10:19] MODERATOR:
Thank you.
(applause)
[01:10:25] ALAN GIBBARD:
I think we could break briefly.
[01:10:27] MODERATOR:
Yes. So we'll take a brief break, about five minutes, and then Alan Gibbard will respond to today's comments, and we'll open up into a general discussion, which again will be followed by some refreshments behind the wall. Alan?
[01:10:50] ALAN GIBBARD:
Well, it's such a privilege to have things I'm interested in raised by such a terrific array of commentators that I wish I had the ability to do justice to very much of this. I'd like to go backwards, just because it's easier to get started remembering what's freshest. So I'd like to start out with Frances's comments and questions about lecture two.
Well, is respect-based morality a failure? I think what I was claiming was that the attempt to get a non-utilitarian morality from a general rationale of respect is a failure. Now, of course, that's a very broad claim. As she says, I'm trying to get an account of morality starting with a way of understanding respect, but I'm not trying to find in respect the reason that utilitarianism is wrong and something incompatible with utilitarianism is right.
One thing I had in mind was attempts to get Kantian sorts of tests, the test of the first version of the categorical imperative, to work properly, or attempts to get the second version of the categorical imperative to have definite non-utilitarian content. Specifically on Rawls: he rules out equal-probability reasoning, but we have to ask what the rationale within the general theory is for doing that. Well, he says he's ruling out knowledge of what kind of society one will be in, on the ground that we can't make any sense of the idea of what the chances are of being in a particular kind of society.
But that would still allow the conditional judgment that if I'm in a particular kind of society with a population of a million, say, then I have a one-in-a-million chance of being anybody. Is that not taking seriously the positions that people will actually be in? Well, Rawls always distinguished the parties behind the veil of ignorance, the people in a well-ordered society, and us.
The parties behind the veil of ignorance were supposed to be mutually disinterested; they didn't respect one another. But the thought experiment about them was supposed to tell us something about respect. And my view is that we have to scrutinize the whole way in which things are set up in order to give an account of what respect requires.
Now, I should say that in a situation where maximin choice is rational, and I doubt that there are such situations, we would have to generalize the idea of utility, because such choice is going to satisfy most of the axioms of decision theory but violate some sort of Archimedean condition. So if we generalize the idea of utility, I think we still get something that's utilitarian in a way, but with a strange kind of utility on which you would never cross the street for a chocolate bar, because getting hit by a car is so bad. Next: is my attempt to explain normative judgments in terms of planning an empirical claim?
Well, I hadn't been thinking of it that way, and instead of trying to decide whether that would be a good test, let me say a bit about how I am thinking of it. We can think of the logic of decision in the way that decision theorists often do. There's a situation where various different alternatives are open.
I classify some of those alternatives as okay, some of them as not okay. Any choice can lead to a further situation where, again, there are a number of alternatives, and again I allow some and reject others. That's the valence part. Now, my talk of planning was maybe a bad term to use, because we could think of picking a strategy, that is, a contingency plan that picks a particular alternative in each situation one might be in, and that's not what I had in mind.
So I'll use the term "strategy" for something that just picks out one alternative in each decision situation, and I'm using "plan" for a sort of valence classification of the alternatives in each decision situation. Now, about planning in that sense: first, we can ask what planners in that sense are committed to, and my claim has been that planners in that sense are committed to something that turns out to look just like normative thought.
Now, John has some arguments that it doesn't work that way, which I'll address shortly. But let me say a bit about supererogation again. There are two kinds of oughts in play.
There's the moral ought, which is attached to guilt and resentment, and then there's what we might call the rational ought. Thinking about the thing to do is supposed to explain thinking about the rational ought; in my first book, I talked about what it makes sense to do. Okay.
I think the situation that Frances describes isn't one where somebody thinks that morally he ought to call the police, because to think that would be to think it morally wrong to jump in and rescue the person. This is somebody who thinks it morally permissible to call the police, but permissible also, and in fact morally admirable, to jump in and save the person.
And this is somebody who, because of the moral admirableness of it, or, thinking more or less narcissistically, because of its effectiveness in saving the child, thinks that what it really makes sense to do in the situation is jump in and save the child, but who is overcome by akrasia in this case. And then we'll have to talk about that later.
Okay, in fact, let's talk about akrasia next. I wish I could think myself into the example. All my experiences that are candidates for akrasia are matters of excessive sloth, not matters of excessive energy.
And this is somebody who can't resist getting up and brushing his teeth while thinking that he ought to stay in bed.
[01:20:41] ALAN GIBBARD:
Well, when I think about situations like that, I guess I'm not very clear about how to describe them. After all, it has to be the thought that at this very instant I ought to do such and such. Now, T. M. Scanlon, for example, has a vivid example of thinking that I've got to call the doctor.
I keep walking to the telephone and then walking away from it, because I dread the news I'm going to get. Well, it seems to me that what typically happens is I get to the telephone and think, "But shouldn't I get a drink first?" Then I walk back to the telephone, and then I have some other such thought.
It's the sort of process that Shakespeare illustrated in Hamlet. But I'm ambivalent enough about this that I'd certainly like to allow what lots of other people insist on: namely, that it's not only that I fail to hold steady my view of what I ought to do when the moment comes for actually executing it, but that even when I do hold my view steady, I may not do it. Okay, well, if we then say that planning is one thing and ought beliefs are another, there's something peculiar about that, because we have the kinds of concepts that I've described. Now, whether my description is coherent is, of course, debated, and I'll talk shortly about whether the logic works as I claim it does. But okay: we've got these planning concepts, in this somewhat special sense of planning. Okay.
These are concepts that anybody who thinks what to do, anyone who faces what amounts to the decision tree of life, is committed to, although they may not explicitly use these concepts. And then we've got this other set of normative concepts, and if you're perfectly rational, the two just coincide. So why do we need these normative concepts, ought and rationally permissible and the like? Well, just for the sake of being irrational: they can come apart in the case of being irrational.
Well, I'm somewhat drawn to think that irrationality is a matter of some sort of incoherence in one's thoughts. I'm drawn to explaining it either, as I said before, as lacking steadiness in one's convictions or plans, or, if I try to explain it not that way, then it seems to me that these are going to be situations where in a way I plan to do something, and in a way I don't plan to do it.
So in my first book, I thought of Cornelius, in Babar, standing on his balcony. The firemen have the net held out, and he thinks, "Well, what to do?" He can't figure out whether to jump, and then he decides to jump, and stands there.
Well, it seems to me that's a case where, in a way, he is planning right now to jump, except he finds his legs won't move, or something like that. So it seems to me those are cases where in a way he plans to jump right now, but part of him, the part that responds to terror, doesn't go through with the plan. Okay.
[01:26:08] ALAN GIBBARD:
So, is the logic actually different in the way that John's handout indicates? Well, I don't seem to have the handout; I guess I left it over here. But okay.
[01:26:33] JOHN BROOME:
No, um…
[01:26:37] ALAN GIBBARD:
I guess I need to know more about the way he's conceiving of believing that you ought, if you F, to G. But if we put it in terms of planning, there seem to be three quite distinct conditional states here. One is: if in fact P, then do Q.
So suppose that, in fact, I'm going to drink the second glass of wine. What's my plan for that circumstance, where the second glass of wine will give me a terrible hangover the next day, or something like that? Well, my plan for the situation where, in fact, I'm going to take the other glass of wine is not to take the other glass of wine. That's another way, of course, in which planning as I'm speaking of it differs from planning in the ordinary sense, because planning in the ordinary sense involves a belief that one's going to carry it out,
(coughs)
and, of course, I talk about planning for the case of being Caesar at the Rubicon, and certainly I don't think that my planning will have any effect on what Caesar does at the Rubicon. Now, for that sense of the conditional, the inference at the top of the handout is valid, I think. But there are a couple of other conditional constructions.
One is: if unavoidably P, then Q. If my planning takes the form of maximizing, then that would be maximizing under the constraint that P. The truth of P doesn't mean that I'm constrained by P: it's true that I'm not going to lie down on the floor in the next few minutes, but I'm not under the constraint of being unable to lie down on the floor.
Okay. So: if unavoidably P, then do Q. Suppose I'm not even capable of jumping into the fireman's net.
If unavoidably I don't jump, then what to do? So that's a second kind of conditional, and for that kind of conditional, the pattern at the top would not be valid, as far as I can see.
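[Editorial note: the contrast between the first two conditionals, "if in fact P, then do Q" versus "if unavoidably P, then do Q", can be sketched as two different operations on a toy decision problem. Everything here, the options, utilities, and function names, is an invented illustration, not anything from the lectures.]

```python
# Toy decision problem: feasible options and the utility of each.
OPTIONS = {"take_wine": 1, "decline_wine": 3, "lie_on_floor": 0}

def best(options):
    """A maximizing plan: pick the highest-utility feasible option."""
    return max(options, key=options.get)

# "If in fact P, then do Q": supposing I *will* in fact take the wine does
# not shrink the feasible set, so my plan for that circumstance can still
# be to decline -- the truth of P does not put me under a constraint.
plan_given_fact = best(OPTIONS)  # "decline_wine"

# "If unavoidably P, then do Q": here the supposition removes the
# alternatives, and I maximize under the constraint that P holds.
constrained = {k: u for k, u in OPTIONS.items() if k == "take_wine"}
plan_given_constraint = best(constrained)  # "take_wine"

print(plan_given_fact, plan_given_constraint)
```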
And then there's a third, very interesting one that Hare discusses in a wonderful paper called "Wanting: Some Pitfalls." I think he makes some mistakes there, but he has a marvelous set of devices for thinking about these things. It's something ungrammatical: if imperative do P, then imperative do Q.
I can think of means-end reasoning in that way. So: if leave the room right now and go back to where I'm staying, then follow the path up the stream, understanding both of those as imperatives. To evaluate that, I assume a goal, or some sort of policy, and then, in a kind of Ramsey test, taking on the policy offline as it were, I decide how to carry it out.
And that kind of conditional also doesn't make the pattern at the top valid. So I think there are lots of resources. I'm not quite sure which reading of the conditional ought in the second pattern is intended, but there are at least two alternatives for how to realize it in planning language.
Okay. So, I'm never quite sure what "cognitive" means, but it can mean explained in terms of truth, and of course I'm trying a different kind of explanation: my explanation is in terms of an agent's planning. And Michael has some intriguing thoughts about agency, which I won't be able to get very far with.
But let's start with coherence and why coherence matters. We both agree that there's one question whether a way of thinking is undesirable, and another question whether it has logical defects. There's nothing particularly undesirable about logical defects in my thinking about something that it doesn't matter having correct beliefs about anyway; but we're interested in logical defects.
Now, I think I made a blunder in my discussion of why coherence matters. I went pretty carelessly from the term "coherent" to the term "consistent," and I defined "coherent" as satisfying the standard axioms of decision theory. But my example of incoherence was something that's really more like inconsistency.
It would violate the formal constraints of decision theory, but in a special way: it's a contingency plan that can't be carried out in the contingencies to which it applies. Okay, so is what's wrong with incoherence a matter of not being interpretable? Well, not for consistency, I think. For consistency, the problem really is that there's no specific contingency plan that fits an inconsistent set of beliefs and plans.
Is that undesirable? Well, no, inconsistency isn't always undesirable. Often it doesn't matter.
But then there's the rest. What's the rest? In the Zeckhauser Russian-roulette example, people like Ned McClennen have given consistent policies for violating the standard constraints of decision theory.
So the standard constraints of decision theory that are violated aren't matters of consistency. And there, I think, interpretability has something to do with it, but I have a different picture of what interpretability has to do with it. Say I respond differently when the problem is what to do in a series of actions with certain end probabilities, and when it is what to do with a single action that has those same end probabilities. Well, it's consistent to respond to features other than the end probabilities.
But then, we might say, it's incoherent unless the process matters. So now we have to interpret my plan to see what I'm treating as mattering. You can represent the policy as consistent as long as you represent the process as mattering.
The constraints of decision theory by themselves don't make any difference: you can always fudge things to fit the constraints by saying, "Well, what he really cares about is how the game goes, not whether he ends up alive or dead, or rich or poor." But then it's just fantastic that those are the things to care about in that situation.
So we have to interpret the plan as having a rationale in order to assess the rationale, in order to see whether the rationale is credible. That's the role I picture interpretation as having. Not that I reject what Michael says.
I find it intriguing, but I don't know how to think it through in those terms, and so I'm eager to learn more. Okay: so the objection was that preferences aren't plans because they have different constraints.
Well, I'm trying to explain everything in terms of what I'm calling plans. And in the case of desires or preferences, first I'd take it that talk about desires is really talk about preferences: I desire something in that I prefer it to something else that's implicit, maybe just prefer it to the world never having existed.
And economists have systematic ways of talking about preferences. What do preferences do? Well, as what's feasible varies, your choices vary in a systematic way, given that you have certain preferences.
So we read off the preferences from the way the choices, or actually the valenced choices, vary as the feasible set varies. And that's a feature of a contingency plan that covers a lot of different feasible sets of outcomes. So take the case of wanting both to be scholarly and fabulously rich. If that's my desire, then presumably I prefer being scholarly and fabulously rich to being fabulously rich and non-scholarly, to being scholarly and not fabulously rich, to being neither.
Okay, well, if those are my preferences, then my contingency plan for a situation in which it's feasible to be both scholarly and rich is to choose it. Now, of course, actually we'd have to say everything else is equal, and I think what's actually going on here is that we think of preferences as somehow resulting from what we might call preference tendencies. I have a preference tendency to be rich; I have a preference tendency to be scholarly. Of course, the two might interact, but one way in which those preference tendencies might act is just by adding up straightforwardly.
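[Editorial note: the idea that preferences can be read off a contingency plan, as choices vary with the feasible set, can be sketched as follows. The ranking comes from the scholarly-and-rich example above; the code itself is only an illustrative construction.]

```python
# Outcomes from the example, best first: scholarly & rich, then rich,
# then scholarly, then neither. A "plan" here is just: from any feasible
# set of outcomes, choose the highest-ranked one available.
RANKING = ["scholarly+rich", "rich", "scholarly", "neither"]

def plan(feasible):
    """Contingency plan: pick the most-preferred feasible outcome."""
    for outcome in RANKING:
        if outcome in feasible:
            return outcome
    raise ValueError("no feasible outcome")

# Reading preferences back off the plan: a is revealed preferred to b
# if the plan picks a from the two-element feasible set {a, b}.
def revealed_prefers(a, b):
    return plan({a, b}) == a

print(plan({"scholarly", "rich", "neither"}))   # "rich"
print(revealed_prefers("rich", "scholarly"))    # True
```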
So I think that we can read preferences off of a contingency plan, and then preferences do have constraints. I would take it that they have the constraints that economists work out for utilities, even though, as John was saying yesterday, economists standardly define utilities in terms of preferences; I was trying to use the same formal apparatus to talk about prospective goodness for a person, or prospective goodness all told. Do plans aim at coordination?
Well, again, I'm intrigued, but I'm looking at plans as aiming at being carried out. Of course, I'm talking about contingency plans, so what contingency plans aim at is being carried out should the contingency arise. And I'm trying to explain the logic of planning starting with that.
Well, I'd love to say more about Frances's fascinating comments yesterday, but I think I'd better let the proceedings go on.
(applause)
[01:41:41] MODERATOR:
All right. We have an opportunity now for general discussion. Sam? Oh, could you also use the microphone, please?
[01:42:00] JOHN BROOME:
I just want to raise two extremely simple-minded questions, and I'm sure you've answered them, but I would find it helpful to have the answers restated. The first is related to a question that Jay raised on the first day.
It has to do with this idea of justified or warranted resentment. You say that if I judge that resentment is warranted, we should think of that as being like a kind of plan to feel resentment in the relevant circumstances. A kind of quasi-planning, we might call it.
[01:42:43] ALAN GIBBARD:
Mm-hmm.
[01:42:44] JOHN BROOME:
So, leaving aside the awkwardness of thinking about planning what to feel, and waiving worries about the possibility of emotions being responsive to the will, if I actually imagine myself sitting down and trying to plan when I shall feel resentment, I can imagine myself thinking along two lines. The first thought I'm likely to have is: I plan to feel resentment when it's warranted. And I might have some views about when it's warranted: when I've been wronged, when I've been treated unfairly, when someone has taken unfair advantage of me, something like that.
Another thought I might have is: well, I should feel less resentment than I do; I plan to feel less resentment than I do. I'm going to avoid feeling resentment even on many occasions when it would be warranted, when I have been treated unfairly or been wronged.
Maybe at the limit, I decide I'm never going to feel resentment, although of course I continue to think that people wrong each other, and that I can be wronged. But for various reasons, I think life will just be meaner if I'm always feeling resentment even when it's warranted, and I'm just going to give up resentment.
Now, I take it that on your view, neither of those is a kind of plan that would be appropriate somehow. The first seems circularly to involve the notions of warrant that you're supposed to be explaining, and the second involves a kind of divergence between the plan of when to feel resentment and the idea of when the moral judgments are in place.
But if I'm not allowed to think in those ways, and I try to imagine, "No, no, don't think those thoughts; just plan when to feel resentment, but don't rely on judgments of warrant," I feel completely at sea. I don't know how I would begin actually to go about making that plan, even if I thought I could put it into effect, that my will would somehow be magically responsive to my planning.
So maybe this is just a problem about me, but after all, these are, among other things, my judgments of warranted resentment that are being explained here. I would just like to know how to think about what I should be taking into account when I'm making these plans, or when I imagine myself making them.
The other question is from the second lecture, and in a way it's related to something Frances said. In this path from the hypothetical choice under uncertainty to something roughly like utilitarianism, you appeal a lot to the force of the criticism, the rebuke: you would have chosen it.
How much force that has, of course, depends on the circumstances under which you would have chosen it, and on what the thing is we're talking about, the object of the choice. It doesn't always have force; sometimes it would be out of place.
So the question is really this: could you say a little bit more about what you think the authority of the relevant kind of hypothetical choice is? Rawls faces the same problem: why should we care about choice in the original position?
And he gives a variety of answers, one of which appeals to reflective equilibrium and the fact that what comes out of the choice will match our intuitive judgments. I take it you don't want to say that, but I'm wondering if you could say a little more about why this particular, rather under-described, choice, in the way you use it here, should be able to bear the weight that you place on it.
It's also a much more general kind of choice than Rawls imagines. He's concerned with principles for just the basic structure, while this is all of morality; in this very sketchy way, I'm supposed to imagine myself choosing, in ignorance of who I am, all of the principles of morality. And it's not obvious to me how much should rest on that, or, if so, why. So I'd just like to hear a little more about that.
Thanks.
[01:47:28] ALAN GIBBARD:
Yeah. That is, we might think of this in a sort of Brandtian way, as choosing the moral code or moral ethos of my society. Well, as for the conditions I put on it, and maybe these aren't enough: one is that this only applies when the system is the going system.
So that's the way we do things; the established system seems right. And the condition for the choice of a system is that it has to be fair. That's what I was envisaging.
Now, what is it for it to be fair? Well, I'm just going to have to say something pretty circular: to regard it as fair is to regard it as having this sort of upshot for what moral attitudes are warranted, and that's a planning question.
Now, on warranted emotions. It seems to me I can think about what sorts of things to resent and what sorts of things not to resent, and I understand this not as requiring resenting things at will.
In fact, I regard it as talk about something different. So: is being expected to stand in line at a store, where there's a sudden onslaught of more customers than there normally are, which couldn't have been foreseen, something to resent?
Well, what else could they do? So I see myself as being in a kind of state that may not be effective in getting me not to resent it. But if it is effective in getting me not to resent standing in line, then it's not effective through my willing not to resent it.
I'm in a state that I'd express to myself by saying, "Well, resenting it doesn't make sense." And I think that any feeling like resentment does respond to such thoughts. Now, on thinking I should feel warranted resentment less often:
it seems to me that's a kind of view on which even warranted resentment isn't desirable. So I would be planning there to want to feel warranted resentment less. I think there are some things that are things to resent, but there are also things to want not to resent, because resentment is bad for the digestion, or bad for relations with other people.
Okay. I think that's what I've got to say.
[01:52:10] MODERATOR:
Maybe, maybe I’ll ask a question.
[01:52:11] JOHN BROOME:
Uh-huh. Um, actually, this is just a follow-up to one of Frances’s questions.
Mm-hmm. And I think it’s on that issue that she had more to say, so perhaps she’d like to follow up as well, or take advantage of this opportunity to elaborate a little on some of her own remarks. But I too was a little puzzled, in the second lecture, about how quickly we moved to the interpretation of the choice situation in Harsanyi’s terms, as a choice where we’re assuming an equal probability of occupying each of these positions.
The Rawlsians I talk to all concede, of course, that if you construe the choice situation in that way, then some kind of average utilitarianism comes out of it. So if we’re trying to imagine a route from broadly contractarian ideas to utilitarianism, it’s going to be a huge crux to justify how you set up and describe that initial choice situation: whether you assume it’s a choice under the condition that you take there to be an equal chance of occupying each of these positions, or rather, in Rawlsian terms, one where you don’t make that assumption. Now, in response to Frances’s question about this, you noted that one argument Rawls gives, having to do with not knowing which society you’re choosing for, doesn’t by itself rule out,
[01:53:56] MICHAEL BRATMAN:
mm-hmm,
[01:53:57] JOHN BROOME:
you know, uh,
(cough)
um, assuming an equal probability of occupying each of the positions in any of the possible societies that might come into existence. And I think that’s true. So maybe that argument just by itself in Rawls isn’t conclusive, but there are other…
So the friends of Rawls appeal to other considerations. I think Frances was adumbrating, or adverting to, some of them: considerations having to do with the distinctive nature of this choice, and the fact that the worst possible outcomes would be outcomes for the whole of a life. These are the kinds of considerations that make it, they would suppose, perhaps unreasonable to build the equal probability assumption into the initial choice situation. And maybe those considerations are also inconclusive, but it strikes me that this is really the crux, and unless we have more to say on one side of the issue or the other, there’s really not going to be anything like a linear route to utilitarianism.
In particular, it would be nice if something more could be said in favor of the equal probability assumption than just that some of the things Rawls says aren’t completely convincing. So I’m wondering if you could say something more about why that really is the default assumption. When I read Harsanyi, it looks like there’s a kind of economist’s assumption that this is the only way we have of dealing with rationality under conditions of uncertainty of this sort.
But it’s not obvious to me how compelling that consideration is. So I’d be curious to hear whether there are other things that can be said about that.
[01:55:38] ALAN GIBBARD:
Well, one question is just: are there convincing things on Rawls’s side? That it will affect me for my entire life doesn’t seem convincing.
If I get maimed in an earthquake, that will affect me for my entire life. If a child gets maimed by an earthquake, it affects the child for the child’s entire life. And I came into California.
We deal with low probabilities of terrible things all the time, and I don’t think a different kind of decision theory applies to that. Now, Rawls was writing against the background of a decision theory that tried to do without probabilities and stressed maximin. But Rawls doesn’t think that maximin is appropriate all the time.
He argues that it’s appropriate for the kind of choice that he’s making, but gives various conditions, and they amount to conditions under which maximin and expected utility maximization would coincide, as far as I can see. So I prefer the more general account of rational choice. Another reason is just going to be a matter of how plausible I find the standard constraints of decision theory as constraints on rational choice.
Of course, in this case there’s only one constraint that’s relevant to whether you maximin or do normal expected utility maximizing, namely something like an Archimedean constraint: that there’s no case of something so immensely important, and something else that matters but is so immensely unimportant, that a sufficiently small probability of the immensely important thing wouldn’t be outweighed by certainty of the less important thing. But crossing the street to get a chocolate bar just seems like the sort of case where it’s implausible to say that the Archimedean condition is violated.
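[The Archimedean constraint described here is essentially the continuity axiom of von Neumann–Morgenstern expected utility theory. A minimal formalization, in the editor’s notation rather than the speaker’s: for any outcomes ranked by preference,]

```latex
A \succ B \succ C
\;\Longrightarrow\;
\exists\, p \in (0,1) \;\text{such that}\; p\,A + (1-p)\,C \;\succ\; B .
```

[In the chocolate-bar case, A might be crossing safely and getting the bar, B staying put without it, and C being struck crossing the street: however terrible C is, a high enough chance of A mixed with a tiny chance of C is still preferred to B for certain, so no outcome lexically dominates all probability trade-offs.]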
[01:58:35] MODERATOR:
Barry Stroud. Oh, um, could… is this on this point, or?
[01:58:39] FRANCES KAMM:
Yeah, this is on this point. I won’t go into the other case that I was concerned with. Okay.
It’s just that when I think about the way… I mean, I think about Scanlon’s view about why Rawls eliminated probability; he mentions all the reasons. I’m thinking about the earlier article, “Contractualism and Utilitarianism,” parts of which I had an opportunity to reread, thanks to Sam Scheffler’s lending me the book again.
But the idea is that you want to get people… I mean, it’s true Rawls speaks about imagining yourself as someone who could wind up anywhere. You could wind up anywhere.
But the point is that he eliminates the probability, right? Because he wants it to be the case that no matter where you wind up, you could consent, you could agree, to this outcome. No matter where you wind up, you could consent to everything
[01:59:34] JOHN BROOME:
okay,
[01:59:34] FRANCES KAMM:
that’s going on. And it’s not that the reason you can consent to what’s going on, when you’re in any of these positions, is “Well, I would have chosen this from an original position.” That’s not, I think, the reason he gives for why you can consent to this.
And so the focus, I mean, it is result-driven. You said… when I said it, that you’re being forced by the veil of ignorance to take the position of each individual seriously and consider whether someone in that position could consent to this arrangement, okay?
And not just because he’s thinking, “Well, I would’ve maximized my expected utility, so how can I not consent,” right? The thing is, you said, well, behind the veil of ignorance you’re not supposed to be thinking about other people; you’re a self-interested individual and so forth.
But it’s so constructed, right, that the self-interested individual is going to give us the answer to the question: do we come up with a system that everybody, when they’re actually in those positions, can accept? So, well, I don’t know if that helps,
(laughter)
but it probably doesn’t help. I just wanted to make clearer that, accepting the idea that you are thinking of yourself behind the veil of ignorance, the point is: what is the function of this veil?
What is it trying to get you, even as an individual imagined as thinking only about yourself behind the veil of ignorance, to come out with? And it’s not supposed to be the answer that any position is acceptable given that I would have chosen it behind the veil of ignorance. It’s something quite different from that.
I’m sure that’s not helpful, but I’ve tried.
[02:01:22] JOHN BROOME:
Well, but, uh-
[02:01:23] FRANCES KAMM:
I’ve tried to be helpful.
[02:01:25] JOHN BROOME:
Okay. So the people behind the original position… I mean, behind the veil of ignorance… aren’t…
[02:01:30] FRANCES KAMM:
There’s someone behind the original position.
[02:01:32] JOHN BROOME:
Sorry, sorry.
(laughter)
Uh, the, uh, the, the people-
[02:01:37] MICHAEL BRATMAN:
there’s strings.
[02:01:38] JOHN BROOME:
The people, the people. Okay. So the people behind the veil of ignorance aren’t thinking, “Well, what will I accept if I’m in this circumstance?”
And they’re assuming that they will accept whatever’s chosen as legitimate, and that this will motivate them in the way that regarding a system as legitimate motivates people. So, as you said, the real point is that the original position thought experiment is supposed to tell us something about social morality, the justice of basic structures, by running a thought experiment about people who aren’t concerned with such things. Now, what will people accept?
Well, we could give that an empirical interpretation: what sorts of social orders are such that, if those social orders are instituted, people will accept them as legitimate and not work to undermine them? That’s a very important question, and I think it has complex and important bearings on questions of social justice.
But take one thing that we observe: if people are in unjustly privileged positions and others try to take away those positions, they resist furiously. And that’s part, I guess, of a general tendency to especially value, and regard as legitimate, what you already frame as yours. So people with, say, slave own…
Well, the American Civil War is a fine example. Okay, so what would the slave owners accept? Nothing just, certainly.
So presumably the better word was “acceptable.” The question then is something like: what ought I to accept, if it’s the going social order and I’m not in one of the best positions? Well, that’s a very important question in social philosophy.
Of course, Rawls himself didn’t really use a maximin, right? He worried about what at one point he was calling “basket cases”: people with injuries such that enormous resources would be needed to make their lives slightly better. And he decided that he needed something like the prospects of the people in the lower half of the starting positions, or something like that. Well, I approve of mitigating it like that, but I still find it curious that the argument he actually seems to depend on there just depends on circumstances being such that this coincides with utilitarianism. And utilitarianism, I think, is a pretty egalitarian view when you put it together with facts about how the way one’s life goes depends on one’s income, and about the possibilities of alternative arrangements of income.
Sarah Stroud?
[02:06:24] SARAH STROUD:
Well, I have a question very far from this sort of detail, a huge question about your project in general. It picks up on some of the things John Broome was asking about.
It’s good for us to have plans, and we look at ourselves in action, and we see that the way to understand our normative beliefs is to start by attributing to us non-normative beliefs and then explain in some way how it can come to be that we have normative beliefs, or how they earn the right to be thought of as cognitive. And it’s the idea of that explanation, explaining the cognitivity of those beliefs, that I’m puzzling about, and I don’t know that what I’ve got to ask will help.
But suppose we look at ourselves in the world, and we have these attitudes, and we have what you are happy to start by accepting as beliefs about the natural world, cognitive attitudes.
[02:07:42] JOHN BROOME:
But suppose we try to explain them in terms of some kind of non-cognitive attitude, something like planning to behave as if it were the case that P in the natural world. Now, I don’t know, first, whether that, or something like it, could be described as a non-cognitive attitude; but we could put it together with the other plans. Now, if we got a full account of human goings-on in that way, would we feel that we had explained the presence of cognitive beliefs about the natural world by starting with something non-cognitive, which gives us the right to think of ourselves as eventually having cognitive beliefs about the world?
And if not, what do you think is the disanalogy,
[02:08:40] SARAH STROUD:
granting what John is objecting to, that you could actually identify the non-cognitive attitudes first. But let’s waive that. Could you say something about the lack of parallel, or whether there is a parallel?
[02:08:53] ALAN GIBBARD:
Well, there might possibly be a parallel. Bob Brandom’s project has lots of strains, but a central strain is, in a way, kind of parallel. Though I think, I mean, he starts with the social, whereas I’m…
I’m starting with thinking about the individual, and thinking of the social as interactions of individuals. And he’s not always consistently what I’d call an expressivist, but he sounds more expressivist than anything else to me. So we’ve got the idea there of explaining logical relations in terms of plans, or in terms of oughts.
So what is it to say that “snow is white” entails “something is white”? Well, it’s that if you ought to think that snow is white, then you ought to think that something is white. Or, to put that in planning terms: if, imperative, you plan to think that snow is white, then, imperative, plan to think that something is white.
It has to have the imperative in the antecedent as well as in the consequent, because we don’t have: if you do think that Michael is a unicorn, then you ought to think that something is a unicorn. If you do think that Michael is a unicorn, you ought to change your mind. Okay.
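[The scope point here can be put in simple deontic notation; the formalization is the editor’s, not the speaker’s, with O for “ought” and Bel for belief. When p entails q, the valid schema is the one with the ought, or the plan-imperative, governing both antecedent and consequent:]

```latex
O\,\mathrm{Bel}(p) \rightarrow O\,\mathrm{Bel}(q),
\qquad \text{not} \qquad
\mathrm{Bel}(p) \rightarrow O\,\mathrm{Bel}(q).
```

[On the second, narrow-scope reading, actually believing that Michael is a unicorn would oblige you to believe that something is a unicorn, whereas, as Gibbard says, it obliges you to change your mind.]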
So I think possibly you could explain the factual beliefs in that sort of way. Now, if you did, then, as Brandom says, you’d have norms all the way down, and…
well, does this then explain? Well, of course, we have other modes of explanation.
I can talk about beliefs just by having them, or by simulating them in other people. And if I talk about normative beliefs in that way, I come out sounding like G.E. Moore, I think; the straight explanations that fit the phenomena seem to fit the non-naturalistic pattern. But thinking about what sort of frame of mind it is to have a normative belief, and seeing that anybody who did valenced planning for life would have such states of mind, seems to explain it in another way.
[02:12:27] JOHN BROOME:
Well, um, but what you just described sounded like planning to have certain cognitive attitudes, planning to think that P and so on. I was trying to get a non-cognitive attitude, like planning to behave in a certain way.
[02:12:45] ALAN GIBBARD:
Mm-hmm.
[02:12:46] JOHN BROOME:
And then all of one’s behavior could be accounted for, on the parallel with your sort of explanation, without attributing even a plan to have a cognitive attitude, but just a plan to have a non-cognitive attitude (Mm-hmm), which would then make sense of everything; and then we would have earned the right to introduce the idea of our having cognitive attitudes. Not just that we’re planning to have them, as you described with Brandom, but really
that now they suddenly
(laughter)
show up. Now, my feeling was that it’s hard to distinguish that position from saying, “Well, you see, the way to understand what we’ve been doing all along is holding cognitive attitudes.” There’s no difference, when we elaborate
(laughter)
the whole thing, between believing that P and carrying out the plan (Mm-hmm) to behave as if P were true.
[02:13:39] ALAN GIBBARD:
And so that’s where I felt we would lose any sense of explaining the second in terms of the first.
(coughing)
Uh-huh. It would just turn out to be another way of describing it, as I think John said,
(laughter)
[02:13:51] JOHN BROOME:
Mm-hmm
[02:13:52] ALAN GIBBARD:
with respect to the-
[02:13:52] JOHN BROOME:
(mm-hmm)
[02:13:53] ALAN GIBBARD:
um, your idea is supposed to be that the cognitive normative attitudes are just
[02:14:01] JOHN BROOME:
Mm-hmm.
[02:14:02] ALAN GIBBARD:
non-cognitive attitudes in a different guise. But, um-
[02:14:06] JOHN BROOME:
Mm-hmm.
[02:14:06] ALAN GIBBARD:
the guise that they have in their more pristine state-
[02:14:10] JOHN BROOME:
Uh-huh.
[02:14:10] ALAN GIBBARD:
is supposed to explain their having,
[02:14:12] JOHN BROOME:
this later guise. And I wondered whether you thought that would be true if we started with a non-cognitive attitude with respect to, as it were, non-normative belief.
[02:14:22] ALAN GIBBARD:
Well, when I wrote my first book, I said it was a non-cognitivist theory, and then people kept stamping their feet and telling me that they were cognitivists, and I tried to figure out what the difference was between what they believed and what I believed. It was hard to put my finger on it, and the things they said made them cognitivists seemed to be things that I was open to believing. So I began to wonder what the term meant, and I began to wonder what psychologists meant by the term as well.
And I asked one distinguished psychologist, who said, “Well, I think when I say that something is cognitive, I mean that it’s complex.” Well, I don’t think that’s what we philosophers mean, but I don’t think it’s straightforward what we philosophers mean either. One thing we could mean is that a state is cognitive if it’s to be explained on the pattern: say what the subject matter is, and then just say it’s a belief in that subject matter.
But of course, we can conduct that kind of explanation of normative beliefs. Start with the state of affairs that one ought to try to respond to questions. Then there’s the belief that one ought to try to respond to questions, and we’ve given a…
Maybe we’ve given a cognitivist explanation. The question is what kind of further explanation to give, and I’m not very clear what its being a cognitivist explanation amounts to. But note that decision theory, in its standard ways of doing things, is supposed to explain both what David Lewis calls credences, that is, degrees of credence, degrees of belief, subjective probabilities, and also utilities, preferences. And standardly it’s supposed to be an empirical theory; but the empirical theory doesn’t fit the phenomena very well, and we might better take it as a normative theory.
So maybe you can think of standard decision theory, turned into a normative theory, as an explanation of cognitive states in normative terms.
[02:17:08] MODERATOR:
Okay. I think at this point it’s fitting that we should thank our Tanner Lecturer and our distinguished commentators for a very stimulating set of events, and continue our discussions over refreshments.
[02:17:20] ALAN GIBBARD:
And we should thank the organizers very much, the Tanner Committee and the Tanner Foundation.
[02:17:30] JOHN BROOME:
Let’s thank everyone. And William.
(applause)
Sure, yes. I was expecting it.
(laughter)