Wednesday, August 24, 2011

The power of moral non-realist subjectivism

Over on Facebook, Massimo Pigliucci links to an NYT article describing a conversion to non-realism. As I usually recommend Dr. Pigliucci as an expert on ethics, I was perturbed to find that he did not understand how moral non-realists could conduct "moral argument." After all, he argues, in a non-realist account ethical statements are akin to trivial preferences like "milk chocolate versus dark chocolate."

This is profoundly misguided, and I was surprised to hear it from Dr. Pigliucci. There is a trivial sense in which it is correct, but there is a very non-trivial sense in which it is deeply confused, a confusion which often leads people to say things like, "but non-realists cannot conduct sensible moral argument."

To shoulder the responsibilities which come with idiosyncrasy, I will give a formal account of how moral argumentation `works' in a non-realist conception. I am convinced that moral non-realism can do almost everything that realists want their metaethics to do, barring the over-confident usage of a few unimportant phrases. I think that, properly understood, moral realism is paltry in light of a developed non-realism. Further, I want to show that moral realisms are in practice a special case of ethical subjectivism.

So how can one capture non-realism in an analytically useful way? For this, we turn to Bayesianism. Recalling that Bayesians assign subjective probabilities to subsets of a sample space, we simply omit `ethical propositions' from the sample space.

Cool. We can still ascribe probabilities to things like `x causes suffering', `I want to do x', and `doing x would conform to principles protecting free expression', but unless statements like `I ought to do x' wholly reduce to `factual' statements, they are not ascribed probabilities. Many `realists' attempt to claim truth-values for ethical propositions by claiming that what we really `mean' by `I ought to do x' reduces to such statements. In this case, they accomplish nothing but a loss of generality. As we will see later, their position can be described using subjective utilities.
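The restriction above can be pictured with a toy credence table: `factual' statements get probabilities, while irreducible `ought' statements simply have no entry in the sample space. (The propositions and numbers here are illustrative assumptions, not anything argued for in the text.)

```python
# A toy sketch: a Bayesian credence table that assigns probabilities to
# `factual' statements only. Irreducible `ought' statements have no entry,
# i.e. they are omitted from the sample space. All values are illustrative.

credences = {
    "x causes suffering": 0.9,
    "I want to do x": 0.2,
    "doing x conforms to principles protecting free expression": 0.7,
    # "I ought to do x" is deliberately absent: no probability is ascribed.
}

def credence(proposition):
    """Return a probability if the proposition is in the sample space, else None."""
    return credences.get(proposition)

print(credence("x causes suffering"))  # 0.9
print(credence("I ought to do x"))     # None
```

Nothing stops a realist-reductionist from adding the `ought' entry back, but only by identifying it with some conjunction of the factual entries; that is the "loss of generality" complained of above.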

So let's say that you are a coherent, well-calibrated Bayesian with probability distribution p who wants to make a choice between two actions, X and ~X. Better yet, let's say you act, and let's find out what that means. To capture it, we can say that you prefer X to ~X by choosing X. We can capture this simple preference with a subjective utility function u_{p,C}, where p is your distribution and C is the choice `X or ~X'. Such binary actions allow for a trivial ordering relation u(X) >= u(~X) whenever you do X. One can easily see that for a finite set of mutually exclusive choices, a similar ordering can capture the selection of one of the options. So we can assume that this function maps actions into some one-dimensional set of numbers, say the reals. So subjective utilities are functions from actions into the real numbers. If you like, you could let utilities map into the extended reals (i.e. take infinite values), but such functions risk collapsing into triviality unless you are careful to avoid the paradoxes that result.
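As a minimal sketch of this setup: a subjective utility function over the binary choice C, with the action read off from the ordering u(X) >= u(~X). The particular numbers are hypothetical stand-ins.

```python
# A minimal sketch of a subjective utility function over a binary choice
# C = {X, not-X}. The utility values are illustrative assumptions; the point
# is only that choosing X realizes the ordering u(X) >= u(not-X).

def choose(u):
    """Return the action with the higher subjective utility.

    u maps each action (a string) to a real number; ties go to the
    first-listed action, which suffices for the weak ordering u(X) >= u(~X).
    """
    return max(u, key=u.get)

# One agent's (hypothetical) utilities for the two actions:
u = {"X": 1.0, "not-X": 0.25}

print(choose(u))  # "X", since u(X) >= u(not-X)
```

The same `choose` works unchanged for any finite set of mutually exclusive options, which is all the generalization the paragraph above needs.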

We notice immediately that we are now firmly in the domain of Bayesian decision theory. This is good news; we have a lot of accepted formal tools to use for our moral thinking. Better yet, we have kept things general enough to account for actions as demanded by any other moral system. To see how, let U_p be the set of all possible utility functions u_C = (u(X), u(~X)), where u(X) and u(~X) are real numbers. This allows for any preference to any degree. (If moral absolutism-by-degree (say a Kantian imperative) is in play, reintroduce the extended reals.) So the demands of any moral system as applied to a rational agent faced with a choice can be captured. If we have a very clearly defined system S that says something like "X is permissible iff ...", then utility functions can be sorted into two blocks: the u's consistent with S in one, and the u's inconsistent with S in another. More generally, one can define subsets of utility functions consistent with a moral theory. Better yet, we may introduce fuzziness by probabilizing relationships like "u conforms to the principles of free expression", though as far as I can tell uncertainty with regard to conformity with a principle may be subsumed in one's utility function. (One may still want to do this for judging other utility functions.)
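The sorting of utility functions into S-consistent and S-inconsistent blocks can be sketched directly. Both the example system S ("never prefer the harmful action") and the candidate utility functions below are hypothetical, chosen only to show the partition.

```python
# A sketch of sorting utility functions by consistency with a moral system S.
# The predicate and the sample functions are illustrative assumptions.

def consistent_with_S(u):
    """An example system S: never prefer the action labeled 'harm'."""
    return u["harm"] <= u["refrain"]

# A small population of hypothetical utility functions over the same choice:
candidates = [
    {"harm": -5.0, "refrain": 2.0},   # consistent with S
    {"harm": 3.0, "refrain": 1.0},    # inconsistent with S
    {"harm": 0.0, "refrain": 0.0},    # consistent (weakly) with S
]

conforming = [u for u in candidates if consistent_with_S(u)]
violating = [u for u in candidates if not consistent_with_S(u)]

print(len(conforming), len(violating))  # 2 1
```

Replacing the boolean predicate with a probability ("how likely is it that u conforms to S?") gives the fuzzier version mentioned above.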

Cool. So non-realists can even talk about conformity to abstract principles in a principled way without making charges of irrationality concerning preferences. So how does moral argument look? As factual information is introduced, the constraints tighten on which utility functions could avoid committing `horrendous' decisions. So a Bayesian who wants to murder for fun could not ascribe a high value to life, or to non-violence, or to the autonomy of humans, or to the prevention of suffering, or...

He may remain perfectly consistent in adopting such a function. But such functions are very rare. Moral argument works so long as we expect humans to have common concerns and interests. There are limits, but it can be done. Further, I think we can expect a non-realist approach to be just as convincing to those who actually do want to murder for fun as a realist approach.

Which is not very. All the realist gains is the satisfaction of calling such a person irrational, instead of a monster or thug or authoritarian.

Chomsky often invokes `elementary principles', like avoidance of hypocrisy, when beginning a speech. He invokes international law and other `moral truisms'. And yet he doesn't strike me as a realist, and if he were to simply demonstrate that US foreign policy and its media defense is hypocritical, I do not think it would be necessary for him to say "and hypocrisy is something we should avoid." Do we need to insist on truth-values for statements like "we should not tolerate a racist, censorious, unrepresentative, and violent state"?

Our preferences are not drawn out of a hat. Normal people can ascribe meaning to principles and commit to them. That's why moral argument works, even if irreducible `should' statements are not truth-valued.

Now for the inevitable objections. As I put it on Facebook:
People license themselves to every imaginable silliness whenever terms like `moral non-realist' or `ethical subjectivist' show up; and so I, vainly fending off frustration and fury, must patiently answer such profound arguments as "but moral non-realism means morals aren't truth-valued", "but subjective utilities are subjective", and "but people who want to murder for fun might not care about the consequences", all while having to defend newly controversial propositions like, "peoples' preferences do not tend to be randomized".
Objections to non-realism, or moral skepticism (see esp. error theory), are almost always confusions. I am not a nihilist; I think we can and do form meaningful preferences. Or, as seen in the SEP article, there are noises about `presumptions against moral skepticism'. I think that my account, since it can generalize other systems, should not fall victim to some prior prejudice. (And in general, I don't like noise about burdens of proof.)

So, let's talk about what subjective utility functions we would like people to have, which have nice properties, and other swell things. Or not. But don't tell me I cannot have that conversation.

What I have not been saying: I have not insisted that all reasonable people be moral non-realists. I'm just pointing out that you can do a lot with a non-realist account. I of course have problems with realist accounts, but that's a different discussion.

6 comments:

  1. "This is good news; we have a lot of accepted formal tools to use for our moral thinking."

    But the question is whether the thinking in question can in the first place be sensibly understood AS moral thinking at all on a subjectivist conception. I don't see how the appeal to all the Bayesian machinery actually gets at this question. The charge against subjectivism about ethics is basically that it fails to make adequate sense of moral thought, not that we couldn't talk about preferences, or rules, or evaluate proposed courses of action on such dimensions. The question is whether, in talking or arguing about preferences, or rules, or consequences, we could, on the subjectivist conception, be understood to be engaging in an essentially *moral* discourse at all. The motivation for this question is the sense that a subjectivist conception is somehow not true to the relevant data of ethical experience and thought.

    Admittedly, I'm not actually making an anti-subjectivist argument here. I'm just trying to suggest that, by showing that certain ways of talking and disagreeing are possible under a subjectivist conception, you don't show that those ways of talking are essentially *moral* ways of talking. I don't think it is illegitimate for the non-subjectivist to say, in answer:

    "Yes, of course people can talk and argue in the ways you say on a subjectivist conception; but you still haven't accounted for the relevant data, because the relevant data isn't "ways of talking" but rather ways of thinking in which people seem, at least, to be committed to an essentially objective morality. You must either show that the relevant supposed data in question is an illusion or confusion or mistake of some sort, or show that subjectivism can account for that data (somehow), or else admit that your theory fails to explain the data adequately."

    In other words, subjectivists undoubtedly can argue; the question is whether that argument could count as essentially *moral* argument.

  2. The argument I was primarily addressing was that of Pigliucci and many others, that in a non-realist, subjectivist conception argument about `morally important' issues is akin to arguing about preferring milk chocolate to dark chocolate.

    What you seem to be asking for, "essentially *moral* argument", I do not understand as necessary, as the ethical `data' in question are to me not truth-valued propositions. I was not establishing non-realism in this post, as I noted.

    To your non-subjectivist, I would ask what necessary `ways of thinking' are omitted by the subjectivist account that are captured by a non-subjectivist account. I have little doubt that people often think of their moral statements in absolute/objective terms, but I have yet to see an account which makes that legitimate. If the non-subjectivist and I agree on all facts, and yet we act or would act differently, is it correct to say that one of us may be unreasonable by that virtue, and not the other?

    For example, suppose that an informed person agrees that by `good', people usually mean the greatest happiness principle. Now suppose this person is a mother of two and prizes the happiness of her children over the general happiness, and does so knowingly. Is she irrational?

    Now suppose she believes in a theistic God, and knows that by committing a certain act she violates God's will. Yet with full knowledge, she does so anyway. Is she irrational?

    I could run through other examples and other objections specific to other accounts. But the point is that while people in everyday life think of their moral statements as truth-valued and those who disagree with their strongly-felt opinions as irrational, the terrifying reality is that it makes perfect sense to think of two rational, fully knowledgeable people who act completely differently in virtually any circumstance. The quantity which captures this is the subjective utility; no objective analogue can.

    Yes, people often think of their ethical statements like they think of their factual statements. But I see nothing that can be inferred from this, once we start looking at how people disagree.

  3. But I take it that whether your answer to Pigliucci succeeds depends on whether you have in fact shown, pace Pigliucci's view, that a subjectivist conception can make proper sense of the relevant instances of moral disagreement. And whether you have made such "proper sense" depends in part on whether you have adequately explained the data needing to be explained. And the data needing to be explained is, ultimately, just some aspect of what people think. (I take it this was Pigliucci's deeper point, but maybe not.)

    And then we should say: if people think that morality has objectivity features, and if you have a theory of morality which denies that morality has objectivity features, then you have a theory of morality which does a poor job of capturing and making sensible the relevant data which it ought to have captured and made sense of.

    As for whether we should or shouldn't say that evil people are irrational, I actually think that is a really hard question. In the ordinary run of things, I don't think it's a stretch to say of most "evil" people that they are acting inconsistently with values or considerations which they themselves should endorse on the basis of other values or considerations which they already do endorse. And we might mean by "irrational" something like an inconsistency of that sort. But there may be another sense of irrational, in which we don't mean a deep inconsistency of that kind but something more like "fails to use a means tending to effect a given end or goal." In *that* sense, I think it is much harder to say that "evil" acts are irrational -- they do usually tend to advance some given end or goal of the agent. But I don't know what I ought to say as the final word here; it is (or seems like it might be) a tough question.

  4. "But I take it that whether your answer to Pigliucci succeeds depends on whether you have in fact shown, pace Pigliucci's view, that a subjectivist conception can make proper sense of the relevant instances of moral disagreement. And whether you have made such "proper sense" depends in part on whether you have adequately explained the data needing to be explained. And the data needing to be explained is, ultimately, just some aspect of what people think. (I take it this was Pigliucci's deeper point, but maybe not.)"

    I do not think it was Pigliucci's point, since he was claiming something much stronger than this. But I'm happy to shoulder the burden of explaining myself more clearly.

    "And then we should say: if people think that morality has objectivity features, and if you have a theory of morality which denies that morality has objectivity features, then you have a theory of morality which does a poor job of capturing and making sensible the relevant data which it ought to have captured and made sense of."

    Well I think that morality does have `objective features'. Like Hume, I do not end by saying that ethics is a subjective business. Instead, we notice certain commonalities in moral judgments, and attempt to explain them. What utilitarians, ethical naturalists, and others often argue is that by `goodness', people really mean the general happiness, or their judgments, when informed, tend to fall within a conditional system of oughts. I think that one can go pretty far with such accounts, so if I do not think that these approaches are correct and mine is, I must explain where they succeed and where they fail and why.

    I've given one general reason why I do not think that these accounts reflect objective truths about morality, since it appears that reason and the ultimate governance of preference and action are separate. Other objections are not hard to find. I'll return to this in a moment.

    "But there may be another sense of irrational, in which we don't mean a deep inconsistency of that kind but something more like "fails to use a means tending to effect a given end or goal.""

    Yes, if we posit that there are objective goals, ethical naturalism takes over as an objective account. Yet if those goals are not shared...

    I continue this point by returning to the earlier one, since both illustrate precisely why I think the decision-theoretic account so powerful. One may analyze in a precise way - insofar as an ethical theory is precise - what goals it affects. Given a reasonable probability distribution, one may judge what is being preferred and to what extent using subjective utilities. So if we want to describe the `successes' and `failures' of other accounts, all we need to do is describe the utility functions which satisfy them. Thought experiments like trolleology and organ lotteries are designed precisely for this. Except without decision theory, one cannot keep track of desired properties, e.g. consistency in employed principles.

    I plan on developing my views in more detail in the near future, once I've finished a few books related to the topic. Thanks for the comments, though :D

  5. "Yes, if we posit that there are objective goals, ethical naturalism takes over as an objective account. Yet if those goals are not shared..."

    I may be misunderstanding your point, but surely you don't think that whether "objective goals" (can we just as well say: "desire-independent moral reason for action"?) exist depends on whether some such goal is (inter)subjectively shared? The questions seem obviously orthogonal to me; at least initially, it seems that I can make very good sense of a goal being mandatory although it's not shared. (We can say, e.g., slavery was always wrong even if it's only been recognized as such in the last 500 years or so.)

    But, maybe this is just part of what is at stake in a disagreement between objective v. nonobjective theories of ethics.

    I'll look forward to your future posts.

  6. "I may be misunderstanding your point, but surely you don't think that whether "objective goals" (can we just as well say: "desire-independent moral reason for action"?) exist depends on whether some such goal is (inter)subjectively shared?"

    Not at all, but how consensus concerning those differing goals is to be reached by rational agents very much does. For example, try to think of what it means for one goal to be `false' and another to be `true.' Beyond noting what goals are consistent with principles and accounts, I have trouble going further.
