Bayesianism is usually defended by *Dutch book arguments*. However, these arguments are not based on empirical observation or on deduction from widely-shared premises; what, then, might they be, exactly?

Since the character of a Dutch book is not empirical, it is often - and I think correctly - admitted that Bayesian rationality is not binding on all `rational' persons in the normal sense of the term.^{1}

Argumentation in these matters is largely limited to finding intuitive and pragmatic desiderata and comparing differing accounts under a plausible interpretation.

*Wager interpretations*, in which probabilities represent the value at which one is willing to bet for/against the occurrence of a proposition, are very common. And it is on such an interpretation, conjoined with an idea of what a `wise bettor' should do, that Dutch books are based. Probabilism, conditionalization, and probability kinematics all have Dutch book defenses. In literal terms, you make sure that your credences are probabilities to avoid making stupid bets. Agents who obey the probability norms are said to be *coherent*, and this is the minimal standard of Bayesian rationality.^{3}

Before going further, I'll give my own amateur opinion - though one which, if I judge correctly, many of my superiors share - on argumentation about Bayesianism. To me, the key defense of Bayesianism is Carnapian: it is a very powerful and readily comprehensible tool. Even if I thought Dutch books complete and miserable failures, I would still be writing these posts. To me, this post is inessential, and I would be entirely uninjured were readers to skip this information. But Dutch books are really interesting, and if you read further, you will run into them. So I should introduce them regardless of whether I deem them inessential, but I am also happy to introduce them for their own sake.^{2}

I think my intuitions about Bayesianism are well-founded. It seems strange not to have a minimal credence, i.e. 0, which applies to necessary falsehoods. I do not think it is sensible to say "I am less committed to this uncertain proposition than I am to a necessary falsehood." Similarly, I think it nonsensical to lack a maximum value of commitment: try to imagine committing to an uncertain proposition more strongly than you do to a tautology.

That credence is one-dimensional and continuous is more difficult; the latter is mostly an item of computational convenience, just as continuity in physics is. As Carnap mentions - and I'll trust his knowledge of the physics - our measurements of the world are entirely compatible with strictly rational-valued spatial coordinates, but the loss of continuity would present absurd and unnecessary difficulties in calculation and theory.

For one-dimensionality, I think one should always be able to say of two propositions that their credences may be compared, i.e. the ordering should be complete, as it is on the real numbers. So vector-valued credences should have a real norm. If it is e.g. the max norm, then multi-dimensionality adds no effect. If it is e.g. the standard Euclidean norm on 2-dimensional real vectors, then tautologies would have either (1) different credences - the points would lie somewhere on the unit circle in the first quadrant - or (2) the same credence, e.g. (1/√2, 1/√2). In this case, credences lie either on the line between (0, 0) and (1/√2, 1/√2), in which case we are back to one-dimensionality, or on a different line, in which case uncertain propositions may be said to be `more certain in one respect' than a tautology. So in cases (1) and (2), we are forced to make distinctions in the credence assigned to different tautologies. I do not see any value in such an approach, nor any immediate need for it.
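Since this is just vector arithmetic, the point about norms can be checked numerically. A quick sketch (my own construction, with illustrative coordinates): under the Euclidean norm, many distinct 2-dimensional vectors are `maximally committed', while under the max norm a smaller second coordinate contributes nothing.

```python
import math

def euclid(v):
    # Standard Euclidean norm on a 2-dimensional vector
    return math.hypot(*v)

def maxnorm(v):
    # Max (Chebyshev) norm: the largest coordinate in absolute value
    return max(abs(x) for x in v)

# Three candidate "tautology credences", all of Euclidean norm 1:
candidates = [(1.0, 0.0), (0.0, 1.0), (1 / math.sqrt(2), 1 / math.sqrt(2))]
print([round(euclid(v), 6) for v in candidates])

# Under the max norm, the second coordinate of (1.0, 0.3) adds nothing:
print(maxnorm((1.0, 0.3)) == maxnorm((1.0, 0.0)))  # True
```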

A case of finite additivity - the complement rule - is also intuitive: try saying that you believe a proposition with 50% confidence while believing its negation with a confidence other than 50%. But additivity in general is more difficult to defend with intuitions and pragmatism, and it does the computational work in probability theory. Why should the disjunction of contradictory hypotheses have additive credence?

There is an intuitive, geometric way of looking at this. Picture a normalized Venn diagram, say a unit square or circle. The region represented by two disjoint propositions is then disconnected within that diagram: (at least) two sub-regions are non-overlapping. The total area is then the sum of the areas of the disjoint sub-regions. Similarly, when two regions overlap, the `correct' formula for the total area is given by the inclusion/exclusion principle. Additivity also allows us to `slice up' such a diagram (a partition) and not worry about a failure of normalization.
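The Venn-diagram picture can be checked numerically. A minimal sketch (the setup is my own): take the unit square, let A be its left half and B its bottom half, and estimate areas by sampling random points. Inclusion/exclusion then recovers the true area of the union, 3/4.

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible
N = 100_000
in_A = in_B = in_both = 0
for _ in range(N):
    x, y = random.random(), random.random()
    a, b = x < 0.5, y < 0.5      # A: left half; B: bottom half
    in_A += a
    in_B += b
    in_both += a and b

p_A, p_B, p_both = in_A / N, in_B / N, in_both / N
# Inclusion/exclusion: area(A or B) = area(A) + area(B) - area(A and B)
print(round(p_A + p_B - p_both, 2))  # close to 0.75, the true union area
```

For two *disjoint* regions the overlap term is zero and the formula reduces to plain additivity, which is the intuition above.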

I think that such considerations and the successful application of probability theory in the sciences are sufficient for a cautious acceptance and application of Bayesianism. I am not saying that it is the only appropriate method for credences. I do not think for example that one should always avoid vague probability; I will discuss how Bayesians can deal with vagueness later. But up to assigning particular values, I have found Bayesian credence more than satisfactory. That all said, I can talk about Dutch books.

I'll be lazy and borrow from the SEP supplement:

> The Ramsey/de Finetti argument can be illustrated by an example. Suppose that agent A's degrees of belief in S and ~S (written db(S) and db(~S)) are each .51, and, thus, that their sum is 1.02 (greater than one). On the behavioral interpretation of degrees of belief introduced above, A would be willing to pay db(S) × $1 for a unit wager on S and db(~S) × $1 for a unit wager on ~S. If a bookie B sells both wagers to A for a total of $1.02, the combination would be a synchronic Dutch Book -- synchronic because the wagers could both be entered into at the same time, and a Dutch Book because A would have paid $1.02 on a combination of wagers guaranteed to pay exactly $1. Thus, A would have a guaranteed net loss of $.02.
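The arithmetic of the SEP example is easy to verify. A sketch of the bookkeeping (the numbers are the SEP's; the code is my own):

```python
# db(S) = db(~S) = 0.51. A "unit wager" on X costs db(X) * $1 and
# pays $1 if X is true, $0 otherwise. The agent buys both wagers.
db_S, db_not_S = 0.51, 0.51
cost = db_S + db_not_S                   # agent pays $1.02 up front
nets = []
for S_true in (True, False):
    # Exactly one of the two wagers pays off, whatever S's truth value
    payoff = (1.0 if S_true else 0.0) + (0.0 if S_true else 1.0)
    nets.append(payoff - cost)
    print(f"S={S_true}: net = {payoff - cost:+.2f}")
# Either way the wagers pay exactly $1, so the agent loses $0.02.
```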

As mentioned before, the colloquial Dutch book argument is `make probabilistic bets or risk a guaranteed loss', but I agree with the SEP article that Dutch book arguments are claims about *pragmatic self-defeat* in light of the objections and qualifications. In the context of science, we want to avoid distributing our credences in a way that could ensure inaccuracy. (I assume that nature is not a kindly bookie.)

But if violating the probability axioms could ensure defeat, could it not also ensure victory? Yes, as it turns out, and such an eventuality is called a `Czech book' by Hájek: "Iff you violate probability theory, there exists a specific bad thing (a Dutch Book against you). Iff you violate probability theory, there exists a specific good thing (a Czech Book for you)" (p. 797). Hájek explores ways of preserving the Dutch book - with success, I think - along with a few other arguments in the article.
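The Czech book is the mirror image of the same arithmetic. A sketch (same credences as in the SEP example, with the agent now *selling* the wagers at her stated prices rather than buying them):

```python
# db(S) = db(~S) = 0.51, as before. The agent sells both unit wagers
# at her stated prices, collecting the premiums up front.
db_S, db_not_S = 0.51, 0.51
received = db_S + db_not_S               # agent collects $1.02
nets = []
for S_true in (True, False):
    payout = 1.0                         # she pays out on exactly one wager
    nets.append(received - payout)
    print(f"S={S_true}: net = {received - payout:+.2f}")
# The same violation that exposed her to a Dutch book now guarantees
# a $0.02 gain, because the sides of the bets are reversed.
```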

I'll leave the interested reader there.^{4}

1. See e.g. (Howson and Urbach, p.75).

2. This section is a quick read, as is the supplement, to give you some idea of what these arguments look like.

3. This standard is very lax. To use a shopworn illustration, a person who believes that the moon is made of cheese with 90% confidence is `rational' in this sense so long as he believes the moon is NOT made of cheese with 10% confidence. Probabilistic accounts of `reasonableness' will be discussed elsewhere.

4. In case the link breaks, Hájek's article is *Arguments for - or against - probabilism?* and is readily accessible in several locations via a Google search.
