## Friday, July 15, 2011

### A Primer on Bayesian Philosophy: 4 - Interpretation and Subjectivity (part 1)

Within the philosophy of probability, the most contested issues are interpretation, the means of constraining probabilities so that they may be deemed `reasonable', and equivocation norms. By including the conditionalization norm, I have begged the question on some of these issues by presenting the subjective Bayesian point of view. Objective Bayesians accept equivocation, which is known sometimes to contradict conditionalization.1 I have also treated probabilities as credences, not frequencies. Careful readers may also have noticed my avoiding the word `chance'.

When discussing the subjective/objective distinction, it is important to be clear about what the debate actually revolves around. As far as I am aware, nobody claims that probabilities are actual numbers floating around in the minds of Bayesians. Nor do subjective Bayesians usually claim that probabilities are purely subjective in the sense that two agents are equally reasonable so long as both are coherent. That is, subjective Bayesians are empirical in that they accept calibration with respect to evidence. (Here I borrow Jon Williamson's terminology.) With possible exceptions, e.g. radioactive decay and QM, those involved also agree that probabilities are not `objective' in the sense of being properties of systems external to minds. As Yudkowsky puts it: "Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind."

Subjective Bayesianism has the advantage of being `big tent'. If one has good reason to think that QM is governed by objective probabilistic rules - propensities - one may calibrate using Lewis's Principal Principle, which states that one's subjective probability should match one's expectation of objective probability, i.e. `chance'. The primary charge against subjective Bayesianism is that the tent is too big. Without a frequency rule, propensity theory, and/or equivocation, those who think unicorns exist with 99% certainty are both rational and reasonable. In subjective Bayesianism, prior probabilities rely wholly on a `background knowledge' term, usually left implicit in the calculations, whose relationship to the probabilities in play is difficult to determine precisely.
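The Principal Principle can be put in one line of arithmetic. Here is a minimal sketch with made-up numbers: an agent divides her credence between two hypotheses about a coin's objective chance of heads, and her credence in heads is the expectation of chance under that distribution.

```python
# Toy illustration of Lewis's Principal Principle: credence in an outcome
# equals the agent's expectation of its objective chance.
# The hypotheses and numbers are hypothetical, chosen for illustration.

# Credences over possible chance hypotheses for a biased coin:
# maps (chance of heads) -> (credence that this is the true chance)
chance_hypotheses = {0.3: 0.5, 0.7: 0.5}

# cr(heads) = sum over hypotheses of cr(hypothesis) * ch(heads | hypothesis)
credence_heads = sum(cred * chance for chance, cred in chance_hypotheses.items())

print(credence_heads)  # 0.5
```

Note that the calibrated credence (0.5) need not equal any of the candidate chances; it is an expectation over them.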

As everyone has probably noticed by now, my opinions fall within the most accepted (subjective) account of probability. Like most, I accept calibration methods and the utility of a rational/reasonable distinction. I also think that it is sometimes appropriate to equivocate, but I do not think this translates into a norm of reasonableness. If a coherent probability is also constrained with respect to the evidence by some accepted rule, I think the relevant agent qualifies as `reasonable'. Admittedly, this almost always admits a very broad range of reasonableness, which understandably makes objective Bayesians uncomfortable. But I think that when philosophizing, one must be prepared to accept the absurdly unconventional and counter-intuitive. Though it may often deprive me of the pleasure of condemning all of those whom I consider unreasonable as such, I cannot pretend to uncontroversial formal machinery which corroborates my judgments. The machinery imposes a harsh discipline on the sort of broad claims which we are so wont to make, but perhaps this is exactly what it should do.

The first great battle for subjective Bayesianism is against frequentism. In my opinion, the great philosophical benefit of Kolmogorov's axioms is that we can capture frequencies when necessary while retaining the ability to speak of probabilities which are not frequencies. As Hájek notes in the SEP article, it is accepted that frequencies and probabilities are related, but identifying the two is dubious. A reference class is presupposed - what constitutes a trial? - and the ordering of occurrences in a sequence can change the frequency. In response to the latter and other shortcomings, limiting relative frequencies are introduced. And these require counterfactual claims. So frequency-as-objective-probability, in addition to the inherent difficulties, fails to be `objective' in normal senses of the term.
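The ordering problem is easy to exhibit concretely. A minimal sketch: take the same countably infinite collection of outcomes (infinitely many heads and infinitely many tails) and arrange it two ways; the partial relative frequencies converge to different limits.

```python
# Two orderings of the same outcomes (infinitely many H's and T's)
# yield different limiting relative frequencies of H.
from itertools import cycle, islice

def partial_freq(seq, n):
    """Relative frequency of 'H' among the first n outcomes of seq."""
    outcomes = list(islice(seq, n))
    return outcomes.count('H') / n

alternating = cycle('HT')   # H,T,H,T,...   -> limiting frequency 1/2
clustered = cycle('HTT')    # H,T,T,H,T,T,... -> limiting frequency 1/3

print(partial_freq(alternating, 30000))  # 0.5
print(partial_freq(clustered, 30000))    # ~0.3333
```

Since both sequences contain exactly the same outcomes as a multiset, any identification of probability with limiting frequency must say which ordering is the `right' one, and nothing in the outcomes themselves decides that.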

Frequency judgments also rely on multiple cases, but probabilities attach to single events and propositions. Sometimes, there is no suitable reference class. The probability that the Democrats win the next presidential race is, in many important respects, a hypothesis about a unique event. The probability that the United States will experience a Jacobin revolution is also a claim about unique events. And suppose that we ignore all frequency data concerning the outcome of a coin toss? Is it really the case that a coin will land on heads with probability 50%? For such judgments, the idea of propensity, the inherent tendency of one circumstance to produce another, is invoked. The usual probability of heads, 50%, depends on the difficulty of predicting the outcome with respect to deterministic or largely deterministic factors, such as the force with which your thumb hits the coin and at what location, the weight of the coin and its distribution, air currents, and so on. While it is the variation in our knowledge of these circumstances that makes the propensity of outcome heads approximately 50%, the case of radioactive decay is more difficult, as there is no known deterministic process.

Propensity accounts have difficulties similar to those of frequency accounts. They are perhaps useful as another means of constraint in certain circumstances, but as an interpretation of probability they founder. As also mentioned in the SEP article, there may be conflicts with Bayes' theorem. If p(A|B) is the propensity of `A given B', then by Bayes' theorem the propensity of `A given B' is equal to p(B|A)p(A)/p(B). Now, the `tendencies' of systems are captured, or can be captured, by causal language. But causation is asymmetric, whereas Bayes' theorem relates the two conditional probabilities symmetrically: each determines the other. So the `propensity of decay products during time interval I given initial material' is a function of the probability of decay products, the probability of the initial material, and the propensity of initial material given decay products during time interval I. The time-dependence of decay processes is ignored by Bayes' theorem.
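The inversion can be made concrete with a small calculation. A sketch with made-up numbers for the decay example: from the forward conditional (decay products given material) and the priors, Bayes' theorem mechanically yields the inverse conditional, even though no causal tendency runs from decay products back to the material.

```python
# Numeric illustration of the inversion problem for propensities.
# All probabilities here are hypothetical, chosen for illustration.

p_M = 0.8             # prior probability that the initial material is present
p_D_given_M = 0.1     # forward "propensity": decay products given material
p_D_given_notM = 0.0  # no material, no decay products

# Law of total probability, then Bayes' theorem:
p_D = p_D_given_M * p_M + p_D_given_notM * (1 - p_M)
p_M_given_D = p_D_given_M * p_M / p_D

print(p_M_given_D)  # 1.0
```

The inverse conditional p(M|D) = 1 is a perfectly well-defined probability, but calling it a `propensity of material given decay products' reverses the causal arrow, which is the author's point.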

So broadly speaking, probabilities are subjective, and calibration norms are local. But this does not end the discussion. Objective Bayesians are not frequentists; they can and do accept wager interpretations of probability. The next battle is over equivocation, and this will be the topic of my next post.

1. See e.g. Teddy Seidenfeld's Why I am Not an Objective Bayesian or Jon Williamson's In Defence of Objective Bayesianism (2010). (The latter is available at Oxford Scholarship Online for those with a subscription, though there are some formatting errors in the e-version.) Williamson contributes to It's Only a Theory. Some of his articles are available on his faculty page.