In Defense of Objective Bayesianism: MaxEnt Puzzle.
In Defense of Objective Bayesianism by Jon Williamson was mentioned recently in a post by lukeprog as the sort of book people on Less Wrong should be reading. I have been reading it, and found some of it quite bizarre. This point in particular seems obviously false. If it’s just me, I’ll be glad to be enlightened as to what was meant. If collectively we don’t understand it, that would be pretty strong evidence that we should read more academic Bayesian work.
Williamson advocates use of the Maximum Entropy Principle. In short, you should take account of the limits placed on your probability by the empirical evidence, and then choose a probability distribution closest to uniform that satisfies those constraints.
So, if asked to assign a probability to an arbitrary proposition A, you’d say p = 0.5. But if you were given evidence in the form of a constraint on p, say that p ≥ 0.8, you’d set p = 0.8, as that is the new entropy-maximising value. Constraints are restricted to affine constraints. I found this somewhat counter-intuitive already, but I do follow what he means.
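To make this concrete, here is a minimal sketch (a simple grid search, not Williamson's own machinery) of choosing the Bernoulli distribution of maximum entropy subject to the constraint p ≥ 0.8:

```python
import math

def entropy(p):
    """Shannon entropy of a Bernoulli(p) distribution, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Candidate values of p satisfying the constraint p >= 0.8
candidates = [k / 1000 for k in range(800, 1001)]
best = max(candidates, key=entropy)

# Bernoulli entropy is maximised at p = 0.5 and falls off monotonically
# on either side, so under the constraint p >= 0.8 the maximum sits at
# the boundary: p = 0.8, just as Williamson prescribes.
```

Unconstrained, the same search over [0, 1] would of course return p = 0.5.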
But now for the confusing bit. I quote directly:
“Suppose A is ‘Peterson is a Swede’, B is ‘Peterson is a Norwegian’, C is ‘Peterson is a Scandinavian’, and ε is ‘80% of all Scandinavians are Swedes’. Initially, the agent sets P(A) = 0.2, P(B) = 0.8, P(C) = 1, P(ε) = 0.2, P(A & ε) = P(B & ε) = 0.1. All these degrees of belief satisfy the norms of subjectivism. Updating by maxent on learning ε, the agent believes Peterson is a Swede to degree 0.8, which seems quite right. On the other hand, updating by conditionalizing on ε leads to a degree of belief of 0.5 that Peterson is a Swede, which is quite wrong. Thus, we see that maxent is to be preferred to conditionalization in this kind of example because the conditionalization update does not satisfy the new constraints X’, while the maxent update does.”
p80, 2010 edition. Note that this example is actually from Bacchus et al (1990), but Williamson quotes approvingly.
His calculation for the Bayesian update is correct; you do get 0.5. What’s more, this seems to be intuitively the right answer; the update has caused you to ‘zoom in’ on the probability mass assigned to ε, while maintaining relative proportions inside it.
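The conditionalization arithmetic is just the ratio of the stated priors:

```python
# Priors as given in the quoted passage
P_eps = 0.2        # P(ε): 80% of all Scandinavians are Swedes
P_A_and_eps = 0.1  # P(A & ε): Peterson is a Swede, and ε
P_B_and_eps = 0.1  # P(B & ε): Peterson is a Norwegian, and ε

# Conditionalizing on ε: P(A | ε) = P(A & ε) / P(ε)
P_A_given_eps = P_A_and_eps / P_eps  # 0.1 / 0.2 = 0.5
P_B_given_eps = P_B_and_eps / P_eps  # 0.1 / 0.2 = 0.5
```

The relative proportions inside ε (here 1:1) are preserved, which is exactly the 'zooming in' described above.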
As far as I can see, you get 0.8 only if we assume that Peterson is a randomly chosen Scandinavian. But if that were true, the given prior is bizarre: for a randomly chosen individual, the prior should have been something like P(A & ε) = 0.16, P(B & ε) = 0.04. The only way I can make sense of the prior is if constraints simply “don’t apply” until they have p = 1.
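A sketch of the “randomly chosen Scandinavian” reading: if, given ε, a random Scandinavian is a Swede with probability 0.8, then the joint prior factors as P(A & ε) = P(A | ε) · P(ε), and conditionalization alone already delivers 0.8:

```python
P_eps = 0.2          # P(ε), as in the quoted example
P_A_given_eps = 0.8  # assumption: Peterson is a random Scandinavian, so
                     # given ε he is a Swede with probability 0.8

# The joint prior this reading implies
P_A_and_eps = P_A_given_eps * P_eps        # 0.8 * 0.2 = 0.16
P_B_and_eps = (1 - P_A_given_eps) * P_eps  # 0.2 * 0.2 = 0.04

# With this prior, ordinary conditionalization on ε recovers 0.8,
# so no appeal to maxent would be needed
posterior_A = P_A_and_eps / P_eps
```

So the disagreement between maxent and conditionalization in the quoted example seems to trace back to the prior, not to the update rule.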
Can anyone explain the reasoning behind a posterior probability of 0.8?
Link: Facing the Mind-Killer
I've long opposed discussing politics on Less Wrong. Elsewhere, however, I have been known to gaze into the abyss; and so it came to be that I wrote a handful of posts on the Oxford Libertarian Society Blog. I had the deliberate intention of bringing a little rationality into politics - and so of course ended up writing in something like Eliezer's style.
I wanted to establish some theory first, so the initial posts were about The Conservation of Expected Evidence and Reductionism, and then one particular Death-Spiral.
As you'll probably notice, one of my defences against the little-death has been to err on the side of attacking Libertarian positions; I provided an account of Traditional Socialist Values so we remember that our enemies aren't inherently evil, and then analysed an abuse of The Law of Comparative Advantage, showing cases where it didn't apply.
I can't promise I'll update at all regularly.
Post inspired by Will Newsome and prompted by Vladimir Nesov.
Oxford (UK) Rationality & AI Risks Discussion Group
Alex Flint and I are doing a series of seminar/discussion events in Oxford, to which anyone from LW would be very welcome. Especially as the theme is Rationality & AI Risks!
They're being held at 5pm on Saturdays in Exeter College, and will run throughout November. We had over 10 people last Saturday discussing Heuristics and Biases, and plan to move on to Bayesianism this week. They'll probably last about an hour, though we may decamp to the pub afterwards to continue the discussion.
If you're in the area, you might also be interested in the other events run by the Oxford Transhumanist Society.
A Player of Games
Earlier today I had an idea for a meta-game a group of people could play. It’d be ideal if you lived in an intentional community, were at a university with a games society, or lived somewhere with regular Less Wrong meetups.
Each time you would find a new game. Each of you would then study the rules for half an hour and strategise, and then you’d play it, once. Afterwards, compare thoughts on strategies and meta-strategies. If you haven’t played Imperialism, try that. If you’ve never tried out Martin Gardner’s games, try them. If you’ve never played Phutball, give it a go.
It should help teach us to understand new situations quickly, look for workable exploits, accurately model other people, and compute Nash equilibria. Obviously, be careful not to end up just spending your life playing games; the aim isn't to become good at playing games, it's to become good at learning to play games - hopefully including the great game of life.
However, it’s important that no-one in the group know the rules beforehand, which makes finding the new games a little harder. On the plus side, it doesn’t matter whether the games are well-balanced: if the world is mad, we should be looking for exploits in real life.
It would be really helpful if people who know of good games gave suggestions: a name, possibly some formal specifications (number of players, average length of a game), and some way of accessing the rules. If you only have the rules in a text file, please rot13 them, and likewise for any discussion of strategy.
Burning Man Meetup: Bayes Camp
In celebration of the virtues of applied rationality, Less Wrong is going to Burning Man! And because rationalists should win, Bayes Camp is going to be the most awesome place there.
A bunch of people from SingInst/Less Wrong will be descending upon the desert, bedecked as the members of the Bayesian Conspiracy. Kevin, Jasen, JustinShovelain, Peter de Blanc, Michael Vassar and Nick Tarleton, among others, will be there. If you'd like to stop by, say so in the comments!
We'll be at 6:50, F, and should be there from Monday 30th.
Please note: Burning Man is serious stuff, and if you don’t think you’re up to the desert, you shouldn’t come. Either way, read the survival guide.
EDIT: updated location