# bcoburn comments on Stupid Questions Open Thread Round 3 - Less Wrong Discussion

8 07 July 2012 05:16PM


Comment author: 08 July 2012 06:27:15AM 1 point

More concisely than the original/gwern, the algorithm used by the mugger is roughly:

1. Find your assessed probability of the mugger being able to deliver whatever reward, being careful to specify the size of the reward in the conditions for the probability.

2. Offer an exchange such that U(payment to mugger) < U(reward) * P(reward).
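A toy sketch of that second step, under the simplifying assumptions that utility is linear in the reward and that the prior penalizes a number by the length of a short expression naming it (both assumptions are mine, for illustration only):

```python
import math

def description_length_bits(expr: str) -> int:
    # Crude stand-in for Kolmogorov complexity: 8 bits per character
    # of a short expression that names the number.
    return 8 * len(expr)

def accepts_offer(reward_expr: str, payment_utility: float) -> bool:
    """True if a naive expected-utility maximizer with the prior
    P(reward) = 2**-description_length should pay the mugger.
    Compared in log space because the reward overflows a float."""
    reward = eval(reward_expr)  # utility assumed linear in the reward
    log2_expected = math.log2(reward) - description_length_bits(reward_expr)
    return math.log2(payment_utility) < log2_expected

# "10**1000" is only 8 characters (64 bits of description), so its prior
# is 2**-64, while log2(10**1000) is about 3322: the expected utility of
# the promised reward swamps any plausible payment.
print(accepts_offer("10**1000", payment_utility=100.0))  # True
```

Note the asymmetry: the description-length penalty grows linearly in the length of the expression, while the reward it names grows exponentially (or faster), so the mugger can always win this race.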

This is an issue for AI design because if you use a prior based on Kolmogorov complexity, then it's relatively straightforward to find such a reward: even astronomically large numbers can have very low complexity, and therefore relatively high prior probabilities.
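To make "large number, short description" concrete (a toy illustration using expression length, not a real complexity measure):

```python
# A 101-digit number named by a 7-character expression: under a
# description-length prior, "10**100" is penalized for its 7
# characters, not for its 101 digits.
expr = "10**100"
value = eval(expr)
print(len(expr), len(str(value)))  # 7 101
```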

Comment author: 08 July 2012 07:17:14AM 1 point

When you have a bunch of other data, you should not be interested in the Kolmogorov complexity of the number on its own; you are interested in the Kolmogorov complexity of the other data concatenated with that number.

E.g. you should not assign higher probability to Bill Gates having made precisely $100,000,000,000 than to some random-looking value: given the other sensory input you got (from which you derived your world model), there are random-looking values for which the total sensory input has even lower Kolmogorov complexity, but you wouldn't be able to find those, because Kolmogorov complexity is uncomputable. You end up mis-estimating Kolmogorov complexity whenever it isn't handed to you on a platter pre-made.
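The true joint complexity is uncomputable, but a crude computable proxy (zlib compressed size; my choice, purely illustrative) shows the difference between measuring the number alone and measuring it embedded in the rest of the data:

```python
import zlib

def proxy_bits(data: bytes) -> int:
    # Compressed size as a crude, computable stand-in for description
    # length; true Kolmogorov complexity is uncomputable.
    return 8 * len(zlib.compress(data, 9))

round_number = b"100000000000"    # the "round" fortune
random_looking = b"738194620573"  # same length, no obvious pattern

# In isolation the round number looks much simpler...
print(proxy_bits(round_number) < proxy_bits(random_looking))  # True

# ...but the quantity that actually matters is the complexity of the
# whole sensory history with the number embedded in it (toy context):
context = b"forbes list, bank records, interviews ... " * 100
print(proxy_bits(context + round_number))
```

Which concatenation ends up smaller depends on the context data, which is the parent's point: judged alone, the round number wins; judged jointly with everything else you've seen, it may not.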

Actually, what you should use is algorithmic (Solomonoff) probability, as AIXI does: run it on the history of sensory input, and take a weighted sum over the world models that present you with the marketing spiel of the mugger. The shortest such models simply have the mugger making it up; then there are models where the mugger will torture beings if you pay and not torture if you don't; and so on. It's unclear what comes out of this and how it pans out, because, again, it is uncomputable.
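A toy version of that weighted sum, over a tiny hand-made hypothesis space with made-up description lengths (real Solomonoff induction enumerates all programs consistent with the sensory history and is uncomputable):

```python
# Each model is (description_length_bits, P(torture if you refuse))
# and is assumed already consistent with the observed history.
models = [
    (3, 0.0),   # short model: mugger made it up, no torture either way
    (10, 1.0),  # longer model: mugger really tortures unless paid
    (12, 0.0),  # even longer: mugger tortures only if you DO pay
]

def mixture_prediction(models):
    # Solomonoff-style prior mass 2**-length for each model.
    weights = [2.0 ** -bits for bits, _ in models]
    total = sum(weights)
    # A weighted sum of every model's prediction -- not a single
    # "selected" model, per the edit note below.
    return sum(w * p for w, (_, p) in zip(weights, models)) / total

print(mixture_prediction(models))
```

With these made-up numbers the short "mugger made it up" model carries almost all the weight, so the mixture's probability of torture stays small; nothing here settles how it pans out in the uncomputable general case.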

In the human approximation, you take what the mugger says as a privileged model, which is, strictly speaking, an invalid update (the probability jumps from effectively zero, for never having thought about it, to nonzero), and invalid updates come with a cost of being prone to losing money. Constructing the model directly from the mugger's description of what the model should be is a hack; at that point anything goes, and you can have another hack, of the strategic kind, that refuses to apply this string->model hack to ultra-extraordinary claims made without evidence.
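The jump described above can be made concrete: before the spiel, the mugger-world is not in the hypothesis space at all, and inserting it afterwards with nonzero mass is not a Bayesian conditioning step (all numbers below are made up for illustration):

```python
def normalize(dist):
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

# Hypothesis space before hearing the mugger: the mugger-world
# isn't in it, so its probability is effectively zero.
prior = {"ordinary_world": 0.999, "misc_weirdness": 0.001}

# Hearing the spiel and then *inserting* the described model is not
# conditioning -- no likelihood ratio licenses this mass assignment.
hacked = dict(prior)
hacked["mugger_world"] = 1e-6
hacked = normalize(hacked)

print(prior.get("mugger_world", 0.0))  # 0.0 before the spiel
print(hacked["mugger_world"] > 0.0)    # nonzero after, without evidence
```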

edit: i meant, weighted sum, not 'select'.