I think that lying should be possible from the beginning, but, since you are a detective, you have the ability to gauge someone's reliability, which is displayed as a percentage (like in your drawings). Also, while reading I thought it might be possible to combine pieces of evidence to create new evidence, e.g.: Alice's shoes are wet && Bob's weather records show that there hasn't been rain in weeks => Alice has stepped into the local lake for something today.
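A minimal sketch, entirely my own and purely illustrative (the rule table, evidence names, and derived conclusion are invented), of how such an evidence-combination mechanic might work:

```python
# Illustrative sketch of an evidence-combination mechanic. The rule table,
# evidence names, and derived conclusion are all invented for this example.
COMBINATION_RULES = {
    frozenset({"alice_shoes_wet", "no_rain_for_weeks"}):
        "alice_waded_into_the_lake_today",
}

def combine(evidence_a, evidence_b, known_evidence):
    """Try to derive new evidence from two known pieces; return it if found."""
    derived = COMBINATION_RULES.get(frozenset({evidence_a, evidence_b}))
    if derived and derived not in known_evidence:
        known_evidence.add(derived)
        return derived
    return None

inventory = {"alice_shoes_wet", "no_rain_for_weeks"}
print(combine("alice_shoes_wet", "no_rain_for_weeks", inventory))
# -> alice_waded_into_the_lake_today
```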
Don't worry, that's not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories so that you take into account uncertainty about which decision theory to use. (So one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven't worked through it) meta updateless decision theory.)
UDT, as I understand it (and note that I'm not at all fluent in UDT or TDT), always one-boxes; whereas if you take decision-theoretic uncertainty into account, you should sometimes one-box and sometimes two-box, depending on the relative value of the contents of the two boxes. Also, UDT gets what most decision theorists consider the wrong answer in the smoking lesion case, whereas the account I defend, meta causal decision theory, doesn't (or, at least, needn't, depending on one's credences in the first-order decision theories).
To illustrate, consider the case:
High-Stakes Predictor II (HSP-II) Box C is opaque; Box D, transparent. If the Predictor predicts that you choose Box C only, then he puts one wish into Box C, and also a stick of gum. With that wish, you save the lives of 1 million terminally ill children. If he predicts that you choose both Box C and Box D, then he puts nothing into Box C. Box D — transparent to you — contains an identical wish, also with the power to save the lives of 1 million children, so if one had both wishes one would save 2 million children in total. However, Box D contains no gum. One has two options only: choose Box C only, or both Box C and Box D.
In this case, intuitively, should you one-box, or two-box? My view is clear: if someone one-boxes in the above case, they have made the wrong decision. And it seems to me that this is best explained by appeal to decision-theoretic uncertainty.
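To make the stakes-sensitivity concrete, here is a rough sketch (my own illustration, with invented credences and a tiny assumed value for the gum) of what taking an expectation over decision theories looks like in HSP-II, measuring value in lives saved:

```python
# Sketch of expected choiceworthiness across decision theories for HSP-II.
# The credences, the value assigned to the gum, and the function names are
# illustrative assumptions, not figures from the original comment.

WISH = 1_000_000   # lives saved by one wish
GUM = 1e-6         # assumed (tiny) value of the stick of gum, in the same units

# Per-theory difference in value: V(two-box) - V(one-box).
# EDT, with a near-certain predictor: one-boxing gains only the gum.
# CDT: Box C's contents are causally fixed, so two-boxing gains the Box D wish.
diff = {"EDT": -GUM, "CDT": +WISH}

def meta_difference(credences):
    """Expected gain of two-boxing over one-boxing, weighted by credence in each theory."""
    return sum(p * diff[theory] for theory, p in credences.items())

print(meta_difference({"EDT": 0.99, "CDT": 0.01}))
# ~ +10,000 lives in expectation: even a small credence in CDT favours two-boxing,
# because the most EDT says you lose by two-boxing is a stick of gum.
```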
Other questions: Bostrom's parliamentary model is different. Between EDT and CDT, the intertheoretic comparisons of value are easy, so there's no need to use the parliamentary analogy - one can just straightforwardly take an expectation over decision theories.
Pascal's Mugging (aka the "fanaticism" worry): this is a general issue for attempts to take normative uncertainty into account in one's decision-making, and not something I discuss in my paper. But if you're concerned about Pascal's mugging and, say, think that a bounded decision theory is the best way to respond to the problem, then at the meta level you should also have a bounded decision theory (and at the meta-meta level, and so on).
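One crude way to picture carrying a bounded response up a level (a sketch of my own, not anything from the paper): bound the value a first-order theory can contribute to the meta-level expectation, so that cranking a claimed payoff ever higher stops buying extra expected choiceworthiness:

```python
import math

# Purely illustrative: a saturating "bounded value" transform, applied to the
# values a first-order theory assigns and again at the meta level. The names,
# numbers, and the tanh bound are assumptions made up for this sketch.
def bounded(x, bound=1000.0):
    return bound * math.tanh(x / bound)   # magnitude never exceeds `bound`

def meta_expected_choiceworthiness(credences, values, bound=1000.0):
    inner = sum(p * bounded(values[t], bound) for t, p in credences.items())
    return bounded(inner, bound)

# Raising a fanatical theory's claimed payoff from 10^6 to 10^30 now buys
# essentially no extra expected choiceworthiness at the meta level:
for claimed in (1e6, 1e30):
    print(claimed, meta_expected_choiceworthiness(
        {"fanatical_theory": 0.001, "sensible_theory": 0.999},
        {"fanatical_theory": claimed, "sensible_theory": 1.0}))
```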
Can't we just assume that whatever we do was predicted correctly? The problem does assume an 'almost certain' predictor. Shouldn't that make two-boxing the worst move?
Further suggestion: players should learn about the distinction between accuracy and calibration. There should occasionally be scenarios where the real solution is not the one that the information available to you singles out as most probable. Players should learn that banking on an unlikely solution is never a good bet, but that highly probable solutions are still only probable, rather than certain.
Players' performance would be tracked not just in terms of their ability to get the right answers, but also in terms of their ability to be right about how often they're right.
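One possible way to implement that kind of tracking (my own sketch, not something proposed in the comment): bucket the player's stated confidences and compare each bucket's average stated confidence with how often those cases were actually solved correctly:

```python
from collections import defaultdict

# Illustrative calibration tracker: the function name and the bucketing scheme
# are invented for this sketch.
def calibration_report(predictions):
    """predictions: list of (stated_confidence, was_correct) pairs."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[round(confidence, 1)].append(correct)   # buckets 0.0, 0.1, ..., 1.0
    for conf in sorted(buckets):
        outcomes = buckets[conf]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"stated {conf:.0%}: actually right {hit_rate:.0%} of {len(outcomes)} cases")

# A well-calibrated player who says "90% sure" should be right about 9 times in 10.
calibration_report([(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)])
```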
I disagree that there should be situations where the less likely outcome is correct merely because it is less likely (i.e. as a pre-programmed result). The likelihood of an event occurring in the game should be a result of your acquired evidence, and 100% certainty should exist only when there is enough concrete evidence supporting the outcome. Within the game it should be possible for the true outcome to receive a high probability. Your idea is essential, however, in situations where the probabilities of the outcomes are very close. For example, in a situation with five outcomes whose probabilities are all in the 15-30% range, the answer wouldn't, and shouldn't, be obvious.