As I understand what is meant by satisficing, this misses the mark. A satisficer will search for an action until it finds one that is good enough, then it will do that. A maximiser will search for the best action and then do that. A bounded maximiser will search for the "best" action (best according to its bounded utility function) and then do that.
So what the satisficer picks depends on the order in which the possible actions are presented to it, in a way that the choice of either kind of maximiser does not. Now, if easier options are presented to it first, then I guess your conclusion still follows, so long as we grant the premise that self-transforming will be easy.
But I don't think it's right to identify bounded maximisers with satisficers.
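The order-dependence is the crux, so here is a minimal sketch of it. The actions, utilities, and threshold below are purely illustrative assumptions, not drawn from any particular formalism:

```python
def satisfice(actions, utility, threshold):
    """Return the first action whose utility meets the threshold."""
    for a in actions:
        if utility(a) >= threshold:
            return a
    return None  # no action is good enough


def maximise(actions, utility):
    """Return the action with the highest utility, regardless of order."""
    return max(actions, key=utility)


utility = {"walk": 3, "bike": 7, "drive": 9}.get

# The maximiser's choice is order-independent:
assert maximise(["walk", "bike", "drive"], utility) == "drive"
assert maximise(["drive", "bike", "walk"], utility) == "drive"

# The satisficer's choice changes with presentation order:
assert satisfice(["walk", "bike", "drive"], utility, threshold=5) == "bike"
assert satisfice(["drive", "bike", "walk"], utility, threshold=5) == "drive"
```

A bounded maximiser would behave like `maximise` with a cruder `utility` function; it still scans every option, which is exactly what the satisficer does not do.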
There are a couple of things I find odd about this. First, it seems to be taken for granted that one-boxing is obviously better than two-boxing, but I'm not sure that's right. J.M. Joyce has an argument (in his The Foundations of Causal Decision Theory) that is supposed to convince you that two-boxing is the right solution. Importantly, he accepts that you might still wish you weren't a CDT agent (so that Omega predicted you would one-box). But, he says, in either case, once the boxes are in front of you, whether you are a CDT or an EDT agent, you should two-box! The dominance reasoning works in either case, once the prediction has been made and the boxes are in front of you.
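To make the dominance reasoning concrete, here is a sketch using the standard Newcomb payoffs (in thousands of dollars): the opaque box holds 1000 if Omega predicted one-boxing and 0 otherwise, and the transparent box always holds 1. The exact figures are the textbook convention, not anything specific to Joyce's presentation:

```python
def payoff(predicted_one_box, take_one_box):
    """Payout in $1000s, with the prediction already fixed."""
    opaque = 1000 if predicted_one_box else 0
    # Two-boxing adds the transparent box's guaranteed 1.
    return opaque if take_one_box else opaque + 1


# Joyce's point: whatever the (already-made) prediction is,
# two-boxing pays strictly more than one-boxing.
for predicted in (True, False):
    assert payoff(predicted, take_one_box=False) > payoff(predicted, take_one_box=True)
```

Of course, this is exactly why one-boxers object that conditioning on a fixed prediction begs the question; the code only shows that the dominance step itself is arithmetically sound.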
But this leads me on to my second point. I'm not sure how serious a flaw it is for a decision theory to fail Newcomb's problem, given that the problem relies on the intervention of an alien that can accurately predict what you will do. Let's leave aside the general problem of predicting real agents' actions with that degree of accuracy. If you know that the prediction of your choice affects the success of your choices, I think that reflexivity, or self-reference, simply makes the prediction meaningless. We're all used to self-reference being tricky, and I think in this case it just undermines the whole set-up. That is, I don't see the force of the objection from Newcomb's problem, because I don't think it's a problem we could ever possibly face.
Here's an example of a related kind of "reflexivity makes prediction meaningless". Let's say Omega bets you $100 that she can predict what you will eat for breakfast. Once you accept this bet, you now try to think of something that you would never otherwise think to eat for breakfast, in order to win the bet. The fact that your actions and the prediction of your actions have been connected in this way by the bet makes your actions unpredictable.
Moving on to the prisoner's dilemma. Again, I don't think that it's the job of decision theory to get "the right" result in PD. Again, the dominance reasoning seems impeccable to me. In fact, I'm tempted to say that I would want any future advanced decision theory to satisfy some form of this dominance principle: it's crazy to ever choose an act that is guaranteed to be worse. All you need to do to "fix" PD is to have the agent attach enough weight to the welfare of others. That's not a modification of the decision theory, that's a modification of the utility function.
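That last point can be shown with the same dominance check run on two different utility functions. The payoff matrix below is the textbook PD; the altruism weight `w` is my illustrative way of "attaching weight to the welfare of others":

```python
# Textbook prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}


def utility(my_move, their_move, w):
    """Own payoff plus weight w times the other player's payoff."""
    mine, theirs = PAYOFF[(my_move, their_move)]
    return mine + w * theirs


# With a purely selfish utility (w = 0), defection dominates:
for theirs in ("C", "D"):
    assert utility("D", theirs, w=0) > utility("C", theirs, w=0)

# With full weight on the other's welfare (w = 1), cooperation dominates:
for theirs in ("C", "D"):
    assert utility("C", theirs, w=1) > utility("D", theirs, w=1)
```

The decision rule (pick the dominant act) never changes; only the utility function does, which is exactly the point.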