Solomon's Problem and variants thereof are often cited as criticisms of Evidential decision theory.
For background, here's Solomon's Problem: King Solomon wants to sleep with another man's wife. However, he knows that uncharismatic leaders frequently sleep with other men's wives, and charismatic leaders almost never do. Furthermore, uncharismatic leaders are frequently overthrown, and charismatic leaders rarely are. On the other hand, sleeping with other men's wives does not cause leaders to be overthrown. Instead, high charisma independently decreases both the chance that a leader will sleep with another man's wife and the chance that the leader will be overthrown. Not getting overthrown is more important to King Solomon than getting the chance to sleep with the other man's wife.
Causal decision theory holds that King Solomon can go ahead and sleep with the other man's wife because it will not directly cause him to be overthrown. Timeless decision theory holds that he can sleep with the woman because it will not cause his overthrow in any timeless sense either. Conventional wisdom holds that Evidential decision theory would have him refrain from sleeping with her, because updating on the fact that he slept with her would suggest a higher probability that he will get overthrown.
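To see why the naive EDT reading looks compelling, here is a toy joint distribution for the problem. The numbers are my own and purely illustrative; the only structural assumptions are the ones stated above: charisma independently influences both behaviors, and the act itself is causally inert with respect to overthrow.

```python
# Toy model of Solomon's Problem (illustrative numbers, my own assumptions).
# Charisma influences sleeping and overthrow independently; sleeping with
# other men's wives does not cause overthrow.
p_char = 0.7                               # P(charismatic)
p_sleep = {True: 0.1, False: 0.8}          # P(sleeps | charisma?)
p_over  = {True: 0.1, False: 0.9}          # P(overthrown | charisma?)

def p_overthrow_given(sleeps):
    # P(overthrown | action) via Bayes, marginalizing over charisma.
    joint = {c: (p_char if c else 1 - p_char) *
                (p_sleep[c] if sleeps else 1 - p_sleep[c])
             for c in (True, False)}
    z = sum(joint.values())
    return sum(joint[c] / z * p_over[c] for c in (True, False))

# The confounder makes the action look dangerous even though it is causally inert:
print(round(p_overthrow_given(True), 3), round(p_overthrow_given(False), 3))
```

With these numbers, conditioning on sleeping raises the apparent overthrow risk from about 0.17 to about 0.72, even though no causal path runs from the act to the overthrow. That gap is exactly what the conventional reading of EDT is reacting to.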
The problem with that interpretation is that it assumes King Solomon only updates his probability distributions based on information about himself that is accessible to others. He cannot change whether he would sleep with another man's wife, absent any disincentive, by refraining from doing so in response to a disincentive. The fact that he faces the dilemma at all already indicates that he would. Updating on this information, he knows that he is probably uncharismatic, and thus likely to be overthrown. Updating further on his decision, after taking into account the factors guiding that decision, will not change the correct probability distribution.
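This screening-off argument can be made concrete with a toy calculation (again with illustrative numbers of my own choosing). Here charisma drives both the "tickle" (Solomon's preference) and the overthrow risk; the action itself is whatever Solomon chooses given his known preference, so it carries no further evidence about charisma.

```python
# Toy model of the screening-off argument (illustrative numbers, my own
# assumptions): charisma -> preference ("tickle") and charisma -> overthrow.
# Once Solomon conditions on his known preference, the action adds nothing.
p_char = 0.7                                    # P(charismatic)
p_pref_given_char = {True: 0.1, False: 0.8}     # P(wants to | charisma?)
p_over_given_char = {True: 0.1, False: 0.9}     # P(overthrown | charisma?)

def p_overthrow_given_pref(wants):
    # Bayes: infer charisma from the preference, then average overthrow risk.
    joint = {c: (p_char if c else 1 - p_char) *
                (p_pref_given_char[c] if wants else 1 - p_pref_given_char[c])
             for c in (True, False)}
    z = sum(joint.values())
    return sum(joint[c] / z * p_over_given_char[c] for c in (True, False))

# Solomon knows he wants to. His overthrow risk is fixed by that knowledge;
# acting or refraining cannot shift it, because overthrow depends only on
# charisma and charisma is already screened off by the known preference.
print(round(p_overthrow_given_pref(True), 3))  # → 0.719
```

The point of the sketch: all of the bad news arrives when Solomon notices his own preference, before he decides anything. Conditioning additionally on the act changes nothing, so EDT, applied with full self-knowledge, does not tell him to refrain.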
This more complete view of Evidential decision theory is isomorphic to Timeless decision theory (edit: shown to be false in comments). I'm slightly perplexed as to why I have not seen it elsewhere. Is it flawed? Has it been mentioned elsewhere and I haven't noticed? If so, why isn't it more widely known?
I would like to say that I agree with the arguments presented in this post, even though the OP eventually retracted them. I think the arguments for why EDT leads to the wrong decision are themselves wrong.
As mentioned by others, EY referred to this argument as the 'tickle defense' in section 9.1 of his TDT paper. I am not defending the advocates whom EY attacked, since (assuming EY hasn't misrepresented them) they have made some mistakes of their own; in particular, they argue for two-boxing.
I will start by talking about the ability to introspect. Imagine God promised Solomon that Solomon won't be overthrown. Then the decision of whether or not to sleep with other men's wives is easy, and Solomon can just act on his preferences. Yet if Solomon can't introspect, then in the original situation he doesn't know whether he prefers sleeping with others' wives or not. So Solomon's inability to introspect means that there is information he can rationally react to in some situations but not in others. While problems like that can occur in real people, I don't expect a theory of rational behavior to have to deal with them. So I assume an agent knows what its preferences are, or else fails to act on them consistently.
In fact, the meta-tickle defense doesn't really deal with lack of introspection either. It assumes an agent can think about an issue and 'decide' on it, only to not act on that decision but rather to use that 'decision' as information. An agent that really couldn't introspect wouldn't be able to do that.
The tickle defense has been used to defend two-boxing. While this argument isn't mentioned in the paper, it is described in one of the comments here. This argument has been rebutted by the original poster AlexMennen. I would like to add something to that: for an agent to find out for sure whether it is a one-boxer or a two-boxer, the agent must make a complete simulation of itself in Newcomb's problem. If they try to find this out as part of their strategy for Newcomb's problem, they will get into an infinite loop.
benelliott raised a final argument here. He postulated that charisma is not related to a preference for screwing wives, but rather to whether a king's reasoning would lead him to actually do it. Here I have to question whether the hypothetical situation makes sense. For real people, an intrinsic personality trait might change their bottom-line conclusion, but this behavior is irrational. An ideal rational agent cannot have a trait of the form charisma is postulated to have. benelliott also left open the possibility that the populace has Omega-like abilities, but then the situation is really just another form of Newcomb's problem, and the rational choice is to not screw wives.
Overall I think that EDT actually does lead to rational behavior in these sorts of situations. In fact I think it is better than TDT, because TDT relies on computations with one right answer not only having probabilities and correlations, but also on there being causal relations between them. I am unconvinced of this and unsatisfied with the various attempts to deal with it.