FYI, IIRC there's a new LW debate feature where you could've tried to hash out your disagreement in a single post of asynchronous back-and-forth replies. But I don't know if the debate feature is actually live for the public; I've only seen one debate post some time ago.
The Blackmail and Bomb cases seem to be examples of failing to comprehend large numbers.
Really, if the predictor's mistake rate is indeed 1 in a trillion trillion, then it's much more probable that the note lies than that you are in the extremely rare circumstance where you pick the left box and the bomb is indeed there.
On the other hand, I'm not sure that FDT really recommends procreating in Procreation. Maybe your FDT-following father just made a mistake? How strongly are your decision procedures actually correlated? I don't think there is much subjunctive dependence in this setting. Did he simulate you at some point? Otherwise, I don't see how "choose not to procreate and thus do not exist" is a coherent outcome.
Also, if you do not procreate and thus do not exist, how can you have a utility function valuing existence? Moreover, even if we accept the premise, aren't you dooming all your descendants to a similarly miserable existence? They definitely do not exist yet, and the fact that they would prefer miserable existence conditional on existing doesn't mean it's a good idea to bring them into existence in the first place.
Really, if the predictor's mistake rate is indeed 1 in a trillion trillion, then it's much more probable that the note lies than that you are in the extremely rare circumstance where you pick the left box and the bomb is indeed there.
Likely true in practice, but this is a hypothetical example and FDT does not rely on that.
On the other hand, I'm not sure that FDT really recommends procreating in Procreation.
That scenario did seem underspecified to me too.
Also, if you do not procreate and thus do not exist, how can you have a utility function valuing existence?
Hypothetically, you have a particular utility function/decision procedure - but some combinations of those might be incompatible with you actually existing.
I think the analysis for "bomb" is missing something.
This is a scenario where the predictor is doing their best not to kill you: if they think you'll pick left, they put the bomb in the right box; if they think you'll pick right, they put the bomb in the left box.
The CDT strategy is to pick whichever box doesn't have a bomb in it. So if the player is a perfect CDTer, the predictor is 100% guaranteed to be correct in their pick. The predictor effectively gets to choose whether the player loses 100 bucks or not. If the predictor is nice, the CDTer gets to walk away without paying anything and with a 0% chance of death.
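To spell that out, here's a toy enumeration (my own sketch of this reading of the scenario):

```python
# Toy model of the reading above: the predictor puts the bomb in the
# box opposite its prediction, and a perfect CDTer then picks
# whichever box is bomb-free (right costs $100, left is free).
def play(prediction):
    bomb = "right" if prediction == "left" else "left"
    cdt_pick = "left" if bomb == "right" else "right"
    cost = 100 if cdt_pick == "right" else 0
    return cdt_pick, cost, cdt_pick == prediction

for prediction in ("left", "right"):
    pick, cost, correct = play(prediction)
    print(f"predicted {prediction}: CDTer picks {pick}, pays ${cost}, "
          f"prediction correct: {correct}")
```

Either way the prediction comes out correct and the CDTer survives; the predictor alone decides whether the $100 gets paid.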
After omnizoid asked whether people want to debate him on Functional Decision Theory (FDT), he and I chatted briefly and agreed to have a (short) debate. We agreed the first post should be by me: a reaction to omnizoid's original post, where he explains why he believes FDT is "crazy". In this post, I'll assume the reader has a basic understanding of FDT. If not, I suggest reading the paper.
Let's dive right into the arguments omnizoid makes against FDT. Here's the first one:
So if I understand this correctly, this problem works as follows: the blackmailer predicts whether you would give in and pay him $1. If he predicts you would pay, he blackmails you; if he predicts you wouldn't, he leaves you alone. His prediction is wrong with probability 1/googol. If you are blackmailed and refuse to pay, your worst secrets are spread, which is as bad as losing $1,000,000.
(To be clear, this is my interpretation of the problem. omnizoid just says there's a 1/googol chance the blackmailer blackmails someone who wouldn't give in to the blackmail, and doesn't specify that in that case, the blackmailer was wrong in his prediction that the victim would pay. Maybe the blackmailer just blackmails everyone, and 1 in a googol people don't give in. If that's the case, FDT does pay.)
If this is the correct interpretation of the problem, FDT is fully correct not to pay the $1. omnizoid believes this causes your worst secrets to be spread, but it's specified in the problem statement that this only happens with probability 1/googol: when the blackmailer wrongly predicts that you will pay the $1. With probability (googol - 1)/googol, you don't get blackmailed and don't have to pay anything. A (googol - 1)/googol probability of losing $1 is much worse than a 1/googol probability of losing $1,000,000. So FDT is correct.
omnizoid can counter here that it is also specified that the blackmailer does blackmail you. But this is a problem about which decision to make, and that decision is first made in the blackmailer's brain (when he predicts what you will decide). If that decision is "don't pay the $1", the blackmailer will almost certainly not blackmail you.
Another way of looking at this is asking: "Which decision theory do you want to run, keeping in mind that you might run into the Blackmail problem?" If you run FDT, you virtually never get blackmailed in the first place.
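To make that concrete, here's a quick back-of-the-envelope calculation (my own sketch, using the numbers from my interpretation above):

```python
# Expected losses in Blackmail, under my interpretation above: the
# blackmailer only blackmails those he predicts will give in, and his
# prediction is wrong with probability 1/googol.
GOOGOL = 10**100
p_wrong = 1 / GOOGOL

# An agent who would pay (e.g. a CDT'er) is predicted to pay, gets
# blackmailed almost certainly, and loses $1.
expected_loss_payer = (1 - p_wrong) * 1

# An FDT'er refuses to pay, so he's only blackmailed after a wrong
# prediction - in which case his secrets get spread, a $1,000,000 loss.
expected_loss_fdt = p_wrong * 1_000_000

print(expected_loss_payer)  # ~1.0 dollar
print(expected_loss_fdt)    # 1e-94 dollars
```

In expectation, refusing to pay wins by roughly 94 orders of magnitude.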
On to the next argument. Here, omnizoid uses Wolfgang Schwarz's Procreation problem:
omnizoid doesn't explain why he believes FDT gives the wrong recommendation here, but Schwarz does:
This is strictly true: FDT recommends procreating, because not procreating would mean you don't exist (due to the subjunctive dependence with your father). CDT'ers don't have this subjunctive dependence with their FDT father (and wouldn't even care if it were there), don't procreate, and are happier.
This problem doesn't fairly compare FDT to CDT, though. By specifying that the father follows FDT, FDT'ers can't possibly do better than procreating. Procreation directly punishes FDT'ers - not for any decision FDT makes, but for following FDT in the first place. I can easily make an analogous problem that punishes CDT'ers for following CDT:
ProcreationCDT. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed CDT. I highly value existing (even miserably existing). Should I procreate?
FDT'ers don't procreate here and live happily. CDT'ers wouldn't procreate either and don't exist. So in this variant, FDT'ers fare much better than CDT'ers.
We can also make a fair variant of Procreation - a version I've called Procreation* in the past:
Procreation*. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and I know he followed the same decision theory I do. If my decision theory were to recommend not procreating, there's a significant probability that I wouldn't exist. I prefer a miserable life to no life at all, but obviously I prefer a happy life to a miserable one. Should I procreate?
So if you're an FDT'er, your father was an FDT'er, and if you are a CDT'er, your father was a CDT'er. FDT'ers procreate and live; CDT'ers don't procreate and don't exist. FDT wins. It surprises me to this day that Schwarz didn't seem to notice his Procreation problem is unfair.
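To summarize, here's a small sketch tabulating the conclusions argued above for all three variants (ranking: happy life > miserable life > no life):

```python
# Outcomes per decision theory in the three Procreation variants, as
# argued above. This just tabulates the conclusions; it derives nothing.
RANK = {"happy life": 2, "miserable life": 1, "no life": 0}
outcomes = {
    "Procreation":    {"FDT": "miserable life", "CDT": "happy life"},
    "ProcreationCDT": {"FDT": "happy life",     "CDT": "no life"},
    "Procreation*":   {"FDT": "miserable life", "CDT": "no life"},
}
for problem, res in outcomes.items():
    better_off = max(res, key=lambda dt: RANK[res[dt]])
    print(f"{problem:15} FDT: {res['FDT']:15} CDT: {res['CDT']:15} "
          f"better off: {better_off}")
```

The first two variants each punish one theory merely for being followed; only Procreation*, where the father's decision theory matches yours, compares the two on equal footing.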
omnizoid's next argument is borrowed from William MacAskill's A Critique of Functional Decision Theory:
FDT's recommendation isn't implausible here. I doubt I could explain it much better than MacAskill himself, though, when he says:
The point seems to be that FDT'ers burn to death, but, as in the Blackmail problem, that only happens with vanishingly small probability. Unless you value your life at more than $100 trillion trillion - since you lose it with probability 1 in a trillion trillion but save $100 - Left-boxing is the correct decision.
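The break-even point is easy to compute (a sketch; the $10 million value of a life at the end is just an illustrative assumption):

```python
# Expected costs in Bomb: the predictor errs 1 in a trillion trillion
# times. Right always costs $100; Left is free unless the predictor
# erred and the bomb is really there.
TRILLION = 10**12
p_error = 1 / (TRILLION * TRILLION)

def expected_cost_left(value_of_life):
    # Left is free, but the bomb is really there if the predictor erred.
    return p_error * value_of_life

def expected_cost_right():
    # Right never kills you but always costs $100.
    return 100

break_even = 100 * TRILLION * TRILLION  # $100 trillion trillion, i.e. 1e26
print(break_even)
print(expected_cost_left(10**7))        # life valued at $10M: ~1e-17 dollars
print(expected_cost_right())            # 100 dollars
```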
One could once again counter that the bomb is already in the Left box. But again, the decision is made at two points - in your head, but also in the predictor's.
Guaranteed Payoffs? That principle, if applied, should be applied the first time your decision is made: in the head of the predictor. At that point, it's (virtually) guaranteed that Left-boxing lets you live for free.
omnizoid:
Are you actually going to argue from authority here?! I've spoken to Nate Soares, one of the authors of the FDT paper, many times, and I assure you he "knows about decision theory". Furthermore, and with all due respect to MacAskill, his post fundamentally misrepresents FDT in the Implausible Discontinuities section:
This is just wrong: the critical factor is not whether "there's an agent making predictions". The critical factor is subjunctive dependence, and there is no subjunctive dependence between S (the predictor-like process in MacAskill's example) and the decision maker here.
That's it for this post. I'm looking forward to your reaction, omnizoid!