All of Heighn's Comments + Replies

"In your problem description you said you receive the letter"

True, but the problem description also specifies subjunctive dependence between the agent and the predictor. When the predictor made her prediction, the letter hadn't yet been sent. So the agent's decision influences whether or not she gets the letter.

"This intuition is actually false for perfect predictors."

I agree (and have written extensively on the subject). But it's the prediction the agent influences, not the presence of the termite infestation.

2jacob_cannell
The payoff and optimal move naturally depend on the exact time of measurement. Before receiving any letter you can save $1000 by precommitting to not paying: but that is a move both FDT and EDT will make. But after receiving the letter (which you assumed) the optimal move is to pay the $1000 to save $1M. FDT from my understanding fails here as it retroactively precommits to not paying and thus loses $1M. So this is a good example of where EDT > FDT. The only example I've seen so far where the retroactive precommitment of FDT actually could make sense is the specific variant 5 from here, where we measure utility before the agent knows the rules or has observed anything. And even in that scenario FDT only has a net advantage if it is optimal to make the universal precommitment everywhere. EDT can decide to do that: EDT->FDT is allowed, but FDT can never switch back. So in that sense EDT is 'dominant', or the question reduces to: is the universal precommitment of FDT a win on net across the multiverse? Which is far from clear.

Given that you receive the letter, paying is indeed evidence for not having termites and winning $999,000. EDT is elegant, but still can't be correct in my view. I wish it were, and have attempted to "fix" it.

My take is this. Either you have the termite infestation, or you don't.

Say you do. Then

  • being a "payer" means you never receive the letter, as both conditions are false. As you don't receive the letter, you don't actually pay, and lose the $1,000,000 in damages.
  • being a "non-payer" means you get the letter, and you don't pay. You lose $1,000,000.

Say you... (read more)
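The case split being built here can be sketched as a quick enumeration. This is a minimal sketch; the `outcome` function and the world encoding are mine, with the $1,000 payment and $1,000,000 damages taken from the problem statement:

```python
# XOR Blackmail: the letter is sent iff exactly one of
#   (i)  no infestation, and the agent pays upon receiving the letter
#   (ii) infestation, and the agent does not pay upon receiving the letter
# Enumerate the four (infested, policy) worlds and the agent's net outcome.

DAMAGES = 1_000_000
PAYMENT = 1_000

def outcome(infested: bool, payer: bool) -> int:
    """Net dollar outcome for the agent in this world (losses are negative)."""
    letter_sent = (not infested and payer) != (infested and not payer)  # XOR
    pays = payer and letter_sent  # a "payer" only pays upon receiving the letter
    loss = (DAMAGES if infested else 0) + (PAYMENT if pays else 0)
    return -loss

for infested in (True, False):
    for payer in (True, False):
        print(f"infested={infested!s:5} payer={payer!s:5} -> net {outcome(infested, payer):>10,}")
```

In every world the non-payer does at least as well as the payer: not paying weakly dominates, which is the point the case split above is driving at.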

2jacob_cannell
In your problem description you said you receive the letter: Given that you did receive the letter, that eliminates 2 of the 4 possible worlds, and we are left with only (infested, dont_pay) and (uninfested, pay). Then the choice is obvious. EDT is correct here. Obviously if you don't receive the letter you have more options, but then it's not much of an interesting problem. This intuition is actually false for perfect predictors. A perfect predictor could simulate your mind (along with everything else) perfectly, which is somewhat equivalent to time travel. It's not actual time travel of course; in these 'perfect prediction' scenarios your future (perfectly predicted) decisions have already affected your past.

XOR Blackmail is (in my view) perhaps the clearest counterexample to EDT:

An agent has been alerted to a rumor that her house has a terrible termite infestation that would cost her $1,000,000 in damages. She doesn’t know whether this rumor is true. A greedy predictor with a strong reputation for honesty learns whether or not it’s true, and drafts a letter:

 I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter

... (read more)
2jacob_cannell
This just seems like a variant of Newcomb's problem, and EDT is naturally optimal here (as it is everywhere). Assume the predictor is never wrong and never lies. Then upon receiving the letter we know that in worlds where the house is not infested we pay, and in worlds where the house is infested we do not. So we pay and win $999,000, which is optimal. Perfect predictors are roughly equivalent to time travel. It's equivalent to filtering out all universes where the house is not infested and we don't pay, and all those where the house is infested and we pay. To compare decision algos we need a formal utility measure for our purposes of comparison. Given any such formal utility measure, we could then easily define the optimal decision algorithm - it is whatever argmaxes that measure! EDT is simply that, for the very reasonable expected utility metric.
3the gears to ascension
I think the thing @jacob_cannell is imagining is not plain CDT, EDT, or FDT, and writing out what it is he's imagining in the language of https://arxiv.org/abs/2307.10987 would clarify. I suspect the RL thing he's imagining is some mix of CDT and EDT depending on the amount of experience the agent has with a context. He'd have to clarify. I bring this up because I anticipate any language model having the correct response to that example scenario, because it has experience with those dynamics in previous language, but it'll be vulnerable to tweaked versions of that, and yet also behave CDTishly in some scenarios. These decision theories are "pure", approximation-free models, and so approximation learning systems behave differently sometimes.

Is it necessary to be able to work during all MIRI office hours, or is it enough if my hours are partially compatible? My time difference with MIRI is 9 hours, but I could work in the evening (my time) every now and then.

4Gretta Duleba
We want a great deal of overlap with Pacific time hours, yes. A nine-hour time zone difference would probably be pretty rough unless you're able to shift your own schedule by quite a bit.

Btw, thanks for your comment! I edited my post with respect to fair problems.

I think this is potentially an overly strong criterion for decision theories - we should probably restrict to something like a fair problem class, else we end up with no decision theory receiving any credence.

Good point, I should have mentioned that in my article. (Note that XOR Blackmail is definitely a fair problem, not that you are claiming otherwise.)

I also think "wrong answer" is doing a lot of work here.

I at least in part agree here. This is why I picked XOR Blackmail, because it has such an obvious right answer. That's an... (read more)

Thanks for the comment!

There is a false dichotomy in the argument basing the conclusion only on the options CDT or EDT, when in fact both are wrong.

I wouldn't say there's a false dichotomy: the argument works fine if you also have credence in e.g. FDT. It just says that altruistic, morally motivated agents should favor EDT over CDT. (However, as I have attempted to demonstrate, 2 premises of the argument don't hold up.)

Suppose you know instead that Omega was miserly and almost all of the people who one-box don't get offered the opportunity to play - let's

... (read more)

Your original description doesn't specify subjunctive dependence, which is a critical component of the problem.

Heighn’s response to this argument is that this is a perfectly fine prescription.

Note that omnizoid hasn't checked with me whether this is my response, and if he had, I would have asked him to specify the problem more. In my response article, I attempt to specify the problem more, and with that particular specification, I do indeed endorse FDT's decision.

1omnizoid
The description is exactly as you describe in your article.  I think my original was clear enough, but you describe your interpretation, and your interpretation is right.  You proceed to bite the bullet.  

I'm surprised Wei Dai thinks this is a fair point. I disagree entirely with it: FDT is a decision theory and doesn't in and of itself value anything. The values need to be given by a utility function.

Consider the Psychological Twin Prisoner's Dilemma. Given the utility function used there, the agent doesn't value the twin at all: the agent just wants to go home free as soon as possible. FDT doesn't change this: it just recognizes that the twin makes the same decision the agent does, which has bearing on the prison time the agent gets.

2interstice
I explicitly said that this difference in values is meant to reproduce the way that FDT/CDT are usually argued to act in these sorts of scenarios, but is actually orthogonal to decision theory per se. This scenario is a stronger one for the decision theory FDT. But that's not the sort of scenario I was referring to: the argument in my comment applies to scenarios where one of the copies makes itself worse off to benefit the others, like the Bomb or transparent Newcomb. These were the main topic of discussion of the post, and I still think it's accurate to say that the difference in intuitions between CDTists/FDTists here comes down to a values/semantic dispute.

...which makes the Procreation case an unfair problem. It punishes FDT'ers specifically for following FDT. If we're going to punish decision theories for their identity, no decision theory is safe. It's pretty wild to me that @WolfgangSchwarz either didn't notice this or doesn't think it's a problem.

A more fair version of Procreation would be what I have called Procreation*, where your father follows the same decision theory as you (be it FDT, CDT or whatever).

Oh wait, of course, in this problem, Omega doesn't simulate the agent. Interesting! I'll have to think about this more :-)

2Chris_Leong
Well, firstly the assumption that there's a unique way of mapping a physical system to a particular function. Physical systems can be interpreted in many different ways. Secondly, I think it's a mistake to insist that we model subjunctive linking as logical counterfactuals. My memory isn't perfect, but I don't recall seeing a justification for this choice in the FDT paper, apart from "Wouldn't it be convenient if it were true?" I suspect this comes from the allergy of much of the LW crowd to philosophy. If you say that you're dealing with logical counterfactuals, then it looks like you're dealing with mathematical formalisms - never mind that it isn't really a formalism until you pin down a lot more details, since there's no objective fact of the matter of what it would mean for a function to be equal to something that it's not. It seems much more honest to just admit that you're not yet at the formalisation stage and to follow the philosophical route of asking, "So what do we really mean by counterfactuals?" And until you have a good answer to this question, you don't want to commit yourself to a particular route, such as assuming that the solution must be some kind of formalism for dealing with non-classical logic.

A further point: we aren't just trying to imagine that, say, f(x)=1 instead of 2 because we're interested in this question in and of itself, but rather because we're trying to figure out how to make better decisions. Throwing away the why is a mistake in my books. Even if we were only looking at non-classical logics, we would be throwing away our criteria for distinguishing between different schemes. And at the point where we're keeping around our why, there's no reason for reducing the question to a mere logical one.

I also wonder whether a different problem was intended.

Thanks for the link!

And hmm, it seems to me FDT one-boxes on ASP, as that gives the most utility from the subjunctive dependence perspective.

1Heighn
Oh wait, of course, in this problem, Omega doesn't simulate the agent. Interesting! I'll have to think about this more :-)

Why would Omega put $0 in the second box? The problem statement specifies Omega puts $100 in both boxes if she predicts you will two-box!

5Ben

If I have a two box policy the simulated me gets $200 before deletion, and the real me gets nothing.

Wait, why does the real you get nothing? It's specified you get $200. What am I missing?

1Ben
2lsusr
There's a couple different ways of exploiting an FDT agent. One method is to notice that FDT agents have implicitly precommitted to FDT (rather than the theorist's intended terminal value function). It's therefore possible to contrive scenarios in which those two objectives diverge. Another method is to modify your own value function such that "make functional decision theorists look stupid" becomes a terminal value. After you do that, you can blackmail them with impunity. FDT is a reasonable heuristic, but it's not secure against pathological hostile action.

Ah, I just read your substack post on this, and you've referenced two pieces I've already reacted to (and in my view debunked) before. Seems like we could have a good debate on this :)

1omnizoid
How'd you feel about a verbal debate? 

I would love to debate you on this. My view: there is no single known problem in which FDT makes an incorrect decision. I have thought about FDT a lot and it seems quite obviously correct to me.

You should take a look this list of UDT open problems that Vladimir Slepnev wrote 13 years ago, where 2 and 3 are problems in which UDT/FDT seemingly make incorrect decisions, and 1 and 5 are definitely also serious open problems.

2Heighn
Ah, I just read your substack post on this, and you've referenced two pieces I've already reacted to (and in my view debunked) before. Seems like we could have a good debate on this :)

Ah, so your complaint is that the author is ignoring evidence pointing to shorter timelines. I understand your position better now :)

"Insofar as your distribution has a faraway median, that means you have close to certainty that it isn't happening soon. And that, I submit, is ridiculously overconfident and epistemically unhumble."

Why? You can say a similar thing about any median anyone ever has. Why is this median in particular overconfident?

3Daniel Kokotajlo
Because it's pretty obvious that there's at least some chance of AGI etc. happening soon. Many important lines of evidence support this:

  • Many renowned world experts in AI and AGI forecasting say so, possibly even most.
  • Just look at ChatGPT4.
  • Read the Bio Anchors report.
  • Learn more about AI, deep learning, etc. and in particular about scaling laws and the lottery ticket hypothesis etc., and then get up to speed with everything OpenAI and other labs are doing, and then imagine what sorts of things could be built in the next few years using bigger models with more compute and data etc.
  • Note the scarcity of any decent object-level argument that it won't happen soon.

Bio Anchors has the best arguments that it won't happen this decade, IMO. If you know of any better one I'd be interested to be linked to it or have it explained to me!

"And not only do I not expect the trained agents to not maximize the original “outer” reward signal"

Nitpick: one "not" too many?

3TurnTrout
Thanks, fixed.

I apologize, Said; I misinterpreted your (clearly written) comment.

Reading your newest comment, it seems I actually largely agree with you - the disagreement lies in whether farm animals have sentience.

-8[anonymous]

(No edit was made to the original question.)

Thanks for your answer!

I (strongly) disagree that sentience is uniquely human. It seems to me a priori very unlikely that this would be the case, and evidence does exist to the contrary. I do agree sentience is an important factor (though I'm unsure it's the only one).

7Said Achmiz
I didn’t say that sentience is uniquely human, though.

Now, to be clear: on the “a priori very unlikely” point, I don’t think I agree. I don’t actually think that it’s unlikely at all; but nor do I think that it’s necessarily very likely, either. “Humans are the only species on Earth today that are sentient” seems to me to be something that could easily be true, but could also easily be false. I would not be very surprised either way (with the caveat that “sentience” seems at least partly to admit of degrees—“partly” because I don’t think it’s fully continuous, and past a certain point it seems obvious that the amount of sentience present is “none”, i.e. I am not a panpsychist—so “humans are not uniquely sentient” would almost certainly not be the same thing as “there exist other species with sentience comparable to humans”).

But please note: nothing in the above paragraph is actually relevant to what we’ve been discussing in this thread! I’ve been careful to refer to “animals I eat”, “critters we normally eat”, “food animals”, listing examples like pigs and sheep and chickens, etc. Now, you might press me on some edge cases (what about octopuses, for instance? those are commonly enough found as food items even in the West), but on the whole, the distinction is clear enough.

Dolphins, for example, might be sentient (though I wouldn’t call it a certainty by any means), and if you told me that there’s an industry wherein dolphins are subjected to factory-farming-type conditions, I’d certainly object to such a thing almost as much as I object to, e.g., China’s treatment of Uyghurs (to pick just one salient modern example out of many possible such). But I don’t eat any factory-farmed dolphins. And the topic here, recall, is my eating habits. Neither do I eat crows, octopuses (precisely for the reason that I am not entirely confident about their lack of sentience!), etc.

"but certainly none of the things that we (legally) do with animals are bad for any of the important reasons why torture of people is bad."

That seems very overconfident to me. What are your reasons for believing this, if I may ask? What quality or qualities do humans have that animals lack that makes you certain of this?

2Said Achmiz
Sorry, could you clarify? What specifically do you think I’m overconfident about? In other words, what part of this are you saying I could be mistaken about, the likelihood of which mistake I’m underestimating? Are you suggesting that things are done to animals of which I am unaware, which I would judge to be bad (for some or all of the same reasons why torture of people is bad) if I were aware of them? Or something else?

EDIT: Ah, apologies, I just noticed on a re-read (was this added via edit after initial posting?) that you asked: This clarifies the question.

As for the answer, it’s simple enough: sentience (in the classic sense of the term)—a.k.a. “subjective consciousness”, “self-awareness”, etc. Cows, pigs, chickens, sheep… geese… deer… all the critters we normally eat… they don’t have anything like this, very obviously. (There’s no reason why they would, and they show no sign of it. The evidence here is, on the whole, quite one-sided.)

Since the fact that humans are sentient is most of what makes it bad to torture us—indeed, what makes it possible to “torture” us in the first place—the case of animals is clearly disanalogous. (The other things that make it bad to torture humans—having to do with things like social structures, game-theoretic incentives, etc.—apply to food animals even less.)

One-boxing on Newcomb's Problem is good news IMO. Why do you believe it's bad?

2Noosphere89
It basically comes down to the fact that agents using too smart decision theories like FDT or UDT can fundamentally be deceptively aligned, even if myopia is retained by default. That's the problem with one-boxing in Newcomb's problem, because it implies that our GPTs could very well become deceptively aligned. Link below: https://www.lesswrong.com/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia The LCDT decision theory does prevent deception, assuming it's implemented correctly. Link below:

Of course! Thanks for your time.

I can, although I indeed don't think it is nonsense.

What do you think our (or specifically my) viewpoint is?

1Said Achmiz
I’m no longer sure; you and green_leaf appear to have different, contradictory views, and at this point that divergence has confused me enough that I could no longer say confidently what either of you seem to be saying without going back and carefully re-reading all the comments. And that, I’m afraid, isn’t something that I have time for at the moment… so perhaps it’s best to write this discussion off, after all.

Hmm, interesting. I don't know much about UDT. From an FDT perspective, I'd say that if you're in the situation with the bomb, your decision procedure already Right-boxed and therefore you're Right-boxing again, as a logical necessity. (Making the problem very interesting.)

Sorry, I'm having trouble understanding your point here. I understand your analogy (I was a developer), but am not sure what you're drawing the analogy to.

I've been you ten years ago.

Just... no. Don't act like you know me, because you don't. I appreciate you trying to help, but this isn't the way.

2Vladimir_Nesov
These norms are interesting in how well they fade into the background and oppose being examined. If you happen to be a programmer or have enough impression of what that might be like, just imagine a programmer team where talking about bugs can be taboo in some circumstances, especially if they are hypothetical bugs imagined out of whole cloth to check if they happen to be there, or brought to attention to see if it's cheap to put measures in place to prevent their going unnoticed, even if it eventually turns out that they were never there to begin with in actuality. With rationality, that's hypotheses about how people think, including hypotheses about norms that oppose examination of such hypotheses and norms.

Seems to me Yudkowsky was (way) too pessimistic about OpenAI there. They probably knew something like this would happen.

To explain my view more, the question I try to answer in these problems is more or less: if I were to choose a decision theory now to strictly adhere to, knowing I might run into the Bomb problem, which decision theory would I choose?

"But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot."

This is what we agree on. If you're in the situation with a bomb, all that matters is the bomb.

My stance is that Left-boxers virtually never get into the situation to begin with, because of the prediction Omega makes. So with probability close to 1, they never see a bomb.

Your stance (if I understand correctly) is that the problem statement says there is a bomb, so, that's what's true with... (read more)

4Vladimir_Nesov
But that's false for a UDT agent, it still matters to that agent-instance-in-the-situation what happens in other situations, those without a bomb, it's not the case that all that matters is the bomb (or even a bomb).
1Heighn
To explain my view more, the question I try to answer in these problems is more or less: if I were to choose a decision theory now to strictly adhere to, knowing I might run into the Bomb problem, which decision theory would I choose?

I see your point, although I have entertained Said's view as well. But yes, I could have done better. I tend to get like this when my argumentation is being called crazy, and I should have done better.

You could have just told me this instead of complaining about me to Said though.

I don't see how it is misleading. Achmiz asked what actually happens; it is, in virtually all possible worlds, that you live for free.

2Vladimir_Nesov
It is misleading because Said's perspective is to focus on the current situation, without regarding the other situations as decision relevant. From UDT perspective you are advocating, the other situations remain decision relevant, and that explains much of what you are talking about in other replies. But from that same perspective, it doesn't matter that you live in the situation Said is asking about, so it's misleading that you keep attention on this situation in your reply without remarking on how that disagrees with the perspective you are advocating in other replies. In the parent comment, you say "it is, in virtually all possible worlds, that you live for free". This is confusing: are you talking about the possible worlds within the situation Said was asking about, or also about possible worlds outside that situation? The distinction matters for the argument in these comments, but you are saying this ambiguously.

Note that it's my argumentation that's being called crazy, which is a large factor in the "antagonism" you seem to observe - a word choice I don't agree with, btw.

About the "needlessly upping the heat", I've tried this discussion from multiple different angles, seeing if we can come to a resolution. So far, no, alas, but not for lack of trying. I will admit some of my reactions were short and a bit provocative, but I don't appreciate nor agree with your accusations. I have been honest in my reactions.

2Vladimir_Nesov
I've been you ten years ago. This doesn't help; courtesy or honesty (purposes that tend to be at odds with each other) aren't always sufficient. It's also necessary to entertain strange points of view that are obviously wrong, in order to talk in another's language, and to de-escalate where escalation won't help (it might help with feeding norms, but knowing what norms you are feeding is important). And often enough that is still useless and the best thing is to give up. Or at least more decisively overturn the chess board, as I'm doing with some of the last few comments to this post, to avoid remaining in an interminable failure mode.

Interesting. I'm having the opposite experience (due to timing, apparently), where at least it's making some sense now. I've seen it using tricks only applicable to addition and pulling numbers out of its ass, so I was surprised what it did wasn't completely wrong.

Asking the same question again even gives a completely different (but again wrong) result:

If you ask ChatGPT to multiply two 4-digit numbers it writes out the reasoning process in natural knowledge and comes to the right answer.

People keep saying such things. Am I missing something? I asked it to calculate 1024 * 2047, and the answer isn't even close. (Though to my surprise, the first 2 steps are at least correct steps, and not nonsense. And it is actually adding the right numbers together in step 3, again, to my surprise. I've seen it perform much, much worse.)
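For reference, the product can be checked with the same partial-products decomposition a correct natural-language walkthrough would use (the split 2047 = 2000 + 47 is my choice of decomposition, not necessarily the one the model attempted):

```python
# Check 1024 * 2047 by long multiplication over partial products.
a = 1024
partial_thousands = a * 2000   # 2,048,000
partial_rest = a * 47          # 48,128
total = partial_thousands + partial_rest
print(total)  # 2096128
assert total == 1024 * 2047
```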

2ChristianKl
I did ask it at the beginning to multiply numbers, and it seems to behave differently now than it did 5 weeks ago and isn't making correct multiplications anymore. Unfortunately, I can't access the old chats.
1Heighn
Asking the same question again even gives a completely different (but again wrong) result:

That's what I've been saying to you: a contradiction.

And there are two ways to resolve it.

The scenario also stipulates the bomb isn't there if you Left-box.

What actually happens? Not much. You live. For free.

"So if you take the Left box, what actually, physically happens?"

You live. For free. Because the bomb was never there to begin with.

Yes, the situation does say the bomb is there. But it also says the bomb isn't there if you Left-box.

1Vladimir_Nesov
This is misleading. What happens is that the situation you found yourself in doesn't take place with significant measure. You live mostly in different situations, not this one.
2Said Achmiz
At the very least, this is a contradiction, which makes the scenario incoherent nonsense. (I don’t think it’s actually true that “it also says the bomb isn’t there if you Left-box”—but if it did say that, then the scenario would be inconsistent, and thus impossible to interpret.)

Agreed, but I think it's important to stress that it's not like you see a bomb, Left-box, and then see it disappear or something. It's just that Left-boxing means the predictor already predicted that, and the bomb was never there to begin with.

Put differently, you can only Left-box in a world where the predictor predicted you would.

2Said Achmiz
What stops you from Left-boxing in a world where the predictor didn’t predict that you would?

To make the question clearer, let’s set aside all this business about the fallibility of the predictor. Sure, yes, the predictor’s perfect, it can predict your actions with 100% accuracy somehow, something about algorithms, simulations, models, whatever… fine. We take all that as given.

So: you see the two boxes, and after thinking about it very carefully, you reach for the Right box (as the predictor always knew that you would). But suddenly, a stray cosmic ray strikes your brain! No way this was predictable—it was random, the result of some chain of stochastic events in the universe. And though you were totally going to pick Right, you suddenly grab the Left box instead. Surely, there’s nothing either physically or logically impossible about this, right?

So if the predictor predicted you’d pick Right, and there’s a bomb in Left, and you have every intention of picking Right, but due to the aforesaid cosmic ray you actually take the Left box… what happens?
2Said Achmiz
But the scenario stipulates that the bomb is there. Given this, taking the Left box results in… what? Like, in that scenario, if you take the Left box, what actually happens?
1green_leaf
Yes, that's correct. By executing the first algorithm, the bomb has never been there. Here it's useful to distinguish between agentic 'can' and physical 'can.' Since I assume a deterministic universe for simplification, there is only one physical 'can.' But there are two agentic 'can''s - no matter the prediction, I can agentically choose either way. The predictor's prediction is logically posterior to my choice, and his prediction (and the bomb's presence) are the way they are because of my choice. So I can Left-box even if there is a bomb in the left box, even though it's physically impossible. (It's better to use agentic can over physical can for decision-making, since that use of can allows us to act as if we determined the output of all computations identical to us, which brings about better results. The agent that uses the physical can as their definition will see the bomb more often.) Unless I'm missing something.

I think we agree. My stance: if you Left-box, that just means the predictor predicted that with probability close to 1. From there on, there are a trillion trillion - 1 possible worlds where you live for free, and 1 where you die.

I'm not saying "You die, but that's fine, because there are possible worlds where you live". I'm saying that "you die" is a possible world, and there are way more possible worlds where you live.
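That worlds count can be turned into a rough expected-value comparison between the two policies. A minimal sketch under stated assumptions: the predictor's 1-in-a-trillion-trillion error rate and the $100 cost of Right-boxing come from the Bomb problem, while the finite dollar value assigned to dying is an arbitrary stand-in of mine:

```python
# Bomb problem, compared at the level of policies.
# Assumptions (mine, for illustration):
#   - the predictor errs once per trillion trillion trials (1e-24)
#   - taking the Right box always costs $100
#   - dying to the bomb is valued at -$1e9, an arbitrary large finite stand-in
ERR = 1e-24
COST_RIGHT = -100.0
DEATH = -1e9

# A committed Left-boxer faces a bomb only when the predictor errs.
ev_left = (1 - ERR) * 0.0 + ERR * DEATH   # ~ -1e-15
# A committed Right-boxer always pays the $100.
ev_right = COST_RIGHT

print(ev_left, ev_right)
```

Under these numbers the Left-boxing policy loses a negligible expected amount, which is the sense in which "you die" is a possible world but an overwhelmingly unlikely one.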

I'm not going to make you cite anything. I know what you mean. I said Right-boxing is a consequence, given a certain resolution of the problem; I always maintained Left-boxing is the correct decision. Apparently I didn't explain myself well, that's on me. But I'm kinda done, I can't seem to get my point across (not saying it's your fault btw).

By construction it is not, because the scenario is precisely that we find ourselves in one such exceptional case; the posterior probability (having observed that we do so find ourselves) is thus ~1.

Except that we don't find ourselves there if we Left-box. But we seem to be going around in a circle.

… but you have said, in a previous post, that if you find yourself in this scenario, you Right-box. How to reconcile your apparently contradictory statements…?

Right-boxing is the necessary consequence if we assume the predictor's Right-box prediction is fixed now... (read more)

2Said Achmiz
There’s no “if” about it. The scenario is that we do find ourselves there. (If you’re fighting the hypothetical, you have to be very explicit about that, because then we’re just talking about two totally different, and pretty much unrelated, things. But I have so far understood you to not be doing that.) I don’t know what you mean by “apparently”. You have two boxes—that’s the scenario. Which do you choose—that’s the question. You can pick either one; where does “apparently” come in? What does this mean? The boxes are already in front of you. You just said in this very comment that you Right-box in the given scenario! (And also in several other comments… are you really going to make me cite each of them…?)

No, that's just plain wrong. If you Left-box given a perfect predictor, the predictor didn't put a bomb in Left. That's a given. If the predictor did put a bomb in Left and you Left-box, then the predictor isn't perfect.
