Comment author: Eliezer_Yudkowsky 19 March 2009 07:45:47PM 5 points

Nope. I don't care what the quirks in my neurology do - I don't care what answer the material calculator returns, only the answer to 2 + 2 = ?

Comment author: fractalman 21 July 2013 05:10:54AM * -2 points

Meh, the original is badly worded.

Take 2. Omega notices a neuro-quirk. Then, based on what he's noticed, he offers you a 50/50 bet of your $100 against his $43.25, at just the right time and with just the right intonation...

NOW do you take that bet?

...Why yes, yes you do. Even you. And you know it. It's related to why you don't think boxing an AI is the answer. Only, Omega's already out of the box, and so can adjust your visual and auditory input with a much higher degree of precision.

Comment author: jimmy 19 March 2009 07:48:15PM * 2 points

That's just like playing "Eeny, meeny, miny, moe" to determine who's 'it'. Once you figure out if there's an even or odd number of words, you know the answer, and it isn't random to you anymore. This may be great as a kid choosing who gets a cookie (wow! I win again!), but you're no longer talking about something that can go either way.

For a random output of a known function, you still need a random input.
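A minimal sketch of that point, assuming an illustrative 20-beat count and made-up player names (the rhyme's true word count doesn't matter for the argument):

```python
# A counting-out rhyme is a deterministic function: fix the number of
# beats and the starting player, and the "random" pick never changes.
def counting_out(players, beats=20, start=0):
    # The count lands on the (start + beats - 1)-th position, modulo
    # the number of players.
    return players[(start + beats - 1) % len(players)]

kids = ["you", "your sibling"]
print(counting_out(kids))  # same winner...
print(counting_out(kids))  # ...every single time: no random input,
                           # so no random output.
```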

Comment author: fractalman 21 July 2013 04:56:18AM * 0 points

The trick with eeny-meeny-miney-moe is that it's long enough for us not to consciously and quickly identify whether the saying is odd or even, whether it gives a 0, 1, or 2 modulo 3, etc., unless we TRY to remember what it produces, or TRY to work out whether it's odd or even before pointing it out. Knowing that doing so consciously ruins its capacity as a randomizer, we can turn to memory decay to restore some of the pseudo-random quality. Basically, by sufficiently decoupling "point at A" from "choose A" in our internal cognitive algorithms, we change the way we route visual input and spit out a "point at X".

THAT"S where the randomness of eeny-meeny-miney-moe comes in...though I've probably got only one use left of it when it comes to situations with 2 items thanks to writing this up...

Comment author: [deleted] 31 May 2009 01:43:03AM 1 point

And now I'll try to calculate what you should treat as the probability that you're being emulated. Assume that Omega only emulates you if the coin comes up heads.

Suppose you decide beforehand that you are going to give Omega the $100, as you ought to. The expected value of this is $4950, as has been calculated.

Suppose that instead, you let E be the probability that you're being emulated, given that you hear the coin came up tails. You'll still decide to give Omega the $100; therefore, your expected value if you hear that it came up heads is $10,000. Your expected value if you hear that the coin came up tails is -$100(1-E) + $10,000E.

These probabilities should satisfy P(H) + P(T and ~E) + P(T and E) = 0, P(H) = P(T and ~E), P(T and ~E) = P(T) - P(T and E), P(T and E) = P(E|T) * P(T). Solving these equations, I get P(E|T) = 2, which probably means I've made a mistake somewhere. If not, c'est l'Omega?
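An aside on the algebra: the three cases in the first equation are exhaustive, so they should sum to 1, not 0; and with the 0, the system really does force P(E|T) = 2 (substitute P(H) = P(T and ~E) and P(T and ~E) = P(T) - P(T and E) into the first equation to get 2P(T) = P(T and E)). As a sanity check, here is a Monte Carlo sketch, under the extra assumption that Omega runs exactly one emulation, and tells it "tails", whenever the real coin lands heads; it puts the probability near 1/2:

```python
import random

# Assumptions (not from the comment above): fair coin; on heads, Omega
# runs exactly one emulation of you and tells it "tails"; on tails, the
# real you is told "tails". Among the observer-moments that hear
# "tails", what fraction are emulations?
def p_emulated_given_hearing_tails(trials=100_000):
    heard_tails, emulated = 0, 0
    for _ in range(trials):
        if random.random() < 0.5:   # real coin: heads
            heard_tails += 1        # ...so the emulation hears "tails"
            emulated += 1
        else:                       # real coin: tails
            heard_tails += 1        # ...so the real you hears "tails"
    return emulated / heard_tails

print(p_emulated_given_hearing_tails())  # ~0.5 under these assumptions
```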

In response to comment by [deleted] on Counterfactual Mugging
Comment author: fractalman 21 July 2013 04:51:16AM -2 points

Um... let's see....

To REALLY evaluate that, we technically need to know how long Omega runs the simulation for.

Now, we have two options: one, assume Omega keeps running the simulation indefinitely; two, assume that Omega shuts the simulation down once he has the info he's looking for (and before he has to worry about debugging the simulation).

In #1, what we are left with is P(S) = 1/3, P(H) = 1/3, P(T) = 1/3, which means we're moving $200/3 from part of our possibility cloud to gain $10,000/3 in another part.
In #2, we're moving a total of $100/2 to gain $10,000/2. The $100 in the simulation is quantum-virtual.

So, unless you have reason to suspect Omega is running a LOT of simulations of you, AND not terminating them after a minute or so (i.e., is not inadvertently simulation-mugging you)...

You can generally treat Omega's simulation capacity as a dashed causality arrow from one universe to another, sort of like the shadow produced by the simulation...
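A quick sketch of that arithmetic; the equal state weights are the assumption made above, with S standing for the simulated state:

```python
# Option #1: the simulation runs indefinitely, so three equally
# weighted states: Simulated (pay $100), Heads (gain $10,000),
# Tails (pay $100).
ev_option_1 = (-100 - 100 + 10_000) / 3   # about +$3,266.67

# Option #2: the simulation is terminated, so its $100 is "virtual"
# and only the two real states count, each at weight 1/2.
ev_option_2 = (-100 + 10_000) / 2         # +$4,950.00

print(f"option #1: {ev_option_1:+.2f}")
print(f"option #2: {ev_option_2:+.2f}")
# Either accounting still favors handing over the $100; the options
# only disagree on how much of the simulated loss to count.
```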

Comment author: Nebu 19 March 2009 09:15:32PM 0 points

This is a big issue which I unsuccessfully tried to address in my non-existent 6+ paragraph explanation. Why the heck is Omega making bets if he can already predict everything anyway?

That said, it's not clear that when Omega offers you a bet, you should automatically refuse it under the assumption that Omega is trying to "beat" you. It seems like Omega doesn't really mind giving away money (pretty reasonable for an omniscient entity), since he seems to be willing to leave boxes with millions of dollars in them just lying around.

What Omega's purpose is remains entirely unknown. Maybe he wants you to win these bets. If you're a rational person who "wants to win", I think you can just "not worry" about what Omega's intents are, and figure out what sequence of actions maximizes your utility (which in these examples always seems to directly translate into maximizing the amount of money you get).

Comment author: fractalman 21 July 2013 04:48:15AM 1 point

Quantum coins. Seriously. They're easy enough to predict if you accept many-worlds.
As for the rest... entertainment? Could be a case of "even though I can predict these humans so well, it's fascinating just how many of them two-box no matter how obvious I make it."
It's not impossible: we know that we exist, and it is not impossible that some race resembling our own figured out a sufficient solution to the Löb problem and became a race of Omegas...

Comment author: conchis 19 March 2009 10:24:56PM * 7 points

"Perfect knowledge would mean I also knew in advance that the coin would come up tails."

This seems crucial to me.

Given what I know when asked to hand over the $100, I would want to have pre-committed to not pre-committing to hand over the $100 if offered the original bet.

Given what I would know if I were offered the bet before discovering the outcome of the flip I would wish to pre-commit to handing it over.

From which information set should I evaluate this? The information set I am actually at seems the most natural choice, and it also seems to be the one that WINS (at least in this world).

What am I missing?
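A minimal sketch of the two evaluations being contrasted here, using the payoffs from the post ($10,000 if the coin came up heads and you would have paid; -$100 if it came up tails and you pay):

```python
# Evaluated from the information set *before* the flip: committing
# to pay on tails is worth the average of the two branches.
ev_precommit = 0.5 * 10_000 + 0.5 * (-100)   # +$4,950

# Evaluated from the information set *after* learning "tails":
# paying is a sure loss, refusing is a sure zero.
ev_pay_after_tails = -100
ev_refuse_after_tails = 0

print(ev_precommit)          # 4950.0 -> pre-committing wins ex ante
print(ev_pay_after_tails)    # -100   -> paying "loses" ex post
print(ev_refuse_after_tails) # 0      -> refusing wins from here
```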

Comment author: fractalman 21 July 2013 04:10:56AM -2 points

I'll give you the quick and dirty patch for dealing with Omega: there is no way to know that, at that moment, you are not inside of his simulation. By giving him the $100, there is a chance you are transferring that money from within a simulation (which is about to be terminated) to outside of the simulation, with a nice big multiplier.

Comment author: swestrup 16 April 2009 08:39:14PM 0 points

If we assume I'm rational, then I'm not going to assume anything about Omega. I'll base my decisions on the given evidence. So far, that appears to be no more and no less than what Omega cares to tell us.

Comment author: fractalman 21 July 2013 04:08:20AM 0 points

Fine, then interchange "assume Omega is honest" with, say, "I've played a billion rounds of one-box two-box with him"... It should be close enough.

Comment author: MBlume 19 March 2009 11:19:44AM * 14 points

I'm actually not quite satisfied with it. Probability is in the mind, which makes it difficult to know what I mean by "perfect knowledge". Perfect knowledge would mean I also knew in advance that the coin would come up tails.

I know giving up the $100 is right, I'm just having a hard time figuring out what worlds the agent is summing over, and by what rules.

ETA: I think "if there was a true fact which my past self could have learned, which would have caused him to precommit etc." should do the trick. Gonna have to sleep on that.

ETA2: "What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.

Comment author: fractalman 21 July 2013 04:07:39AM -1 points

"Perfect knowledge"

Use a quantum coin; it conveniently comes up both.

Comment author: bogdanb 01 April 2009 07:08:00PM 4 points

It's not capricious in the sense you give: you are capable of predicting some of its actions. Because it's assumed Omega is perfectly trustworthy, you can predict with certainty what it will do if it tells you what it will do.

So, if it says it'll give you $10k under some condition (say, if you one-box its challenge), you can predict that it'll give you the money if that condition arises.

If it were capricious in the sense of complete inability of being predicted, it might amputate three of your toes and give you a flower garland.

Note that the problem supposes you do have certainty that Omega is trustworthy; I see no way of reaching that epistemological state, but then again I see no way Omega could be omnipotent, either.


On a somewhat unrelated note, why would Omega ask you for $100 if it had simulated you wouldn't give it the money? Also, why would it do the same if it had simulated you would give it the money? What possible use would an omnipotent agent have for $100?

Comment author: fractalman 21 July 2013 04:06:13AM 0 points

Omega is assumed to be mildly bored and mildly anthropomorphic. And his asking you for $100 could always be PART of the simulation.

Comment author: Velorien 19 July 2013 12:29:48AM 9 points

"Don't turn into a giant snake. That never helps."

Haven't we already had Quirrell all but explicitly invert this? He's admitted to reading the Evil Overlord List, and his comment on the Animagus transformation is "all sensible people do, if they can. Thus very rare."

Comment author: fractalman 20 July 2013 10:38:35PM 3 points

"become animagus" is a bit more general than "turn into a giant snake". The original evil-overloard rule is about how turning into a snake lets the hero kill you without losing alignment points, which is why it's such a bad idea.

That ISN'T what Quirrell does. He uses it to slip into Harry's pouch instead and reduce the sense of doom. Much smarter than Jafar.

Comment author: CAE_Jones 18 July 2013 11:11:49AM 1 point

Agreed (I get the impression this was how it was supposed to work in canon as well, with the chief difference being that Voldemort was much weaker and avoided taking direct control, so Quirrell was still capable of attempting wandless magic by the end of the year). I'm actually a little curious as to whether or not the unicorn blood detail has any relevance to HPMoR, but it hasn't been mentioned yet that I remember, so probably not.

Comment author: fractalman 20 July 2013 09:48:13PM 0 points

It MIGHT be the glint of silver in chapter 1. Maybe. :shrugs:
