WrongBot comments on Open Thread: July 2010, Part 2 - Less Wrong

Post author: Alicorn 09 July 2010 06:54AM


Comment author: WrongBot 31 July 2010 12:50:26AM 0 points

A general question about decision theory:

Is it possible to assign a non-zero prior probability to statements like "my memory has been altered", "I am suffering from delusions", and "I live in a perfectly simulated matrix"?

Apologies if this has been answered elsewhere.

Comment author: ocr-fork 31 July 2010 12:57:05AM 2 points

The first two questions aren't about decisions.

"I live in a perfectly simulated matrix"?

This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."

Comment author: Blueberry 31 July 2010 02:01:01AM 1 point

No, it's not meaningless, because if it's true, the matrix's implementers could decide to intervene (or for that matter create an afterlife simulation for all of us). If it's true, there's also the possibility of the simulation ending prematurely.

Comment author: katydee 31 July 2010 01:40:17AM 1 point

Yes.

Comment author: ata 31 July 2010 01:21:37AM * 0 points

Is it possible to assign a non-zero prior probability to statements like "my memory has been altered", "I am suffering from delusions", and "I live in a perfectly simulated matrix"?

Of course we have to assign non-zero probabilities to them, but I'm not quite sure how we'd figure out the right priors. Assuming that the hypotheses that your memory has been altered or you're delusional do not actually cause you to anticipate anything differently (see the bit about the blue tentacle in Technical Explanation), you may as well live in whatever reality appears to you to be the outermost one accessible to your mind.

(As for the last one, Nick Bostrom argues that we can actually assign a very high probability to a statement somewhat similar to "I live in a perfectly simulated matrix" — see the Simulation Argument. I have doubts about the meaningfulness of that on the basis of modal realism, but I'm not too confident one way or the other.)

Comment author: PaulAlmond 18 August 2010 02:55:54AM 0 points

I disagree with the idea that modal realism, whether right or not, changes the chances of any particular hypothesis like that being true. I am not saying that we can never have a rational belief about whether or not modal realism is true: there may or may not be a philosophical justification for it. However, I do think that whether modal realism applies has no bearing on the probability of your being in some situation, such as a computer simulation. I think this issue needs debating, so to give us something well-defined to argue for or against, I have asserted it as a rule, which I call "The Principle of Modal Realism Equivalence". I define the rule and give a (short) justification of it here: http://www.paul-almond.com/ModalRealismEquivalence.pdf.

Comment author: WrongBot 31 July 2010 05:57:12AM * 0 points

But what if you should anticipate things very differently if your memory has been altered? If I assigned a high probability to my memory having been altered, then I should expect that the technology to alter memories exists, along with all manner of even stranger things that such technology would imply. Figuring out what prior to assign to a case like that, or whether it can be done at all, is what I'm struggling with.

Comment author: CronoDAS 31 July 2010 12:55:05AM 0 points

Why not?

Comment author: WrongBot 31 July 2010 01:03:05AM 0 points

"Where'd you get your universal prior, Neo?"

Eliezer seems to think (or, at least, he did at the time) that this isn't a solvable problem. To phrase the question in a way more relevant to recent discussions: are those statements in any way similar to "a halting oracle exists"?

Comment author: saturn 31 July 2010 06:35:05AM 0 points

Solomonoff's prior can't predict anything uncomputable, but I don't see anything obviously uncomputable about any of the three statements you asked about.

Comment author: WrongBot 31 July 2010 07:02:01PM 0 points

Right. But can it predict computable scenarios in which it is wrong?

Comment author: saturn 31 July 2010 09:21:28PM 0 points

Yes. Anything that can be represented by a Turing machine gets a nonzero prior, and its model of itself goes in the same Turing machine as the rest of the world.
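The idea above can be illustrated with a toy sketch. This is not the actual Solomonoff prior (which is uncomputable); it just shows the structural point: if each hypothesis is encoded as a finite program (here, a bit string) and weighted by 2^-(length+1), then every computable hypothesis, including "my memory has been altered", receives a strictly positive prior, and the total weight stays bounded. The bit-string encodings below are arbitrary placeholders, not real program encodings.

```python
def prefix_weight(program_bits: str) -> float:
    """Toy complexity weighting: a program of length n gets weight 2^-(n+1).

    Under a prefix-free encoding, these weights sum to at most 1 (Kraft's
    inequality), so they can serve as an (unnormalized) prior.
    """
    return 2.0 ** -(len(program_bits) + 1)

# Hypothetical encodings; the lengths are arbitrary, chosen only for illustration.
hypotheses = {
    "my memory has been altered": "110101",
    "I am suffering from delusions": "1011",
    "I live in a perfectly simulated matrix": "11101001",
}

priors = {name: prefix_weight(bits) for name, bits in hypotheses.items()}

# Every finite program, however strange the hypothesis it encodes,
# gets a nonzero prior; shorter (simpler) programs get more weight.
assert all(p > 0 for p in priors.values())
assert priors["I am suffering from delusions"] > priors["my memory has been altered"]
```

The point of the sketch is only that nothing computable is assigned probability zero; which encoding lengths are "right" is exactly the hard question WrongBot is asking about.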