Will_Newsome comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comment author: Will_Newsome 03 October 2010 03:01:34AM *  55 points [-]

This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.

We are living in a simulation (some agent's (agents') computation). Almost certain. >99.5%.

(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you're ruining my fun and you know what I mean.)

Comment author: Will_Newsome 03 October 2010 06:11:06AM *  10 points [-]

I am shocked that more people believe in a 95% chance of advanced flying saucers than a 99.5% chance of not being in 'basement reality'. Really?! I still think all of you upvoters are irrational! Irrational I say!

Comment author: LucasSloan 03 October 2010 08:03:57AM 0 points [-]

I certainly agree with you now, but it wasn't entirely certain what you meant by your statement. A qualifier might help.

Comment author: Will_Newsome 03 October 2010 08:07:50AM 0 points [-]

Most won't see the need for precision, but you're right, I should add a qualifier for those who'd (justifiably) like it.

Comment author: Perplexed 04 October 2010 12:31:56AM 2 points [-]

Help! There is someone reasoning in terms of decision theoretic significantness ruining my fun by telling me that my disagreement with you is meaningless.

Comment author: Will_Newsome 04 October 2010 01:13:57AM 3 points [-]

Ahhh! Ahhhhh! I am extremely reluctant to go into long explanations here. Have you read the TDT manual though? I think it's up at the singinst.org website now, finally. It might dissolve confusions of interpretation, but no promises. Sorry, it's just a really tricky and confusing topic with lots of different intuitions to take into account and I really couldn't do it justice in a few paragraphs here. :(

Comment author: Nick_Tarleton 05 October 2010 08:16:38AM *  3 points [-]

99.5%

I'm surprised to hear you say this. Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

Comment author: Will_Newsome 05 October 2010 10:48:02PM 2 points [-]

Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

That is a good question. I feel like asking 'in what direction would structural uncertainty likely bend my thoughts?' leads me to think, from past trends, 'towards the world being bigger, weirder, and more complex than I'd reckoned'. This seems to push higher than 99.5%. If you keep piling on structural uncertainty, like if a lot of things I've learned since becoming a rationalist and hanging out at SIAI become unlearned, then this trend might be changed to a more scientific trend of 'towards the world being bigger, less weird, and simpler than I'd reckoned'. This would push towards lower than 99.5%.

What are your thoughts? I realize that probabilities aren't meaningful here, but they're worth naively talking about, I think. Before you consider what you can do decision theoretically you have to think about how much of you is in the hands of someone else, and what their goals might be, and whether or not you can go meta by appeasing those goals instead of your own and the like. (This is getting vaguely crazy, but I don't think that the craziness has warped my thinking too much.) Thus thinking about 'how much measure do I actually affect with these actions' is worth considering.

Comment author: wedrifid 05 October 2010 10:07:31AM 0 points [-]

Our point-estimate best model plausibly says so, but, structural uncertainty? (It's not privileging the non-simulation hypothesis to say that structural uncertainty should lower this probability, or is it?)

That's a good question. My impression is that it is somewhat. But in the figures we are giving here we seem to be trying to convey two distinct concepts (not just likelihoods).

Comment author: LucasSloan 03 October 2010 07:09:48AM 3 points [-]

What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation" in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation" in which case I'm not sure what it means to assign a probability to the statement.

Comment author: Will_Newsome 03 October 2010 07:31:36AM *  0 points [-]

What do you mean by this? Do you mean "a non-negligible fraction of my measure is in a simulation" in which case you're almost certainly right. Or do you mean "this particular instantiation of me is in a simulation" in which case I'm not sure what it means to assign a probability to the statement.

So you know which I must have meant, then. I do try to be almost certainly right. ;)

(Technically, we shouldn't really be thinking about probabilities here either because it's not important and may be meaningless decision theoretically, but I think LW is generally too irrational to have reached the level of sophistication such that many would pick that nit.)

Comment author: Mass_Driver 03 October 2010 05:14:07AM 4 points [-]

Propositions about the ultimate nature of reality should never be assigned probability greater than 90% by organic humans, because we don't have any meaningful capabilities for experimentation or testing.

Comment author: Jonathan_Graehl 03 October 2010 07:38:25AM 2 points [-]

Yep. Over-reliance on anthropic arguments IMO.

Comment author: Will_Newsome 03 October 2010 08:15:21AM *  2 points [-]

Huh, querying my reasons for thinking 99.5% is reasonable, few are related to anthropics. Most of it is antiprediction about the various implications of a big universe, as well as the antiprediction that we live in such a big universe.

(ETA: edited out 'if any', I do indeed have a few arguments from anthropics, but not in the sense of typical anthropic reasoning, and none that can be easily shared or explained. I know that sounds bad. Oh well.)

Comment author: Will_Newsome 03 October 2010 05:16:01AM 2 points [-]

Pah! Real Bayesians don't need experiment or testing; Bayes transcends the epistemological realm of mere Science. We have way more than enough data to make very strong guesses.

Comment author: [deleted] 03 October 2010 05:26:03AM 1 point [-]

This raises an interesting point: what do you think about the Presumptuous Philosopher thought experiment?

Comment author: AlephNeil 07 October 2010 07:35:24AM *  2 points [-]

If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything. Even assuming it does, Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect.

On the other hand, if 'living in a simulation' is restricted to those scenarios where there is a two-way interaction between beings 'inside' and 'outside' the simulation then surely everything we know about science - the uniformity and universality of physical laws - suggests that this is false. At least, it wouldn't merit 99.5% confidence. (The counterarguments are essentially the same as those against the existence of a God who intervenes.)

Comment author: Will_Newsome 07 October 2010 10:28:34AM *  3 points [-]

If 'living in a simulation' includes those scenarios where the beings running the simulation never intervene then I think it's a non-trivial philosophical question whether "we are living in a simulation" actually means anything.

It's a nontrivial philosophical question whether 'means anything' means anything here. I would think 'means anything' should mean 'has decision theoretic significance'. In which case knowing that you're in a simulation could mean a lot.

First off, even if the simulators don't intervene, we still intervene on the simulators just by virtue of our existence. Decision theoretically it's still fair game, unless our utility function is bounded in a really contrived and inelegant way.

(Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).)

[S]urely everything we know about science - the uniformity and universality of physical laws - suggests that this is false.

What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?

Comment author: AlephNeil 07 October 2010 01:11:09PM *  1 point [-]

It's a nontrivial philosophical question whether 'means anything' means anything here.

Oh sure - non-trivial philosophical questions are funny like that.

Anyway, my idea is that for any description of a universe, certain elements of that description will be ad hoc mathematical 'scaffolding' which could easily be changed without meaningfully altering the 'underlying reality'. A basic example of this would be a choice of co-ordinates in Newtonian physics. It doesn't mean anything to say that this body rather than that one is "at rest".

Now, specifying a manner in which the universe is being simulated is like 'choosing co-ordinates' in that, to do a simulation, you need to make a bunch of arbitrary ad hoc choices about how to represent things numerically (you might actually need to be able to say "this body is at rest"). Of course, you also need to specify the laws of physics of the 'outside universe' and how the simulation is being implemented and so on, but perhaps the difference between this and a simple 'choice of co-ordinates' is a difference in degree rather than in kind. (An 'opaque' chunk of physics wrapped in a 'transparent' mathematical skin of varying thickness.)

I'm not saying this account is unproblematic - just that these are some pretty tough metaphysical questions, and I see no grounds for (near-)certainty about their correct resolution.

(Your link is way too long for me to read. But I feel confident in making the a priori guess that Putnam's just wrong, and is trying too hard to fit non-obvious intuitive reasoning into a cosmological framework that is fundamentally mistaken (i.e., non-ensemble).)

He's not talking about ensemble vs 'single universe' models of reality, he's talking about reference - what it's possible for someone to refer to. He may be wrong - I'm not sure - but even when he's wrong he's usually wrong in an interesting way. (Like this.)

What if I told you I'm a really strong and devoted rationalist who has probably heard of all the possible counterarguments and has explicitly taken into account both outside view and structural uncertainty considerations, and yet still believes 99.5% to be reasonable, if not perhaps a little on the overconfident side?

I'm unmoved - it's trite to point out that even smart people tend to be overconfident in beliefs that they've (in some way) invested in. (And please note that the line you were responding to is specifically about the scenario where there is 'intervention'.)

Comment author: wedrifid 07 October 2010 08:04:40AM 1 point [-]

Hilary Putnam made (or gave a tantalising sketch of) an argument that even if we were living in a simulation, a person claiming "we are living in a simulation" would be incorrect.

Err... I'm not intimately acquainted with the sport myself... What's the approximate difficulty rating of that kind of verbal gymnastics stunt again? ;)

Comment author: AlephNeil 07 October 2010 08:43:26AM 1 point [-]

It's a tricky one - read the paper. I think what he's saying is that there's no way for a person in a simulation (assuming there is no intervention) to refer to the 'outside' world in which the simulation is taking place. Here's a crude analogy: Suppose you were a two-dimensional being living on a flat plane, embedded in an ambient 3D space. Then Putnam would want to say that you cannot possibly refer to "up" and "down". Even if you said "there is a sphere above me" and there was a sphere above you, you would be 'incorrect' (in the same paradoxical way).

Comment author: MugaSofer 17 September 2012 02:00:30PM 3 points [-]

But ... we can describe spaces with more than three dimensions.

Comment author: [deleted] 17 September 2012 06:50:15PM 0 points [-]

Upvoted mainly because of the last sentence (though upvoting it does coincide with what I'd have to do according to the rules of the game).

Comment author: [deleted] 06 October 2010 06:39:47AM 0 points [-]

For those brave souls who reason in terms of measure

I'm confused about the justification for reasoning in terms of measure. While the MUH (or at least its cousin the CUH) seems to be preferred from complexity considerations, I'm unsure of how to account for the fact that it is unknown whether the cosmological measure problem is solvable.

Also, what exactly do you consider making up "your measure"? Just isomorphic computations?

Comment author: Will_Newsome 06 October 2010 06:53:04AM 1 point [-]

Also, what exactly do you consider making up "your measure"? Just isomorphic computations?

Naively, probabilistically isomorphic computations, where the important parts of the isomorphism are whatever my utility function values... such that, on a scale from 0 to 1, computations like Luke Grecki might be .9 'me' based on qualia valued by my utility function, or 1.3 'me' if Luke Grecki qualia are more like the qualia my utility function would like to have if I knew more, thought faster, and was better at meditation.

Comment author: [deleted] 06 October 2010 07:08:54AM 0 points [-]

Ah, you just answered the easier part!

Comment author: Will_Newsome 06 October 2010 07:20:04AM 1 point [-]

Yeah... I ain't a mathematician! If 'measure' turns out not to be the correct mathematical concept, then I think that something like it, some kind of 'reality fluid' as Eliezer calls it, will take its place.

Comment author: Liron 03 October 2010 08:20:29PM *  0 points [-]

99.5% is just too certain. Even if you think piles of realities nested 100 deep are typical, you might only assign 99% to not being in the basement.

Comment author: Perplexed 03 October 2010 06:48:44PM 0 points [-]

a non-negligible fraction of my measure is in a simulation.

How is that different than "I believe that I am a simulation with non-negligible probability"?

I'm leaving you upvoted. I think the probability is negligible however you play with the ontology.

Comment author: Will_Newsome 03 October 2010 08:52:35PM 1 point [-]

How is that different than "I believe that I am a simulation with non-negligible probability"?

If the same computation is being run in so-called 'basement reality' and run on a simulator's computer, you're in both places; it's meaningless to talk about the probability of being in one or the other. But you can talk about the relative number of computations of you that are in 'basement reality' versus on simulators' computers.

This also breaks down when you start reasoning decision theoretically, but most LW people don't do that, so I'm not too worried about it.

In a dovetailed ensemble universe, it doesn't even really make sense to talk about any 'basement' reality, since the UTM computing the ensemble eventually computes itself, ad infinitum. So instead you start reasoning about 'basement' as computations that are the product of e.g. cosmological/natural selection-type optimization processes versus the product of agent-type optimization processes (like humans or AGIs).

The only reason you'd expect there to be humans in the first place is if they appeared in 'basement' level reality, and in a universal dovetailer computing via complexity, there's then a strong burden of proof on those who wish to postulate the extra complexity of all those non-basement agent-optimized Earths. Nonetheless I feel like I can bear the burden of proof quite well if I throw a few other disjunctions in. (As stated, it's meaningless decision theoretically, but meaningful if we're just talking about the structure of the ensemble from a naive human perspective.)
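The 'dovetailed ensemble' referred to above relies on dovetailing: interleaving the execution of an infinite enumeration of programs so that each one eventually gets arbitrarily many steps. A minimal sketch, with Python generators standing in for machines (the `counter` programs and round scheme here are illustrative, not anything from the thread):

```python
from itertools import count

def counter(k):
    # Toy "program" number k: yields k, 2k, 3k, ...
    for n in count(1):
        yield k * n

def dovetail(make_program, rounds):
    """Round r starts program r, then runs one step of programs 0..r.

    In the limit (rounds -> infinity) every program in the enumeration
    is run for unboundedly many steps, which is the property a
    universal dovetailer needs.
    """
    running = []   # programs started so far
    trace = []     # (program index, output of that step)
    for r in range(rounds):
        running.append(make_program(r))
        for i, prog in enumerate(running):
            trace.append((i, next(prog)))
    return trace

# After 3 rounds, program 0 has run 3 steps, program 1 two, program 2 one.
print(dovetail(counter, 3))
```

The point of the round-robin schedule is that no single non-halting program can starve the others, so the whole enumeration is computed 'simultaneously' by one machine.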

Comment author: Perplexed 03 October 2010 11:11:08PM 1 point [-]

If the same computation is being run in so-called 'basement reality' and run on a simulator's computer, you're in both places; it's meaningless to talk about the probability of being in one or the other.

Why meaningless? It seems I can talk about one copy of me being here, now, and one copy of myself being off in the future in a simulation. Perhaps I do not know which one I am, but I don't think I am saying something meaningless to assert that I (this copy of me that you hear speaking) am the one in basement reality, and hence that no one in any reality knows in advance that I am about to close this sentence with a hash mark#

I'm not asking you to bear the burden of proving that non-basement versions are numerous. I'm asking you to justify your claim that when I use the word "I" in this universe, it is meaningless to say that I'm not talking about the fellow saying "I" in a simulation and that he is not talking (in part) about me. Surely "I" can be interpreted to mean the local instance.

Comment author: LucasSloan 03 October 2010 11:35:10PM 0 points [-]

Both copies will do exactly the same thing, right down to their thoughts, right? So to them, what does it matter which one they are? It isn't just that given that they have no way to test, this means they'll never know, it's more fundamental than that. It's kinda like how if there's an invisible, immaterial dragon in your garage, there might as well not be a dragon there at all, right? If there's no way, even in principle, to tell the difference between the two states, there might as well not be any difference at all.

Comment author: Perplexed 03 October 2010 11:53:47PM 0 points [-]

I must be missing a subtlety here. I began by asking "Is saying X different from saying Y?" I seem to be getting the answer "Yes, they are different. X is meaningless because it can't be distinguished from Y."

Comment author: LucasSloan 03 October 2010 11:59:51PM 3 points [-]

Ah, I think I see your problem. You insist on seeing the universe from the perspective of the computer running the program - and in this case, we can say "yes, in memory position #31415926 there's a human in basement reality and in memory position #2718281828 there's an identical human in a deeper simulation". However, those humans can't tell that. They have no way of determining which is true of them, even if they know that there is a computer that could point to them in its memory, because they are identical. You are every (sufficiently) identical copy of yourself.

Comment author: Perplexed 04 October 2010 12:27:23AM 0 points [-]

No, you don't see the problem. The problem is that Will_Newsome began by stating:

We are living in a simulation... Almost certain. >99.5%.

Which is fine. But now I am being told that my counter claim "I am not living in a simulation" is meaningless. Meaningless because I can't prove my statement empirically.

What we seem to have here is very similar to Gödel's version of St. Anselm's "ontological" proof of the existence of a simulation (i.e. God).

Comment author: LucasSloan 04 October 2010 12:37:03AM *  -1 points [-]

Oh. Did you see my comment asking him to tell whether he meant "some of our measure is in a simulation" or "this particular me is in a simulation"? The first question is asking whether or not we believe that the computer exists (i.e., if we were looking at the computer-that-runs-reality, could we notice that some copies of us are in simulations or not) and the second is the one I have been arguing is meaningless (kinda).

Comment author: Will_Newsome 04 October 2010 12:18:47AM 0 points [-]

Right; I thought the intuitive gap here was only about ensemble universes, but it also seems that there's an intuitive gap that needs to be filled with UDT-like reasoning, where all of your decisions are also decisions for agents sufficiently like you in the relevant sense (which differs for every decision).

Comment author: [deleted] 06 October 2010 11:25:25PM *  0 points [-]

In a dovetailed ensemble universe, it doesn't even really make sense to talk about any 'basement' reality, since the UTM computing the ensemble eventually computes itself, ad infinitum.

I don't get this. Consider the following ordering of programs: T' < T iff T can simulate T'. More precisely:

T' < T iff for each x' there exists an x such that T'(x') = T(x)

It's not immediately clear to me that this ordering shouldn't have any least elements. If it did, such elements could be thought of as basements. I don't have any idea about whether or not we could be part of such a basement computation.
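For finite toy "programs" the ordering in the quoted definition is easy to compute: if a program is just a finite map from inputs to outputs, then T' < T reduces to range containment, and a least element is one that every program in the collection can simulate. A minimal sketch under that (much weakened) finite assumption; the program names are illustrative only:

```python
# Toy model of the ordering above: a "program" is a finite dict from
# inputs to outputs, and T' < T iff for each x' there is an x with
# T'(x') == T(x), i.e. range(T') is contained in range(T).

def simulates(T, Tp):
    """True iff Tp < T under the range-containment reading."""
    return set(Tp.values()) <= set(T.values())

def least_elements(programs):
    """Programs that every program in the collection can simulate."""
    return [name for name, T in programs.items()
            if all(simulates(other, T) for other in programs.values())]

programs = {
    "basement": {0: "a"},                      # outputs: {a}
    "mid":      {0: "a", 1: "b"},              # outputs: {a, b}
    "top":      {0: "a", 1: "b", 2: "c"},      # outputs: {a, b, c}
}

print(least_elements(programs))  # ['basement']
```

In this toy version least elements exist whenever some program's output set sits inside all the others'; whether anything analogous holds for the genuinely infinite case is exactly the open question in the comment.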

I still think your distinction between products of cosmological-type optimization processes and agent-type optimization processes is important though.

Comment author: timtyler 03 October 2010 06:58:46PM *  0 points [-]

So: you think there's a god who created the universe?!?

Care to lay out the evidence? Or is this not the place for that?

Comment author: Will_Newsome 03 October 2010 09:14:02PM 2 points [-]

Care to lay out the evidence? Or is this not the place for that?

I really couldn't; it's such a large burden of proof to justify 99.5% certainty that I would have to be extremely careful in laying out all of my disjunctions and explaining all of my intuitions and listing every smart rationalist who agreed with me, and that's just not something I can do in a blog comment.