Will_Newsome comments on Open Thread: July 2010, Part 2 - Less Wrong

Post author: Alicorn 09 July 2010 06:54AM


Comment author: Will_Newsome 15 July 2010 11:32:00PM *  -2 points [-]

I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth.
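The quoted argument is a straightforward odds comparison, and it can be sketched numerically (the 10^-3 prior for "a quoted figure is miscalculated" is a made-up illustrative number, not anything Eliezer states):

```python
# Hedged sketch of the quoted argument, with made-up numbers:
# compare "the reported event really happened" against
# "someone's arithmetic was wrong" as explanations of the report.
p_event = 1 / 1.8e25   # stated odds: 1 in 18 septillion
p_math_error = 1e-3    # illustrative prior that a reported figure is miscalculated

# Both hypotheses explain the existence of the report about equally
# well, so the posterior odds are roughly the ratio of the priors:
odds_error_vs_event = p_math_error / p_event
print(f"{odds_error_vs_event:.3g}")
```

Even granting the "event" hypothesis every benefit of the doubt, the prior ratio is so lopsided that no realistic likelihood ratio from the report itself could rescue it.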

Hm. Have you looked at the multiverse lately? It's pretty apparent that something has gone horribly weird somewhere along the way. Your confidence should be limited by that dissonance.

It's the same with MWI, and cryonics, and moral cognitivism, and any other belief where your structural uncertainty hasn't been explicitly conditioned on your anthropic surprise. I'm not sure to what extent your implied confidence in these matters is pedagogical rather than indicative of your true beliefs. I expect mostly pedagogical? That's probably fine and good, but I doubt such subtle epistemic manipulation for the public good is much better than the Dark Arts.

(Added: In this particular case, something less metaphysical is probably amiss, like a math error.)

Comment author: Will_Newsome 16 July 2010 02:48:44AM *  3 points [-]

So let me try to rewrite that (and don't be afraid to call this word salad):

(Note: the following comment is based on premises which are very probably completely unsound and unusually prone to bias. Read with caution and remember the distinction between impressions and beliefs. These are my impressions.)

You're Eliezer Yudkowsky. You live in a not-too-far-from-a-Singularity world, and a Singularity is a BIG event, decision-theoretically and fun-theoretically speaking. Isn't it odd that you find yourself at this time and place, given all the people you could have found yourself as in your reference class? Isn't that unsettling? Now, if you look out at the stars and galaxies and seemingly infinite space (though you can't see that far), it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive. And yet you find yourself as Eliezer Yudkowsky (staring at a personal computer, no less) in a close-to-Singularity world: surely some extra parameters must have been thrown into the description of this universe; surely your experience is best described not by a universal prior alone but by a universal prior plus some mixture of agents computing things according to their preferences. In other words, this universe looks conspicuously like it has been optimized around Eliezer-does-something-multiversally-important. (I suppose this should also raise your probability that you're a delusional narcissist, but there's not much to do about that.)

Now, if such optimization pressures exist, then one has to question some reductionist assumptions: if this universe gets at least some of its measure from the preferences of simulator-agents, then what features of the universe would be affected by those preferences? Computational cost is one. MWI implies a really big universe, and what are the chances that you would find yourself where you are in a really big universe as well as finding yourself in a conspicuously-optimized-seeming universe? Seemingly the two hypotheses are at odds. And what about cryonics? Do you really expect to die in a universe that seems to be optimized for having you around doing interesting things? (The answer to that could very well be yes, especially if your name is Light.) And when you have simulators in the picture, with explicit values, perhaps they have encoded rightness and wrongness into the fabric of reality via selectively pruning multiverse branches or something. Heaven knows what the gods do for fun.

These are of course ridiculous ideas, but ridiculous ideas that I am nonetheless hesitant to assign negligible probability to.

Maybe you're a lot less surprised to find yourself in this universe than I am, in which case none of my arguments apply. But I get the feeling that something awfully odd is going on, and this makes me hesitant to be confident about some seemingly basic reductionist conclusions. Thus I advise you to buy a lottery ticket. It's the rational thing to do.

(Note: Although I personalized this for Eliezer, it applies to pretty much everyone to a greater or lesser degree. I remember (perhaps a secondhand and false memory, though, so don't take it too seriously) that at some point Michael Vassar was really confused about why he didn't find himself as Eliezer Yudkowsky. I think the answer I would have thought up if I were him is that Michael Vassar is more decision-theoretically multiversally important than Eliezer. Any other answer makes the question appear silly. Which it might be.)

(Alert to potential bias: I kinda like to be the contrarian-contrarian. Cryonics is dumb, MWI is wrong, buying a lottery ticket is a good idea, moral realism is a decent hypothesis, anthropic reasoning is more important than reductionist reasoning, CEV-like things won't ever work and are ridiculously easy to hack, TDT is unlikely to lead to any sort of game theoretic advantage and precommitments not to negotiate with blackmailers are fundamentally doomed, winning timeless war is more important than facilitating timeless trade, the Singularity is really near, religion is currently instrumentally rational for almost everyone, most altruists are actually egoists with relatively loose boundaries around identity, et cetera, et cetera.)

Comment author: Kevin 16 July 2010 03:27:53AM 1 point [-]

It all adds up to normality, damn it!

Comment author: Will_Newsome 16 July 2010 03:29:55AM *  1 point [-]

What whats to what?

More seriously, that aphorism begs the question. Yes, your hypothesis and your evidence have to be in perfectly balanced alignment. That is, from a Bayesian perspective, tautological. However, it doesn't help you figure out how it is exactly that the adding gets done. It doesn't help distinguish between hypotheses. For that we need Solomonoff's lightsaber. I don't see how saying "it (whatever 'it' is) adds up to (whatever 'adding up to' means) normality (which I think should be 'reality')" is at all helpful. Reality is reality? Evidence shouldn't contradict itself? Cool story bro, but how does that help me?

Comment author: Kevin 16 July 2010 04:45:11AM 0 points [-]

Comment author: Douglas_Knight 16 July 2010 05:35:59AM 0 points [-]

it looks as if the universe has been assigned measure via a universal prior (and not a speed prior) as it is algorithmically about as simple as you can get while still having life and yet seemingly very computationally expensive.

This is rather tangential to your point, but the universe looks very computationally cheap to me. In terms of the whole ensemble, quantum mechanics is quite cheap. It only looks expensive to us because we measure by a classical slice, which is much smaller. But even if we call it exponential, that is very quick by the standards of the Solomonoff prior.
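For readers following this exchange: neither commenter writes out the priors at issue, but they have standard forms. Below is a sketch; the "speed prior" line is the simplified Levin-style (Kt complexity) version, which is closely related to but not identical with Schmidhuber's exact definition.

```latex
% Universal (Solomonoff) prior: a string x gets the total weight
% of all programs p for a universal prefix machine U that output x
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}

% Speed-prior-style measure (simplified, via Levin's Kt): programs
% are additionally charged for their running time t(p), so an
% algorithmically simple but computationally expensive universe
% loses measure
S(x) \;\approx\; 2^{-Kt(x)}, \qquad
Kt(x) \;=\; \min_{p \,:\, U(p) = x} \bigl(\ell(p) + \log t(p)\bigr)
```

On one reading, Douglas_Knight's point is that the time penalty enters only as \(\log t(p)\): even an exponentially expensive universe pays a penalty merely linear in the exponent, which is indeed small "by the standards of computable functions."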

Comment author: Will_Newsome 16 July 2010 05:26:24PM *  0 points [-]

Hm, I'm not sure I follow: both a classical and quantum universe are cheap, yes, but if you're using a speed prior or any prior that takes into account computational expense, then it's the cost of the universes relative to each other that helps us distinguish between which universe we expect to find ourselves in, not their cost relative to all possible universes.

I could very, very well just be confused.

Added: Ah, sorry, I think I missed your point. You're saying that even infinitely large universes seem computationally cheap in the scheme of things? I mean, compared to all possible programs in which you would expect life to evolve, the universe looks hugeeeeeee to me. It looks infinite, and there are tons of finite computations... when you compare anything to the multiverse of all things, that computation looks cheap. I guess we're just using different scales of comparison: I'm comparing to finite computations, you're comparing to a multiverse.

Comment author: Douglas_Knight 17 July 2010 12:55:50AM 0 points [-]

No, that's not what I meant; I probably meant something silly in the details, but I think the main point still applies. I think you're saying that the size of the universe is large compared to the laws of physics. To which I still reply: not large by the standards of computable functions.

Comment author: Eliezer_Yudkowsky 16 July 2010 01:46:05AM 0 points [-]

Whowha?

Comment author: Will_Newsome 16 July 2010 02:03:38AM *  1 point [-]

Er, sorry, I'm guessing my comment came across as word salad?

Added: Rephrased, expanded, and polemicized my original comment in a reply to it.

Comment author: Kevin 16 July 2010 02:06:19AM 0 points [-]

Yeah I didn't get it either.

Comment author: Will_Newsome 16 July 2010 02:08:23AM 2 points [-]

Hm. It's unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they're fit for general consumption. I'll try to rewrite that whole comment when I'm less tired.

Comment author: Vladimir_Nesov 16 July 2010 08:47:18AM 2 points [-]

It's unfortunate that I need to pass all of my ideas through a Nick Tarleton or a Steve Rayhawk before they're fit for general consumption.

Illusion of transparency: they can probably generate sense in response to anything, but it's not necessarily a faithful translation of what you say.

Comment author: Will_Newsome 16 July 2010 05:19:17PM 2 points [-]

Consider that one of my two posts, Abnormal Cryonics, was simply a narrower version of what I wrote above (structural uncertainty is highly underestimated) and that Nick Tarleton wrote about a third of that post. He understood what I meant and was able to convey it better than I could. Also, Nick Tarleton is quick to call bullshit if something I'm saying doesn't seem to be meaningful, which is a wonderful trait.

Comment author: Vladimir_Nesov 16 July 2010 09:47:43PM 2 points [-]

Well, that was me calling bullshit.

Comment author: Will_Newsome 17 July 2010 04:14:26PM 0 points [-]

Thanks! But it seems you're being needlessly abrasive about it. Perhaps it's a cultural thing? Anyway, did you read the expanded version of my comment? I tried to be clearer in my explanation there, but it's hard to convey philosophical intuitions.

Comment author: Vladimir_Nesov 17 July 2010 05:59:35PM *  0 points [-]

I find myself unable to clearly articulate what's wrong with your idea, but in my own words, it reads as follows:

"One should believe certain things to be probable because those are the kinds of things that people believe through magical thinking."

Comment author: Kevin 16 July 2010 02:18:27AM 2 points [-]

Was your point that Eliezer's Everett Branch is weird enough already that it shouldn't be that surprising if universally improbable things have occurred?

Comment author: Will_Newsome 16 July 2010 02:57:25AM 1 point [-]

Erm, uh, kinda, in a more general sense. See my reply to my own comment where I try to be more expository.

Comment author: Vladimir_Nesov 16 July 2010 08:41:58AM 0 points [-]

Er, sorry, I'm guessing my comment came across as word salad?

I'm afraid it is word salad.