
DanielFilan comments on Stupid Questions December 2014 - Less Wrong Discussion

Post author: Gondolinian | 08 December 2014 03:39PM | 16 points




Comment author: ike 17 December 2014 05:10:06AM 0 points

I downloaded the paper you linked to and will read it shortly. I'm totally sympathetic to the "didn't want to make a long comment longer" excuse, having felt that way many times myself.

I agree that in the single-world case I wouldn't want to do it. That's not because I care about the single world without me per se (as in caring about the people in that world), but because I care about myself, who would not exist with probability ~1. In a multiverse, I still exist with probability ~1. You can argue that I can't know for sure that I live in a multiverse, which is one of the reasons I'm still alive in your view (the main reason being that it's not practical for me right now, and I'm not confident enough to bother researching and setting something like that up). However, you also don't know that anything you do is safe, by which I mean things like driving, walking outside, etc. (I'd say those things are far more rational in a multiverse anyway, but even people who believe in a single world still do them.)

Another reason I don't have a problem with the discontinuity is that the whole problem only seems to arise when you have an infinite number of worlds, and I just don't find that argument convincing.

I don't think you need infinite knowledge to know whether x=0 or x>0, especially if you assign some probability to higher-level multiverses. You don't need to know for sure that x>0 (you can't know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.

If I wake up afterward, in the case I laid out, that would mean I won, since I specified I would be killed while asleep. I could even specify that the entire lotto pick, noise generation, and checking be done while I sleep, so I don't have to worry about it. That said, I don't think the question of my subjective expectation of no longer existing is well-defined, because I don't have a subjective experience if I no longer exist. If I am cloned, then told one of me is going to be vaporized without any further notice, and it happens fast enough that they don't feel anything, then my subjective expectation is 100% to survive. That's different from the torture case you mentioned above, where I expect to survive and have subjective experiences. I think we do have some more fundamental disagreement about anthropics, which I don't want to argue over until I've hashed out my viewpoint more. (Incidentally, it seems to me that Eliezer at least partly agrees with me, from what he writes in http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/:

"What would happen if the Dust won?" asked the hero. "Would the whole world be destroyed in a single breath?"

Aerhien's brow quirked ever so slightly. "No," she said serenely. Then, because the question was strange enough to demand a longer answer: "The Dust expands slowly, using territory before destroying it; it enslaves people to its service, before slaying them. The Dust is patient in its will to destruction."

The hero flinched, then bowed his head. "I suppose that was too much to hope for; there wasn't really any reason to hope, except hope... it's not required by the logic of the situation, alas..."

I interpreted that as saying that you can rely on the anthropic principle (and super quantum psychic powers) only if you die without pain.)

I'm actually planning to write a post about Big Worlds, anthropics, and some other topics, but I've got other things going on and keep putting it off. Eventually. Ideally, I'd like to finish some anthropics books and papers, including Bostrom's, first.

Comment author: DanielFilan 17 December 2014 07:52:55AM * 0 points

Another, more concise way of putting my trouble with discontinuity: I think that your utility function over universes should be a computable function, and the computable functions are continuous (in the sense of computable analysis).
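The claim that computable functions are continuous is a standard result in computable analysis. A minimal sketch, under the assumption that a utility function is computed over infinite binary encodings of universes:

```latex
Model a universe as an infinite bit-string
$\omega \in \{0,1\}^{\mathbb{N}}$ and a utility function
$U : \{0,1\}^{\mathbb{N}} \to \mathbb{R}$.
If $U$ is computable, then a machine approximating $U(\omega)$ to
precision $2^{-n}$ must halt after reading only a finite prefix
$\omega_1 \dots \omega_{k(n)}$ of its input. Hence every
$\omega'$ that agrees with $\omega$ on that prefix satisfies
\[
  |U(\omega) - U(\omega')| \le 2^{-n+1},
\]
which is exactly continuity of $U$ in the product (Cantor-space)
topology: nearby universes (ones agreeing on a long finite prefix)
must receive nearby utilities.
\]
```

On this picture, a utility function with a jump discontinuity would require inspecting infinitely many bits of the universe before committing to an output, which no machine can do.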

Also - what, you have better things to do with your time than read long academic papers about philosophy of physics right now because an internet stranger told you to?!