Armok_GoB comments on This post is for sacrificing my credibility! - Less Wrong
You're looking at it all wrong: "you" are not "in" any simulation or universe. There exist instantiations of the algorithm that is you, including the fact that it remembers winning the lottery, in various universes and simulations and Boltzmann brains and other things, with certainty (for our purposes), and what you need to do depends on what you want ALL the instances to do. It doesn't matter how many simulations of you are run, or what measure they have, or anything else like that, if your decisions within them don't matter for the multiverse at large.
None of the evolved concepts and heuristics, which you have been wired to assume so deeply that alternatives may be literally unthinkable, are applicable in this kind of situation. These concepts include the self, anticipation, and reality. Anthropics is a heuristic as well, and a rather crappy one at that.
So ask yourself: what is your objective, non-local utility function over the entirety of the Tegmark Level IV multiverse, and for which action would that function be logically implied to be largest if all algorithms similar to yours output that action?
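Schematically, the decision rule being gestured at looks something like this (a rough sketch only; the symbols a and U are shorthand, not a worked-out formalization):

\[
a^{*} \;=\; \operatorname*{arg\,max}_{a}\; U\big(\text{Tegmark IV multiverse} \,\big|\, \text{all algorithms similar to yours output } a\big)
\]

That is, you rank outputs by their logical consequences for the multiverse as a whole, not by where "you" happen to be instantiated.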
Yes, I really despise non-decision-theoretic approaches to anthropics. I know how to write a beautiful post explaining where almost all anthropic theories go wrong -- the key point is a combination of double-counting evidence and only ever considering counterfactual experiences that logically couldn't be factual -- but it'd take a while, and it's easier to just point people at UDT. Might give me some philosophy cred, which is cred I'd be okay with.
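For concreteness, here is the flavor of case that charge usually gets leveled at (a toy gloss using the standard Sleeping Beauty setup; not necessarily the example the full post would use): SIA-style reasoning treats each awakening as a fresh piece of evidence, so

\[
P(\text{heads} \mid \text{awake}) \;=\; \frac{\tfrac{1}{2}\cdot 1}{\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 2} \;=\; \frac{1}{3},
\]

with the tails world weighted twice because it contains two awakenings of the same algorithm -- the same evidence counted once per instantiation.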
Actually, it goes wrong on a much deeper and earlier level than that, and also you don't grok UDT as well as you think you do, or you wouldn't have thought the lottery question worth even considering.
More precisely, though, I thought the subject was worth your consideration because I hadn't seen you in decision theory discussions. (Sorry, I don't mean to be or come across as defensive here. I'm a little surprised your model of me doesn't predict me asking those as trick questions. But only a little.)
Re deeper problems: there are metaphysical problems that are deeper and should be obvious, but the tack I wanted to take was purely epistemological, such that there's less wiggle room. Many people reject UDT because "values shouldn't affect anticipation", and I think I can neatly argue against anthropics without running up against that objection. Sidestepping it would be necessary to convince the philosophers, I think.
Compensating for duplicitous behavior in models tends to clog up simulations and can lead to processing halting.
I generally take all statements as reflecting exactly what someone means, if at all possible.
It's also great fun to short-circuit sarcasm in a similar way.
I'd be very interested in seeing such a post.
I should at least write a few paragraphs of summary, because I've referenced the idea like three times now without ever writing it down, and if it ends up being wrong I'm going to feel pretty dumb. I'll try to respond to your comment in the next few days with said paragraphs.