AlexU comments on Real-Life Anthropic Weirdness - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
And not only Obama. The closer you are to the center of human history, the more likely you are to be on a holodeck. People simulating others should be more likely to simulate people in historically interesting times, and people simulating themselves for fun and blocking their memory should be more likely to simulate themselves as close to interesting events as possible.
And...if Singularity theory is true, the Singularity will be the most interesting and important event in all human history. Now, all of us are suspiciously close to the Singularity, with a suspiciously large ability to influence its course. Even I, a not-too-involved person who's just donated a few hundred dollars to SIAI and gets to sit here talking to the SIAI leadership each night, am probably within the top millionth of humans who have ever lived in terms of Singularity "proximity".
And Michael Vassar and Eliezer are so close to the theorized center of human history that they should assume they're holodecking with probability ~1.
After all, which is more likely from their perspective: that they're one of the dozen or so people most responsible for creating the Singularity and ensuring Friendly AI, or that they're some posthuman history buff who wanted to know what it was like to be the guy who led the Singularity Institute?
(The alternate explanation, of course, is that we're all on the completely wrong track, and that we're simply in the larger percentage of humans who think they're extremely important.)
Shouldn't the fact that they can probably imagine better versions of themselves reduce this probability? If you're in a holodeck, in addition to putting yourself at the center of the Singularity, why wouldn't you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
We are actually in a 'chip-punk' version of the past in which silicon-based computers became available all the way back in the late 20th century. The original Eliezer made Friendly AI with vacuum tubes.
Not really. Think of Nozick's experience machine. If you were to use the machine to simulate yourself in a situation extremely close to the center of the singularity, would you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
a) Would this not make the experience feel so 'unreal' that your simulated self would have trouble believing it's not a simulation, and therefore not enjoy the simulation at all? In constructing the simulation, you need to decide how many positive attributes you can give your simulated self before it realizes that its situation is so improbable that it must be a simulation. I'd use caution and not make my simulated self too 'lucky.'
b) More importantly, you may believe that a) doesn't apply, and that your simulated self would take the blue pill, and willingly choose to continue to live in the simulation. Even then, having great looks and great wealth would probably distract you from creating the singularity. All I'd care about is the singularity, and I'd design the simulation so that I have a comfortable, not too distracting life that would allow me to focus maximally on the singularity, and nothing else.
I agree these are possibilities. However, it seems to me that if you're going to use improbable good fortune in some areas as evidence for being in a holodeck, it only makes sense to use misfortune (or at least lack of optimization, or below-averageness) in other areas as evidence against it. It doesn't sit well with me to write off every shortcoming as an intentional contrivance to make the simulation feel more "real" to you, or to give you additional challenges. Of course, we're only talking about a priori probability here; if, say, Eliezer directly catalyzed the Singularity and found himself historically renowned, the odds would have to go way up.