So with what probability should Barack Obama believe he is on a holodeck, and how should this belief influence his behavior?
And not only Obama. The closer you are to the center of human history, the more likely you are to be on a holodeck. People simulating others should be more likely to simulate people in historically interesting times, and people simulating themselves for fun and blocking their memory should be more likely to simulate themselves as close to interesting events as possible.
And...if Singularity theory is true, the Singularity will be the most interesting and important event in all human history. Now, all of us are suspiciously close to the Singularity, with a suspiciously large ability to influence its course. Even I, a not-too-involved person who's just donated a few hundred dollars to SIAI and gets to sit here talking to the SIAI leadership each night, am probably within the top millionth of humans who have ever lived in terms of Singularity "proximity".
And Michael Vassar and Eliezer are so close to the theorized center of human history that they should assume they're holodecking with probability ~1.
After all, which is more likely from their perspective - that they're one of the dozen or so people most responsible for creating the Singularity and ensuring Friendly AI, or that they're some posthuman history buff who wanted to know what being the guy who led the Singularity Institute was like?
(The alternative explanation, of course, is that we're all on the completely wrong track and simply belong to the much larger fraction of humans who think they're extremely important.)
Still, I think that in most expected-utility calculations, the weight of "holy crap this is improbable, how am I actually this important?" on the one side and of "well, if I am this dude, I'd really better not @#$% this up" on the other should more or less scale together. I don't think I'm stepping into Pascalian territory here.
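A toy model of that scaling claim, with entirely invented numbers: if influence per role-holder shrinks as the role gets less exclusive, the improbability of the role and the stakes attached to it cancel in the expected-utility product.

```python
# Toy model of "improbability and importance scale together".
# All numbers are invented for illustration.

HUMANS_EVER = 1e11  # rough order-of-magnitude count of humans ever born

for n_pivotal in (1e7, 1e3, 1e1):       # how exclusive the pivotal role is
    p_role = n_pivotal / HUMANS_EVER    # prior that you hold such a role
    stakes = HUMANS_EVER / n_pivotal    # toy assumption: influence per
                                        # role-holder scales inversely with
                                        # how many people share the role
    print(f"roles={n_pivotal:.0e}  p(role)={p_role:.0e}  "
          f"stakes={stakes:.0e}  EU term={p_role * stakes:.1f}")

# The product p_role * stakes stays ~1.0 across the whole range,
# which is the sense in which this isn't a Pascal's mugging.
```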
And Michael Vassar and Eliezer are so close to the theorized center of human history that they should assume they're holodecking with probability ~1.
The "with probability ~1" part is wrong, AFAICT. I'm confused about how to think about anthropics, and everybody I've talked to is also confused as far as I've noticed. Given this confusion, we can perhaps obtain simulation-probabilities by estimating the odds that our best-guess means of calculating anthropic probabilities is reliable, and then obtaining an estimate that we’re in a holodeck conditional on our anthropic calculation methods being correct. But it would be foolish to assign more than, say, a 90% estimate to “our best-guess means of calculating anthropic probabilities is basically correct”, unless someone has a better analysis of such methods than I’d expect.
We are actually in a 'chip-punk' version of the past, in which silicon-based computers became available all the way back in the late 20th century. The original Eliezer made Friendly AI with vacuum tubes.
I don't think it should influence his behavior very much. Even if he assigns a high probability to being in a holodeck, his expected-utility calculations should, I think, be dominated by the case in which he is in fact POTUS, since a real president is in a far better position to purchase utility.
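A minimal sketch of that dominance argument, with invented utilities: even granting a 90% holodeck probability, nearly all the expected utility sits in the real-president branch.

```python
p_holodeck = 0.9          # granting a high simulation probability
u_real = 1_000_000.0      # utility a real president's choices can move
u_holo = 1.0              # utility at stake if it's all a holodeck

eu_real_branch = (1 - p_holodeck) * u_real   # 100,000
eu_holo_branch = p_holodeck * u_holo         # 0.9
print(eu_real_branch / (eu_real_branch + eu_holo_branch))  # ~0.99999
# Decisions should therefore be optimized almost entirely for the
# "actually president" case, at any holodeck probability short of ~1.
```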
So if you find you ARE that friend, presumably you'd have no fear of stepping in front of that gun barrel yourself for a few million flips right afterwards. I mean, it's pretty convincing proof. Then you get to see the confusion on each other's faces!
Though you're both more likely to end up mopping your friend's blood off the floor.
On the whole, I think a good friend probably doesn't let a friend test the Quantum Theory of Immortality.
Even if QTI is true, a good friend doesn't test it, for fear of leaving behind (many copies of) a bereaved friend.
Although it was not via the lottery, my wife's sister won one million dollars on a TV show in the 1980s called "The one million dollar chance of a lifetime". It turned out that she and her husband would get $40,000 a year for 25 years, but they got divorced a few years later, so she received $20,000 a year until recently. It was quite a contrast between the show's promise to "make you a millionaire" and the actual very modest improvement in lifestyle from an extra $20,000 a year.
Anyway, none of you know her so this doesn't disprove the ...
I get the feeling that there must be an "anthropic weirdness" literature out there that I don't know about. I don't know how else to explain why no one else is reacting to these paradoxes in the way that seems to me to be obvious. But perhaps my reaction would be quickly dismissed as naïve by those who have thought more about this.
The "obvious" reaction seems to me to be this:
The winner of the lottery, or Barack Obama for that matter, has no more evidence that he or she is in a holodeck than anyone else has.
Take the lottery winner. W...
The idea of a holodeck is that it's a simulated reality centred around you. In fact, many, most, or all of the simulated people in the holodeck may not be conscious observers at all.
So, either I am one of 6 billion conscious people on Earth, or I am the centre of some relatively tiny simulation. Winning the lottery seems like evidence for the latter, because if I am in a holodeck, interesting things are more likely to happen to me.
As you say, when someone wins the lottery, all 6 billion people on Earth get the same information. But that's assuming they're real in the first place, and so seems to beg the question.
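Written as a Bayesian update with assumed numbers (the holodeck likelihood in particular is pure invention), the argument looks like this:

```python
p_win_real = 1e-8    # roughly, odds of one jackpot win for one entrant
p_win_holo = 1e-2    # assumption: holodecks script dramatic events often
prior = 1e-6         # assumed prior that you're in a holodeck

posterior = (prior * p_win_holo) / (prior * p_win_holo
                                    + (1 - prior) * p_win_real)
print(posterior)  # ~0.50: a million-to-one prior jumps to a coin flip
```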
Ha ha ha. Classic.
This is one of those stories you can show to would-be rationalists that will make them both laugh and think about probability. Well done.
If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)
Most conscious observers? I would think a universe/multiverse containing holodecks would still contain many people not in them. At best, you can conclude that most observers who don't see a world containing holodecks are in hol...
Just for fun: Not only are we living in someone's holodeck fantasy, it's a badly written holodeck fantasy!
(Taken from http://davidbrin.blogspot.com/2005/10/holodeck-scenario-part-i.html)
Let me weigh in on what I consider to be the worst possible catastrophe of them all. One that would explain every stupidity in the world today. That we are living in a very poor simulation.
...
...All right, the notion is gaining some degree of plausibility. But suppose it's true. In that case, whose simulation are we living in? Some vast future Omega Point consciousness?
After a lot of improbable things happen, the main thing you have evidence for is that the universe is large enough to have improbable things happen. This could happen in MWI, or it could just happen in an ordinary very large universe. Or it could happen in a simulation that focuses on special events, so if your event is special, that hypothesis also gets more support relative to a small universe.
But I don't at all see how such events give you evidence about what sort of large universe you live in. And I don't see how winning the lottery is remotely unlikely enough to kick in such considerations.
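To make that point concrete with illustrative numbers: the expected number of witnesses to a per-observer-improbable event grows linearly with the number of observers, regardless of which kind of large universe supplies them.

```python
p_event = 1e-8          # per-observer chance of the "miraculous" event
for n_observers in (1e6, 1e9, 1e12, 1e20):
    expected_witnesses = n_observers * p_event
    print(f"N={n_observers:.0e}: expected witnesses = {expected_witnesses:.0e}")
# At N=1e20 the event is witnessed ~1e12 times. Observing one such event
# tells you "the universe is big", but very little about *which* kind of
# big universe (MWI, spatially vast, or simulation) you live in.
```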
As I alluded to in a previous discussion, this sort of thing veers quickly into the territory of the small-world phenomenon in human social networks.
With something likely to be remarked on in idle chatter with casual acquaintances, such as winning a lottery, you end up with an unexpectedly large likelihood of becoming aware of a small number of links from yourself to someone who had (Unusual Event X) occur to them.
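A back-of-the-envelope version of that small-world effect, with assumed figures for network size and base rate:

```python
acquaintances = 150        # Dunbar-scale circle, assumed
reach_two_links = acquaintances ** 2   # ~22,500 friends-of-friends
p_winner = 1e-4            # assumed: lifetime fraction of people to whom
                           # some remarked-upon windfall happens

p_know_of_winner = 1 - (1 - p_winner) ** reach_two_links
print(p_know_of_winner)    # ~0.89: hearing of a winner is nearly expected
```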
For those unfamiliar with QTI, it's a simple simultaneous test of many-worlds plus a particular interpretation of anthropic observer-selection effects: You put a gun to your head and wire up the trigger to a quantum coinflipper. After flipping a million coins, if the gun still hasn't gone off, you can be pretty sure of the simultaneous truth of MWI+QTI.
Wouldn't any of several multiverse theories predict the survival outcome, so that you couldn't conclude that the quantum MWI specifically is correct? That is, a world which is single, yet contains you infinitely ...
From a statistical standpoint, lottery winners don't exist - you would never encounter one in your lifetime, if it weren't for the selective reporting.
When you said that, it seemed to me that you were saying that you shouldn't play the lottery even if the expected payoff - or even the expected utility - were positive, because the payoff would happen so rarely.
Does that mean you have a formulation for rational behavior that maximizes something other than expected utility? Some nonlinear way of summing the utility from all possible worlds?
If someone sugg...
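One way to make the question above concrete: compare plain expected-utility maximization with an exponential (CARA) aggregator, a standard nonlinear way of summing over outcomes. (Formally CARA is still expected utility over a transformed utility scale, and every number here is invented, but it shows how a nonlinear sum can reverse the verdict on a rare-jackpot gamble.)

```python
import math

# One gamble: a lottery ticket, in utility terms (numbers invented).
outcomes = [(0.999999, -1.0),        # lose: the ticket's cost
            (0.000001, 2_000_000)]   # win: the jackpot

expected_utility = sum(p * u for p, u in outcomes)

def certainty_equivalent(outcomes, risk=1e-5):
    """Exponential (CARA) aggregator: heavily discounts rare windfalls."""
    eu = sum(p * -math.exp(-risk * u) for p, u in outcomes)
    return -math.log(-eu) / risk

print(expected_utility)                  # +1.0: linear EU says "buy"
print(certainty_equivalent(outcomes))    # ~ -0.9: the rare win barely counts
```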
But what is your watching friend supposed to think? Though his predicament is perfectly predictable to you - that is, you expected before starting the experiment to see his confusion - from his perspective it is just a pure 100% unexplained miracle.
OK, I don't get this at all, but I totally understand the lottery example. I think Tyrrell McAllister raised this question, but only his other question was ever addressed. Are the two cases really the same? If so, how?
It's true that, as the person next to the gun, you should expect to live with the same prob...
I get the feeling that I missed a lot of prediscussion to this topic. I am new here and new to these types of discussions, so if I am way off target please nudge me in the right direction. :)
The odds of winning a lottery may be almost zero, but they are not zero. As such, the chance that a winner exists increases over time with each lottery ticket purchased. (The assumption here is that "winner" simply means "holding the right ticket".)
Furthermore, it seems like the concept of the QTI is only useful if you already k...
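The first point can be made exact: with assumed per-ticket odds of 10^-8, the probability that at least one winner exists rises with tickets sold as 1 - (1 - p)^n.

```python
p_ticket = 1e-8                        # assumed per-ticket odds
for tickets_sold in (1e6, 1e8, 1e9):
    p_some_winner = 1 - (1 - p_ticket) ** tickets_sold
    print(f"{tickets_sold:.0e} tickets: P(a winner exists) = {p_some_winner:.3f}")
# 1e6 tickets: 0.010   1e8: 0.632   1e9: 1.000 (to 3 places)
# Any *given* holder almost never wins, yet winners reliably exist.
```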
I've pondered a toned-down version of this argument in the context of religious experience and other hallucinations. Also, this is an important consideration for Utilitarian-style Pascalian religion.
I don't think it's an exception to the Agreement Theorem. All you have to do to communicate the evidence is give your friend root access to your brain so he can verify you aren't lying. Of course Omega could have just rigged your brain so you think you survived a million QTI tests, but that possibility shouldn't worry your friend any more than it worries you.
Also, BTW, one of my Dad's ham radio buddies won the lottery recently.
In passing, I said:
And lo, CronoDAS said:

"From a statistical standpoint, lottery winners don't exist - you would never encounter one in your lifetime, if it weren't for the selective reporting."
To which I replied:
There's a certain resemblance here - though not an actual analogy - to the strange position your friend ends up in, after you test the Quantum Theory of Immortality.
For those unfamiliar with QTI, it's a simple simultaneous test of many-worlds plus a particular interpretation of anthropic observer-selection effects: You put a gun to your head and wire up the trigger to a quantum coinflipper. After flipping a million coins, if the gun still hasn't gone off, you can be pretty sure of the simultaneous truth of MWI+QTI.
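A quick Monte Carlo sketch of the single-world case, scaled down to 10 flips so that survivors show up at all (the parameters are illustrative):

```python
import random

def survives(n_flips=10):
    """One experimenter in a single, non-branching world."""
    return all(random.random() < 0.5 for _ in range(n_flips))

trials = 1_000_000
rate = sum(survives() for _ in range(trials)) / trials
print(rate)   # ~2**-10 ≈ 0.001; at a million flips it would be 2**-1_000_000
# In a single world, survivors essentially never exist. Under MWI a
# surviving branch always exists, and QTI adds the claim that you should
# expect to experience that branch - hence surviving the experiment is
# taken as evidence for MWI+QTI together.
```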
But what is your watching friend supposed to think? Though his predicament is perfectly predictable to you - that is, you expected before starting the experiment to see his confusion - from his perspective it is just a pure 100% unexplained miracle. What you have reason to believe and what he has reason to believe would now seem separated by an uncrossable gap, which no amount of explanation can bridge. This is the main plausible exception I know to Aumann's Agreement Theorem.
Pity those poor folk who actually win the lottery! If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)
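To see where that 10^-8 threshold comes from, solve for the prior at which the winner's posterior reaches even odds; the assumption below is that a holodeck near-certainly scripts such an event.

```python
p_win_real = 1e-8   # calibrated odds of the jackpot in the ordinary world
p_win_holo = 1.0    # assumption: holodeck observers see ridiculously
                    # improbable positive events by design

likelihood_ratio = p_win_holo / p_win_real     # 1e8
break_even_prior = 1 / (1 + likelihood_ratio)  # ~1e-8
print(break_even_prior)
# Any prior "well above" ~1e-8 pushes the winner's posterior past 50% -
# a reason they cannot communicate, since everyone else sees only the
# ordinary evidence that somebody, somewhere, won.
```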
It's a sad situation to be in - but don't worry: it will always happen to someone else, not you.