Eitan_Zohar comments on A resolution to the Doomsday Argument. - Less Wrong

-2 Post author: Eitan_Zohar 24 May 2015 05:58PM


Comment author: Eitan_Zohar 24 May 2015 07:55:43PM  0 points

The question isn't how many simulated observers exist in total (although that's also unknown), but how many of them are like you in some relevant sense, i.e. what to consider "typical".

I also find it hard to believe that humans of any sort would hold special interest to a superintelligence. Do I really have the burden of proof there?

But in any case, I don't think your original idea works. Running a simulation of your ancestors causes your simulated ancestors to be wrong about the DA, but it doesn't cause you to be wrong about it.

The whole point is that the simulators want to find themselves in a simulation, and would only discover the truth after disaster has been avoided. It's a way of ensuring that superintelligence does not fulfill the DA.

Comment author: DanArmak 25 May 2015 12:53:27PM  1 point

I also find it hard to believe that humans of any sort would hold special interest to a superintelligence. Do I really have the burden of proof there?

It's plausible, to me, that a superintelligence built by humans and intended by them to care about humans would in fact care about humans, even if it didn't have the precise goals they intended it to have.

Comment author: Eitan_Zohar 25 May 2015 01:14:47PM  0 points

This is overly complex. Now we're assuming the AI goes wrong? These people want to be in a simulation; they need a Schelling point with other humanities. Why wouldn't they just give the AI clear instructions to simulate other Earths?