drnickbone comments on SIA, conditional probability and Jaan Tallinn's simulation tree - Less Wrong

10 Post author: Stuart_Armstrong 12 November 2012 05:24PM


Comment author: drnickbone 14 November 2012 03:08:16PM

Jaan Tallinn's talk is amazing... The link to that alone is worth the upvote.

I'm struck by Jaan's proposal that superintelligences will try to simulate each other's origins in the most efficient way possible, which Jaan frames as serving "communication purposes", though he may be thinking of acausal trade. I had similar thoughts a few months ago in comments here and here.

One fly in the ointment, though, is that we seem to be too far from the singularity for the argument to work: if the simulation approach is maximally efficient and the number of simulants grows exponentially up to the singularity, then a randomly selected simulant should find itself a fraction of a second away from it, not years or decades away. Another problem is how much computational resource the superintelligences should spend simulating pre-singularity intelligences, as opposed to the post-singularity intelligences they are presumably more interested in. If they spend only a small fraction of their resources on pre-singularitarians, we still face a puzzle about "why now".
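The "why now" tension here can be made concrete with a toy calculation (my own sketch, not from the comment): if the cumulative number of simulants grows as 2^(t / doubling_time), then the fraction of all simulants who exist more than a gap `delta` before the singularity is 2^(-delta / doubling_time). The specific doubling time and gap below are purely illustrative assumptions.

```python
def fraction_before(delta, doubling_time):
    """Fraction of all simulant-moments occurring more than `delta`
    time units before the singularity, assuming the cumulative number
    of simulants grows like 2**(t / doubling_time)."""
    return 2.0 ** (-delta / doubling_time)

# Hypothetical numbers: simulant count doubles every 1.5 years, and we
# suppose the singularity is 30 years away. The fraction of simulants
# who find themselves this early is 2**-20, i.e. about one in a million.
print(fraction_before(30.0, 1.5))
```

On this toy model, a typical simulant is overwhelmingly likely to sit very close to the endpoint, which is exactly the mismatch with our apparent position that the comment points out.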

Comment author: timtyler 18 November 2012 02:20:54PM

Another problem is how much computational resource the superintelligences should spend simulating pre-singularity intelligences, as opposed to the post-singularity intelligences they are presumably more interested in.

Origins are important - at least if what you want to do is understand what other superintelligences you might run into.