
Will_Newsome comments on Best shot at immortality? - Less Wrong Discussion

Post author: tomme · 22 March 2012 10:29AM · 4 points




Comment author: Will_Newsome · 22 March 2012 03:39:51PM · 1 point

> how likely would you consider it to be conditional on us not being simulated/overseen?

So it's possible that spacetime is infinitely dense, and if you're a superintelligence there's no reason to expand. Dunno how likely that is, though black holes do creep me out. Abiogenesis really doesn't seem all that impossible, and anyway I think anthropic explanations are fundamentally confused. If your AI never expands then it can't get precise info about its past, but maybe there are non-physical computational ways to do that, in which case the costs of expanding might not be worth the benefits. It seems like I might've been wrong in that LessWrong folk might prefer anthropic solutions to Fermi, but I'm not sure how much evidence that is, especially as anthropics is confusing and possibly confused. So yeah... maybe 25% or so, but that's only factoring in some structural uncertainty. Meh.

'Course, my primary hypothesis is that we are being overseen, and brains sometimes have trouble reasoning about hypothetical scenarios which aren't already the default expectation. It's at times like this that advanced rationality skills would be helpful.
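
To make the arithmetic behind those two estimates explicit, here's a toy sketch combining them via the law of total probability. The prior on being overseen is purely illustrative, a number made up for the example, not a committed estimate:

```python
# Illustrative only: combining the ~25% conditional estimate above with a
# hypothetical prior on being overseen, via the law of total probability.
p_overseen = 0.6                  # hypothetical prior, not a committed estimate
p_no_expansion_given_free = 0.25  # the ~25% figure, conditional on NOT being overseen

# Simplification: the no-expansion story only matters in the not-overseen
# branch; in the overseen branch, the Fermi question is answered by the
# simulation/oversight itself.
p_no_expansion = (1 - p_overseen) * p_no_expansion_given_free
print(f"Unconditional credence in the no-expansion story: {p_no_expansion:.2f}")  # 0.10
```

The point is just the structure: the 25% only bites in the not-overseen branch, so whatever prior you put on being overseen, the unconditional number comes out strictly smaller.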