
gwern comments on Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book - Less Wrong Discussion

Post author: Kaj_Sotala | 07 October 2010 06:55AM




Comment author: gwern | 08 January 2011 05:53:05PM | 0 points

Can you come up with problem scenarios that don't involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?

Existential risks come to mind: even if you ignore the issue of astronomical waste, they set a lower bound on how stupid lifeforms like us can afford to be.

(If we were some sort of interstellar gas cloud that could only be killed by a nearby supernova, a collapse of the vacuum, or some other really rare phenomenon, then maybe it wouldn't be so bad to take billions of years to develop in the absence of other optimizers.)