
drethelin comments on Discussion: Which futures are good enough? - Less Wrong Discussion

5 Post author: WrongBot 24 February 2013 12:06AM




Comment author: drethelin 24 February 2013 01:49:46AM 4 points

Also: this seems like a pretty great stopgap if it's more easily achievable than actual full-on friendly universe optimization, but it doesn't prevent the AI from working on that in the meantime and implementing it in the future. I would not be unhappy to wake up in a world where the AI tells me "I was simulating you, but now I'm powerful enough to actually create utopia — time for you to help!"

Comment author: FeepingCreature 24 February 2013 10:04:51PM 0 points

If the AI were not meaningfully committed to telling you the truth, how could you trust it when it said it was about to actually create utopia?

Comment author: drethelin 24 February 2013 10:12:10PM 1 point

Why would I care? I'm a simulation fatalist. At some point in the universe, every "meaningful" thing will have been either done or discovered, and all that will be left will functionally be having fun in simulations. If I trust the AI to simulate well enough to keep me happy, I trust it to tell me the appropriate amount of truth to make me happy.