
Viliam_Bur comments on Open thread for December 9 - 16, 2013 - Less Wrong Discussion

5 Post author: NancyLebovitz 09 December 2013 04:35PM




Comment author: Viliam_Bur 11 December 2013 03:14:29PM 0 points [-]

I am not sure how obvious the part about multiple possible futures is. Most likely, the AI would not be able to model all of them; but without the AI, most of them wouldn't happen anyway.

It's like saying "if I don't roll a die, I lose the chance of rolling a 6", to which I add "and if you do roll the die, you still have a 5/6 probability of not rolling a 6". This is just to make it clear that by avoiding the "spontaneous" future of humankind, we are not avoiding one specific future magically prepared for us by destiny. We are avoiding the whole probability distribution, which contains many possible futures, both nice and ugly.
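The 5/6 figure is just the complement of a fair die's 1/6 chance per face; a quick simulation (a sketch added here for illustration, not part of the original comment) confirms it:

```python
import random

# Estimate the probability of NOT rolling a 6 on a fair six-sided die.
# The analytic answer is 1 - 1/6 = 5/6 ≈ 0.8333.
random.seed(0)  # fixed seed so the estimate is reproducible
trials = 100_000
not_six = sum(1 for _ in range(trials) if random.randint(1, 6) != 6)
print(not_six / trials)  # close to 5/6
```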

Just because the AI can only model the possible futures imperfectly, it does not follow that without the AI the future would be perfect, or even better on average than with the AI.

Comment author: NancyLebovitz 11 December 2013 03:58:18PM 0 points [-]

'Unmediated' may not have been quite the word to convey what I meant.

My impression is that CEV is permanently established very early in the AI's history, but I believe that what people are and want (including what we would want if we knew more, thought faster, were more the people we wished we were, and had grown up closer together) will change, both because people will be doing self-modification and because they will learn more.