
Viliam_Bur comments on Open thread for December 9 - 16, 2013 - Less Wrong Discussion

5 Post author: NancyLebovitz 09 December 2013 04:35PM




Comment author: Viliam_Bur 11 December 2013 10:07:13AM *  1 point [-]

we would still be constantly pruned back to the CEV of 2045 humans

Two connotational objections: 1) I don't think that "constantly pruned back" is an appropriate metaphor for "getting everything you have ever desired". The only thing that would prevent us from doing X would be the fact that, after reflection, we love non-X. 2) The extrapolated 2045 humans would probably be as different from the real 2045 humans as the 2045 humans are from the MINUS 2045 humans.

I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us?

Sad? Why, unless we program it to be? Also, with superior recursively self-improving intelligence it could probably make a good estimate of what would have happened in an alternative reality where all AIs are magically destroyed. But such an estimate would most likely be a probability distribution over many different possibilities, not one specific goal.

Comment author: NancyLebovitz 11 December 2013 02:06:48PM 0 points [-]

I'm dubious about the extrapolation -- the universe is more complex than the AI, and the AI may not be able to model how our values would change as a result of unmediated choices and experience.

Comment author: Viliam_Bur 11 December 2013 03:14:29PM 0 points [-]

I am not sure how obvious the point is that there are multiple possible futures. Most likely, the AI would not be able to model all of them. However, without the AI, most of them wouldn't happen anyway.

It's like saying "if I don't roll a die, I lose the chance of rolling 6", to which I add "and if you do roll the die, you still have 5/6 probability of not rolling 6". Just to make it clear that by avoiding the "spontaneous" future of humankind, we are not avoiding one specific future magically prepared for us by destiny. We are avoiding the whole probability distribution, which contains many possible futures, both nice and ugly.
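The die analogy is easy to check with a quick simulation (a hypothetical illustration of the probability argument, not something from the original comment): choosing to roll commits you to the whole distribution of outcomes, of which the hoped-for 6 is only one sixth.

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible

# Simulate many die rolls: "rolling" selects from a distribution of
# outcomes, not the single outcome you were hoping for.
rolls = [random.randint(1, 6) for _ in range(60_000)]
p_six = rolls.count(6) / len(rolls)

print(p_six)      # close to 1/6
print(1 - p_six)  # close to 5/6 -- the chance of *not* rolling 6
```

The analogy to the "spontaneous" future of humankind is that skipping the roll forgoes the entire distribution, the 5/6 of ugly-or-indifferent outcomes along with the 1/6 nice one.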

Just because the AI can only model something imperfectly, it does not mean that without the AI the future would be perfect, or even better on average than with the AI.

Comment author: NancyLebovitz 11 December 2013 03:58:18PM 0 points [-]

'Unmediated' may not have been quite the word to convey what I meant.

My impression is that CEV is permanently established very early in the AI's history, but I believe that what people are and want (including what we would want if we knew more, thought faster, were more the people we wished we were, and had grown up closer together) will change, both because people will be doing self-modification and because they will learn more.