Vladimir_Nesov comments on Open Thread: December 2009 - Less Wrong

Post author: CannibalSmith, 01 December 2009 04:25PM




Comment author: Vladimir_Nesov, 04 December 2009 12:09:46AM, 3 points

In the comment above, I explained why what an AI does is irrelevant so long as it isn't guaranteed to actually have the right values: once it goes unchecked, it simply reverts to whatever it actually prefers, whether in a flurry of hard takeoff or after a thousand years of close collaboration. "Safeguards", in every context I've seen, refer to things that enforce only behavior, not values, and that is not enough. Even the ideas for enforcing behavior look infeasible, but the more important point is that even if we win this round, with such an approach we still lose eventually.

Comment author: Johnicholas, 04 December 2009 03:12:41AM, 0 points

My symbiotic-ecology-of-software-tools scenario was not a serious proposal for the best strategy toward Friendliness. I was trying to increase the plausibility of SOME return at SOME cost, even granting that AIs could produce value.

I seem to have stepped onto a cached thought.

Comment author: Vladimir_Nesov, 04 December 2009 03:16:34AM, 1 point

I'm afraid I see the issue as clear-cut: you can't get "some" return; you can only win or lose (the probability of getting there is, of course, more amenable to small nudges).

Comment author: wedrifid, 04 December 2009 04:23:28AM, 0 points

> I seem to have stepped onto a cached thought.

Making such a statement significantly raises the standard of reasoning I expect from a comment. That is, I expect you to be either right or at least a step ahead of the person with whom you are communicating.