
Wei_Dai comments on Reframing the Problem of AI Progress

Post author: Wei_Dai 12 April 2012 07:31PM (21 points)



You are viewing a single comment's thread.

Comment author: Wei_Dai 12 April 2012 11:57:58PM 2 points

This version is essentially Eliezer's "complexity and fragility of values", right? I suggest we keep calling it that, instead of "orthogonality", which again sounds like too strong a claim and so makes it less likely that people will consider it seriously.

Comment author: Vladimir_Nesov 13 April 2012 01:13:33AM 1 point

This version is essentially Eliezer's "complexity and fragility of values", right?

Basically, but there is a separate point here: greater optimization power doesn't help with the problem, and instead makes it worse. I agree that the word "orthogonality" is somewhat misleading.

Comment author: Wei_Dai 13 April 2012 04:31:53PM 1 point

David Dalrymple was nice enough to illustrate my concern with "orthogonality" just as we're talking about it. :)

Comment author: Vladimir_Nesov 13 April 2012 05:12:56PM 0 points

...which also presented an opportunity to make a consequentialist argument for FAI under the assumption that all AGIs are good.