IlyaShpitser comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong

Post author: KatjaGrace 11 November 2014 02:00AM


Comment author: IlyaShpitser 12 November 2014 11:55:22AM

There are two issues:

(a) In what settings would you want an architecture like that?

(b) Ethics dictates that we not simply replace entities for the sake of efficiency, even when they disagree; that road leads to KILL ALL HUMANS. So we might end up with an architecture like that because of how history played out, and then it's just a brute fact.

I am guessing (a) has to do with "robustness" (I am not prepared to mathematise what I mean yet, but I am thinking about it).


People who think about UDT/blackmail are thinking precisely about how to win in the settings I am talking about.
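To make that concrete, here is a minimal toy sketch of the standard blackmail game (the payoffs and function names are my own, purely illustrative): a blackmailer who can predict the victim's policy only bothers threatening a victim who would give in, so the agent that commits in advance to never paying is never threatened at all.

    # Toy blackmail game: a predictor-blackmailer versus two victim policies.
    # All numbers are made up for illustration.

    PAY_COST = 5    # cost to the victim of paying the blackmailer
    HARM_COST = 10  # cost to the victim if the threat is carried out

    def blackmailer_threatens(victim_policy):
        """The blackmailer predicts the victim's policy and threatens
        only when it would profit, i.e. when the victim would pay."""
        return victim_policy(threatened=True)

    def victim_payoff(victim_policy):
        """Total payoff to the victim under the given policy."""
        if not blackmailer_threatens(victim_policy):
            return 0
        if victim_policy(threatened=True):
            return -PAY_COST
        return -HARM_COST  # the threat gets carried out

    give_in   = lambda threatened: threatened  # pays whenever threatened
    never_pay = lambda threatened: False       # ignores all threats

    print("give in  :", victim_payoff(give_in))    # -5: threatened, pays
    print("never pay:", victim_payoff(never_pay))  #  0: never threatened

The winning move here is a property of the whole policy, visible to the predictor in advance, not of the decision made after the threat arrives; that is the sense in which thinking about UDT/blackmail is thinking about how to win in these settings.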

Comment author: Luke_A_Somers 12 November 2014 01:52:15PM

Pick a side of this fence: will an AI trivially resist running in circles, or is its running in circles all that's saving us from KILL ALL HUMANS objectives, as you say in part (b)?

If the latter, we are so utterly screwed.