lukeprog comments on Lone Genius Bias and Returns on Additional Researchers - Less Wrong

24 Post author: ChrisHallquist 01 November 2013 12:38AM




Comment author: lukeprog 05 November 2013 04:27:20AM 0 points

Well, there are many ways it could turn out that making Kludge AI safe is the less-impossible option. The way I had in mind was that maybe goal stability and value-loading turn out to be surprisingly feasible with Kludge AI, and you really can just "bolt on" Friendliness. I suppose another way making Kludge AI safe could be the less-impossible option is if it turns out to be possible to keep superintelligences boxed indefinitely, and also to use them to keep non-boxed superintelligences from being built, or something. In which case boxing research would be more relevant.