
siIver comments on Existential risk from AI without an intelligence explosion - Less Wrong

Post author: AlexMennen | 25 May 2017 04:44PM | 13 points



Comment author: siIver | 26 May 2017 02:23:04PM | 0 points

This seems like something we should talk about more.

That said, as far as I know, there shouldn't be an either/or decision between motivation selection and capability-control measures. The former is obviously the more important part, but you can always "box" the AI in addition, insofar as that's compatible with what you want it to do.