
Dmytry comments on [draft] Concepts are Difficult, and Unfriendliness is the Default: A Scary Idea Summary - Less Wrong Discussion

Post author: Kaj_Sotala, 31 March 2012 10:07AM



Comment author: Dmytry, 31 March 2012 05:32:07PM (3 points)

Agreed. I get the same feeling, basically. On top of that, it seems to me that formalizing a fuzzily defined goal system, be it FAI or a paperclip maximizer, may well be impossible in practice (nobody can do it even in a toy model, given infinite computing power!). That leaves us either with neat AIs that implement something like 'maximize own future opportunities' (the AI has to be able to identify separate courses of action to begin with), or altogether with messy AIs (neural networks, cortical column networks, et cetera) to which none of the argument applies. If I put my speculative hat on, I can just as well make up an argument that the AI will be a Greenpeace activist, by considering what the simplest self-protective goal systems might be (and discarding the bias that the AI is self-aware in a man-like way).