
torekp comments on Concept Safety: Producing similar AI-human concept spaces

Post author: Kaj_Sotala, 14 April 2015 08:39PM




Comment author: torekp, 19 April 2015 08:55:16AM, 0 points

> one would still expect the parameters and properties of those algorithms to be at least partially genetically tuned towards the kinds of learning tasks that were most useful in the EEA

Compare jacob_cannell's earlier point that

> obviously for any set of optimization criteria, constraints (including computational), and dataset there naturally can only ever be a single optimal solution (emphasis added)

Do we know, or can we reasonably infer, what those optimization criteria were, so that we can implement them in our AI? If not, how likely is the optimal solution to differ, and by how much?
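
Not from the thread, but here is a toy way to picture the second question: for a strictly convex objective the optimum is indeed unique once the criteria are fixed, yet it moves as those criteria change. A minimal sketch in Python, where the "criterion" is the regularization weight in a ridge-regression objective (the objective, the lambda values, and the toy data are all illustrative assumptions, not anything the commenters specified):

```python
import numpy as np

# Toy illustration: a strictly convex objective has a single optimal
# solution for any fixed setting of the criteria, but that optimum
# shifts as the criteria change. Here the criterion is the
# regularization weight lam in ridge regression:
#   w*(lam) = argmin_w ||X w - y||^2 + lam * ||w||^2
#           = (X^T X + lam I)^{-1} X^T y

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                                   # toy dataset
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

def optimum(lam):
    """Closed-form unique minimizer for a given criterion setting lam."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Sweep the criterion and watch the unique optimum move.
for lam in (0.0, 1.0, 10.0, 100.0):
    w = optimum(lam)
    print(f"lam={lam:6.1f}  w*={np.round(w, 3)}")
```

On this reading, "by how much would the optimal solution change" becomes a concrete quantity: the distance between w*(lam) and w*(lam') for two candidate criterion settings, which one can measure directly in the sketch above.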