
lfghjkl comments on Goal retention discussion with Eliezer - Less Wrong Discussion

Post author: MaxTegmark 04 September 2014 10:23PM 56 points



You are viewing a single comment's thread.

Comment author: lfghjkl 05 September 2014 07:01:03PM 6 points

Very relevant article from the Sequences: Detached Lever Fallacy.

Not saying you're committing this fallacy, but it does explain some of the bigger problems with "raising an AI like a child" that you might not have thought of.

Comment author: ciphergoth 06 September 2014 12:41:08PM 4 points

I completely made this mistake right up until the point I read that article.

Comment author: TheAncientGeek 06 September 2014 03:51:24PM -1 points

Hardly dispositive. A utility function that says "learn and care what your parents care about" looks relatively simple on paper. And we know the minimum intelligence required is that of a human toddler.

Comment author: VAuroch 06 September 2014 08:59:31PM 1 point

A utility function that says "learn and care what your parents care about" looks relatively simple on paper.

Citation needed. That sounds extremely complex to specify.

Comment author: TheAncientGeek 06 September 2014 09:09:26PM -1 points

"relatively"

Comment author: VAuroch 06 September 2014 10:53:06PM 1 point

I don't think "learn and care about what your parents care about" is noticeably simpler to specify than abstractly trying to determine what an arbitrary person cares about, or than CEV.
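
To make the "extremely complex to specify" point concrete, here is a minimal sketch (in Python, with every name hypothetical) of what the one-sentence utility function would have to expand into before it could actually be run. Each stub stands in for a hard open problem in value learning; the point is not this particular decomposition but that the short English phrase hides all of it.

    # Minimal sketch (all names hypothetical) of what "learn and care what your
    # parents care about" would have to expand into if written as an actual
    # utility function. Each stub below is a hard open problem, which is the
    # point: the English sentence is short, but the specification is not.

    from typing import Callable, List

    WorldState = dict          # placeholder for the agent's world-model state
    Observation = object       # placeholder for observed behaviour/data


    def identify_parents(world: WorldState) -> List[object]:
        """Pick out which parts of the world model count as 'the parents'.
        Needs an ontology of persons that stays stable as the model changes."""
        raise NotImplementedError


    def infer_preferences(person: object,
                          observations: List[Observation]) -> Callable[[WorldState], float]:
        """Recover what a person cares about from their behaviour.
        This is value learning / inverse reinforcement learning, unsolved in general."""
        raise NotImplementedError


    def aggregate(prefs: List[Callable[[WorldState], float]]) -> Callable[[WorldState], float]:
        """Combine possibly conflicting preferences into one utility function."""
        raise NotImplementedError


    def utility(world: WorldState, observations: List[Observation]) -> float:
        parents = identify_parents(world)
        prefs = [infer_preferences(p, observations) for p in parents]
        return aggregate(prefs)(world)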