gRR comments on How likely the AI that knows it's evil? Or: is a human-level understanding of human wants enough? - Less Wrong

Post author: ChrisHallquist 21 May 2012 05:19AM


Comments (29)


Comment author: cousin_it 21 May 2012 09:58:36AM  8 points

Yeah, it's weird that Eliezer's metaethics and FAI seem to rely on figuring out "true meanings" of certain words, when Eliezer also wrote a whole sequence explaining that words don't have "true meanings".

For example, Eliezer's metaethical approach (if it worked) could be used to actually answer questions like "if a tree falls in the forest and no one's there, does it make a sound?", not just declare them meaningless :-) Namely, it would say that "sound" is not a confused jumble of "vibrations of air" and "auditory experiences", but a coherent concept that you can extrapolate by examining lots of human brains. Funny I didn't notice this tension until now.

Comment author: gRR 21 May 2012 10:27:53AM  0 points

Does it rely on true meanings of words, particularly? Why not on concepts? Individually, "vibrations of air" and "auditory experiences" can be coherent.

Comment author: cousin_it 21 May 2012 11:39:16AM  1 point

What's the general algorithm you can use to determine if something like "sound" is a "word" or a "concept"?

Comment author: gRR 21 May 2012 12:29:00PM  0 points

If it extrapolates coherently, then it's a single concept, otherwise it's a mixture :)

This may actually be doable, even at the present level of technology. You gather a huge text corpus, find the contexts where the word "sound" appears, and do clustering using some word co-occurrence metric. The result is a list of different meanings of "sound", and a mapping from each mention to its specific meaning. You can also do this simultaneously for many words together; then it becomes a global optimization problem.
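A minimal sketch of the idea described above: collect the contexts in which an ambiguous word ("sound") occurs, turn each into a co-occurrence vector, and cluster the vectors into senses. The toy corpus, the stopword list, and the one-pass seeded assignment are all illustrative assumptions, not a real word-sense induction system (which would use a huge corpus and proper clustering).

```python
from collections import Counter
import math

# Illustrative assumptions: a tiny corpus with "sound" used in two senses,
# and a small stopword list so function words don't dominate the vectors.
STOPWORDS = {"the", "a", "of", "was", "and", "in", "as", "on",
             "her", "because", "every", "through"}
TARGET = "sound"

corpus = [
    "the sound wave traveled through the air as a vibration",
    "vibration of the air produced a loud sound in the forest",
    "the sound of the falling tree was a pressure wave in the air",
    "her argument was sound and the logic was valid",
    "a sound argument rests on valid premises and careful logic",
    "the proof was sound because every step of the logic was valid",
]

def context_vector(sentence):
    """Co-occurrence counts: every non-stopword in the sentence
    except the target word itself."""
    return Counter(w for w in sentence.split()
                   if w != TARGET and w not in STOPWORDS)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = [context_vector(s) for s in corpus]

# One-pass assignment to two seed clusters (the first and last occurrences,
# assumed here to come from different senses of "sound").
seeds = [vectors[0], vectors[-1]]
labels = [max((0, 1), key=lambda c: cosine(v, seeds[c])) for v in vectors]

print(labels)  # [0, 0, 0, 1, 1, 1]: "vibration" sense vs. "valid" sense
```

On this toy data the occurrences split cleanly into a "vibrations of air" cluster and a "valid argument" cluster, matching the comment's point: if the contexts of a word fall into separate clusters, the word names a mixture of concepts rather than a single one.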

Of course, AGI would be able to do this at a deeper level than this trivial syntactic one.