gRR comments on How likely the AI that knows it's evil? Or: is a human-level understanding of human wants enough? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If it extrapolates coherently, then it's a single concept, otherwise it's a mixture :)
This may actually be doable, even at the present level of technology. You gather a huge text corpus, find the contexts where the word "sound" appears, and cluster them using some word co-occurrence metric. The result is a list of the different meanings of "sound", plus a mapping from each mention to its specific meaning. You can also do this for many words simultaneously, in which case it becomes a global optimization problem.
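A minimal sketch of the procedure described above, under some assumptions: a tiny hand-made corpus (a real system would use a huge one), a fixed k=2 (a real system would infer the number of senses), and plain k-means over bag-of-words context vectors as the "co-occurrence metric". All names and the corpus are illustrative, not from the original comment.

```python
from collections import Counter

TARGET = "sound"
STOPWORDS = {"the", "a", "an", "of", "his", "on", "from"}

# Toy corpus: three mentions of "sound" in the acoustic sense,
# three in the "valid reasoning" sense.
corpus = [
    "the loud sound of music filled the room",
    "a loud sound came from the speaker",
    "music from the speaker made a loud sound",
    "his sound argument used valid logic",
    "a sound argument rests on valid logic",
    "valid logic makes an argument sound",
]

def context_words(sentence):
    """Words co-occurring with the target, minus stopwords and the target itself."""
    return Counter(w for w in sentence.split()
                   if w != TARGET and w not in STOPWORDS)

# Fixed vocabulary so every context becomes a dense vector of the same length.
vocab = sorted({w for s in corpus for w in context_words(s)})

def to_dense(counter):
    return [counter[w] for w in vocab]

vectors = [to_dense(context_words(s)) for s in corpus]

def kmeans(points, k=2, iters=20):
    """Plain k-means; seeded with one point per expected sense for determinism."""
    centroids = [list(points[0]), list(points[3])]
    labels = [0] * len(points)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for i, p in enumerate(points):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            labels[i] = dists.index(min(dists))
            clusters[labels[i]].append(p)
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return labels

labels = kmeans(vectors)
print(labels)  # → [0, 0, 0, 1, 1, 1]: acoustic mentions vs. logical mentions
```

Each cluster is one induced "meaning" of the word, and the label list is exactly the mention-to-meaning mapping the comment describes. Doing this for many words at once would couple the clusterings (each word's context vectors are built from other words' assignments), which is what turns it into a global optimization problem.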
Of course, an AGI would be able to do this at a deeper level than this trivial syntactic one.