Eliezer_Yudkowsky comments on Intuitive supergoal uncertainty - Less Wrong

Post author: JustinShovelain 04 December 2009 05:21AM

Comment author: Mitchell_Porter 05 December 2009 04:28:15AM 9 points

It's a total digression from this post, but: it occurs to me that someone ought to try to figure out what the "supergoal" or utility function of C. elegans is, or what the coherent extrapolated volition of the C. elegans species might be. That organism's nervous system has been mapped down to every last neuron (not so hard, since the hermaphrodite has only 302 of them). If we can't make a C. elegans-Friendly AI given that information, we certainly can't do it for H. sapiens.

Comment author: Eliezer_Yudkowsky 05 December 2009 08:22:43AM 1 point

My understanding is that we have the connection map, but no one has successfully simulated the organism's behavior from it.