wedrifid comments on Intuitive supergoal uncertainty - Less Wrong

4 Post author: JustinShovelain 04 December 2009 05:21AM




Comment author: wedrifid 05 December 2009 04:53:54AM

If we can't make a C. elegans-Friendly AI given that information, we certainly can't do it for H. sapiens.

I like the suggestion you make, but I would perhaps stop just short of certainty. It is not unreasonable to suppose that a supergoal or utility function is something that evolved alongside higher-level adaptations like, say, executive function and goal-directed behaviour. C. elegans just wouldn't get much benefit from having a supergoal encoded in its nervous system.

Looking at the difficulty of creating a C. elegans-FAI would highlight one of the difficulties with FAI in general: the inevitable and somewhat arbitrary decision about just how much weight we want to give to the implicit goals of humanity. The line between terminal and instrumental values is somewhat dependent on one's perspective.