ArisKatsaris comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM


Comment author: PhilGoetz 07 September 2013 08:21:14PM *  2 points

I think it's a question of what you program in, and what you let it figure out for itself. If you want to prove formally that it will behave in certain ways, you would like to program in explicitly, formally, what its goals mean. But I think that "human pleasure" is such a complicated idea that trying to program it in formally is asking for disaster. That's one of the things that you should definitely let the AI figure out for itself. Richard is saying that an AI as smart as a smart person would never conclude that human pleasure equals brain dopamine levels.

Eliezer is aware of this problem, but hopes to avoid disaster by being especially smart and careful. That approach has, I think, a bad expected outcome.

Comment deleted 12 September 2013 10:51:22AM
Comment author: ArisKatsaris 12 September 2013 10:59:45AM *  4 points

> People manage to be friendly without a priori knowledge of everyone else's preferences. Human values are very complex... and one person's preferences are not another's.

Being the same species comes with certain advantages for the possibility of cooperation. But I wasn't very friendly towards a wasp nest I discovered in my attic. People aren't very friendly to the vast majority of different species they deal with.

Comment deleted 12 September 2013 12:36:10PM
Comment author: ArisKatsaris 12 September 2013 05:14:05PM 3 points

I'm superintelligent in comparison to wasps, and I still chose to kill them all.