shokwave comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

45 points | Post author: lukeprog 01 February 2011 02:15PM


Comments (249)


Comment author: XiXiDu 03 February 2011 12:03:56PM 2 points

Depends on what would satisfy us, I suppose.

It might turn out that what satisfies us is to be "free", to do what we want, even if that means we will mess up our own future. It might turn out that humans are only satisfied if they can work on existential problems: "no risk, no fun". Or we might simply want to learn about the nature of reality. The mere existence of an FAI might spoil all of that. Would you care to do science if there were some AI-God that already knew all the answers? Would you be satisfied if it didn't tell you the answers, or made you forget that it exists, so that you'd keep trying to invent AGI without ever succeeding?

But there is another possible end. Even today many people are really bored and don't particularly enjoy life. What if it turns out that there is no "right" out there, or that it can be reached fairly easily, with no way to maximize it further? In other words, what if fun is not infinite but a goal that can be reached? What if it all turns out to be wireheading, the only difference between 10 minutes of wireheading and 10^1000 years being the number that enumerates the elapsed time? Think about it: would you care about 10^1000 years of inaction? What would you do if that were the optimum? Maybe we'll just decide to choose the void instead.

Comment author: shokwave 03 February 2011 12:20:49PM 1 point

"It might turn out that"

It might indeed.