Abigail comments on Free to Optimize - Less Wrong

Post author: Eliezer_Yudkowsky, 02 January 2009 01:41AM

Comment author: Abigail, 02 January 2009 01:32:14PM, 0 points

Do the humans know that the Friendly AI exists?

Speaking from my own motivation: if I knew that the rules of the game had been made easier than independent life would be, I would lack all motivation to work. Would the FAI allow me to kill myself, or to harm others? If not, then why not provide a Culture-like existence?

I would want to be able to drop out of the game now and then, and rest in an easier habitat. Humans can despair; if the game is too painful, they will.

A good parent will bring a child on, setting challenges that are just difficult enough to be interesting, without being so difficult that failure is guaranteed. If the FAI will always be more superior to any individual than any parent can be to a child, could one opt to be challenged like that, directly by the FAI, so as to reach one's greatest potential?

What I want are fundamental choices, not choices within a scheme the FAI dreams up.