timtyler comments on Evaluating the feasibility of SI's plan - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (186)
If you check with Creating Friendly AI you will see that the term is defined by its primary proponent as follows:
It's an anthropocentric term. Only humans would care about creating this sort of agent. You would have to redefine the term if you want to use it to refer to something more general.
Half specifically referred to "creating a successor that shares its goals"; this is the problem we face when building an FAI. Nobody is saying that an agent with arbitrary goals must at some point face the challenge of building an FAI.
(Incidentally, while "Friendly" is anthropocentric by default, in common usage analogous concepts relating to other species are referred to as "Friendly to X" or "X-Friendly", just as "good" is by default used to mean "by human standards", but is sometimes used as "good for X".)