RobbBB comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM


Comment author: TheAncientGeek 29 April 2014 03:25:35PM 0 points

Software that looks friendly isn't really friendly in the sense of genuinely understanding what we want. But it isn't dangerously unfriendly either, since we're still here. If it's commercially successful, it's friendly enough for us to want it in our lives.

Comment author: RobbBB 29 April 2014 11:10:58PM 0 points

Human beings aren't friendly, in the Friendly-AI sense. If a random human acquired immense power, it would probably result in an existential catastrophe. Humans do have a better sense of human value than, say, a can opener does; but they also have more power and autonomy than a can opener, so they need fuller access to human values in order to reach similar safety levels. A superintelligent AI would require even fuller access to human values to reach comparable safety levels.

Comment author: TheAncientGeek 30 April 2014 12:06:21PM 1 point

There is more than one sense of "friendly AI".

If you grafted absolute power onto a human with average ethical insight, you might get absolute corruption. But what is that analogous to in AI terms? Why assume asymmetric development by default?

If you assume a top-down singleton AI with a walled-off ethics module, things look difficult. If you reverse those assumptions, FAI is already happening.