Juno_Watt comments on Muehlhauser-Wang Dialogue - Less Wrong Discussion

Post author: lukeprog 22 April 2012 10:40PM

Comment author: Juno_Watt 23 May 2013 02:05:25PM 2 points [-]

Children routinely grow up to be awful people.

On average, they grow up to be average people. They generally don't grow up to be Genghis Khan or a James Bond villain, which is what the UFAI scenario predicts. FAI only needs to produce AIs that are as good as the average person, however bad that may be.

Comment author: TheOtherDave 23 May 2013 04:05:31PM 2 points [-]

How dangerous would an arbitrarily selected average person be to the rest of us if given significantly superhuman power?

Comment author: Juno_Watt 24 May 2013 01:34:04AM *  1 point [-]

The topic is intelligence. Some people have superhuman intelligence (well, greater than that of 99.99% of humans), and we are generally not afraid of them. We expect them to have ascended to the higher reaches of the Kohlberg hierarchy. There doesn't seem to be a problem of Unfriendly Natural Intelligence. We don't kill off smart people on the basis that they might be a threat. We don't refuse people education on the grounds that we don't know what they will do with all that dangerous knowledge. (There may have been societies that worked that way, but they don't seem to be around any more.)

Comment author: TheOtherDave 24 May 2013 03:42:21AM 0 points [-]

Agreed with all of this.