lessdazed comments on Open thread, October 2011 - Less Wrong

Post author: MarkusRamikin 02 October 2011 09:05AM




Comment author: lessdazed 03 October 2011 07:48:15AM 15 points

Based on the abstract, it's not worth my time to read it.

Abstract. Insanity is doing the same thing over and over and expecting a different result. “Friendly AI” (FAI) meets these criteria on four separate counts by expecting a good result after: 1) it not only puts all of humanity’s eggs into one basket but relies upon a totally new and untested basket, 2) it allows fear to dictate our lives, 3) it divides the universe into us vs. them, and finally 4) it rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a “Coherent Extrapolated Volition of Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB) is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon?

Points 2), 3), and 4) are simply inane.

Comment author: [deleted] 03 October 2011 04:50:34PM 6 points

Upvoted, agreed, and addendum: similarly inane is the cliché "insanity is doing the same thing over and over and expecting a different result."