ilzolende comments on Open thread, Jan. 19 - Jan. 25, 2015 - Less Wrong Discussion

3 Post author: Gondolinian 19 January 2015 12:04AM

Comment author: blogospheroid 20 January 2015 11:50:57AM 0 points

A booster for prioritizing getting AI values right is the two-sidedness of the process: it carries both existential risk and existential benefit.

To illustrate: if you solve poverty, you still have to face climate change; if you solve climate change, you still have to face biopathogens; if you solve biopathogens, you still have to face nanotech; if you solve nanotech, you still have to face superintelligence (SI). But if you solve SI correctly, the rest are all taken care of. For people who raise the cui bono argument, I think this is usually the best answer to give.

Comment author: JoshuaZ 20 January 2015 01:26:29PM 0 points

This assumes that you get a very strong singularity, with either a hard takeoff or at least a fairly fast one. If someone doesn't assign a high probability to AI engaging in recursive self-improvement, this argument will be unpersuasive.