
Lumifer comments on Open thread, Oct. 10 - Oct. 16, 2016 - Less Wrong Discussion

Post author: MrMind 10 October 2016 07:00AM




Comment author: DanArmak 10 October 2016 02:55:47PM 2 points

We do know it isn't an AI that kills us. Options (b) and (c) still qualify.

Comment author: Lumifer 10 October 2016 03:10:09PM 1 point

Options (b) and (c) are basically wishes, and those are complex X-D

"Not kill us" is an easy criterion; we already have an AI like that, and it plays Go well.

Comment author: DanArmak 10 October 2016 04:18:24PM 3 points

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Comment author: Lumifer 10 October 2016 04:43:09PM 0 points

If it's a tool AGI, I don't see how it would help with friendliness; and if it's an active self-developing AGI, I thought the canonical position of LW was that there could be only one, and that by then it's too late to do anything about friendliness?

Comment author: DanArmak 10 October 2016 09:32:01PM 0 points

I agree there would probably only be one successful AGI, so it's not the first step of many. I meant it would be a step in that direction. Poor phrasing on my part.