DanielLC comments on Be Wary of Thinking Like a FAI - Less Wrong Discussion

Post author: kokotajlod 18 July 2014 08:22PM 6 points

Comments (25)

Comment author: DanielLC 18 July 2014 09:06:28PM 1 point

The ideal FAI wouldn't care about its personal identity over time;

Why not? If the FAI is trying to maximize human values, and humans consider death to have negative value and consider the FAI to be alive, then the FAI would try to die as little as possible, presumably by cloning itself less. It might clone itself a great deal early on, so that it can prevent other people from dying and otherwise do enough good to make the sacrifice worthwhile, but it would still care about its personal identity over time.

Comment author: ThisSpaceAvailable 19 July 2014 03:54:50AM 0 points

You're equivocating. Humans consider the death of humans to have negative value. If the humans who create the FAI don't assign negative value to AI death, then the FAI won't either.

Comment author: DanielLC 19 July 2014 05:45:29AM 0 points

It's not clear that humans wouldn't assign negative value to AI death. An FAI is certainly intelligent; what's unclear is which other requirements matter for something's death to count as bad, and which of those requirements an AI would fulfill.