Eliezer_Yudkowsky comments on Best career models for doing research? - Less Wrong

27 Post author: Kaj_Sotala 07 December 2010 04:25PM




Comment author: Eliezer_Yudkowsky 10 December 2010 09:37:28PM 4 points

You intend to give it a morality based on the massed wishes of humanity -

See the "Last Judge" section of the CEV paper.

Therefore, you are, by your own statements, raising the risk of my infinite torture from zero to a tiny non-zero probability.

As Vladimir observes, the alternative to SIAI doesn't involve nothing new happening.

Comment author: [deleted] 10 December 2010 09:45:58PM 3 points

That just pushes the problem back a step. IF the Last Judge can't be mistaken about the results of running the AI AND the Last Judge is willing to sacrifice the utility of the mass of humanity (including hirself) to protect one or more people from being tortured, then it's safe. That's very far from saying there's a zero probability.

Comment author: ata 11 December 2010 12:28:59AM 2 points

IF ... the Last Judge is willing to sacrifice the utility of the mass of humanity (including hirself) to protect one or more people from being tortured, then it's safe.

If the Last Judge peeks at the output and finds that it's going to decide to torture people, that doesn't imply abandoning FAI, it just requires fixing the bug and trying again.