Peter_de_Blanc comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (682)
If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?
About what? Everything?
Given the context of Eliezer's life-mission and the well-known disagreements between Robin and Eliezer: FAI, the timing of AI, and its general character.
Right. Robin doesn't buy the "AI go foom" model, or the claim that formulating and instilling a foolproof morality/utility function will be necessary to save humanity.
I do miss the interplay between the two at OB.