Peter_de_Blanc comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

16 Post author: MichaelGR 11 November 2009 03:00AM




Comment author: retired_phlebotomist 13 November 2009 07:10:04AM 16 points [-]

If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?

Comment author: Peter_de_Blanc 16 November 2009 04:30:50AM 0 points [-]

About what? Everything?

Comment author: gwern 16 November 2009 04:59:06AM 3 points [-]

Given the context of Eliezer's life-mission and the general agreement of Robin & Eliezer: FAI, AI's timing, and its general character.

Comment author: retired_phlebotomist 17 November 2009 07:22:17AM 1 point [-]

Right. Robin doesn't buy the "AI go foom" model, or the claim that formulating and instilling a foolproof morality/utility function will be necessary to save humanity.

I do miss the interplay between the two at OB.