Lara_Foster comments on Morality as Fixed Computation - Less Wrong

Post author: Eliezer_Yudkowsky 08 August 2008 01:00AM


Comment author: Lara_Foster 08 August 2008 04:09:10PM 0 points

I should also add:

6) Where do you place the odds of you/your institute creating an unfriendly AI in an attempt to create a friendly one?

7) Do you have any external validation of this estimate (i.e., from someone unassociated with your institute and not currently worshiping you), or does it come exclusively from calculations *you* made?