Lara_Foster comments on Morality as Fixed Computation - Less Wrong

Post author: Eliezer_Yudkowsky 08 August 2008 01:00AM

Comment author: Lara_Foster 08 August 2008 04:00:43PM 0 points

Eliezer, I have a few practical questions for you. If you don't want to answer them in this thread, that's fine, but I am curious:

1) Do you believe humans have a chance of achieving uploading without the use of a strong AI? If so, where do you place the odds?

2) Do you believe that uploaded human minds might be capable of improving themselves/increasing their own intelligence within the framework of human preference? If so, where do you place the odds?

3) Do you believe that increased-intelligence uploaded humans might be able to create an fAI with more success than we meat-men? If so, where do you place the odds?

4) Where do you place the odds of you/your institute creating an fAI faster than 1-3 occurring?

5) Where do you place the odds of someone else creating an unfriendly AI faster than 1-3 occurring?

Thank you!!!