JamesAndrix comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM


Comment author: JamesAndrix 02 November 2010 05:54:30AM 2 points [-]

I'm myself a bit suspicious of whether the argument for strong self-improvement is as compelling as it sounds, though. Something you have to take into account is whether it is possible to predict that a transcendence will leave your goals intact, e.g. whether you can be sure you will still care about bananas after going from chimphood to personhood.

Isn't that exactly the argument against non-proven AI values in the first place?

If you expect AI-chimp to be worried that AI-superchimp won't love bananas, then you should be very worried about AI-chimp.

I don't get what you're saying about the paperclipper.

Comment author: XiXiDu 02 November 2010 09:04:08AM 1 point [-]

It is a reason not to transcend if you are not sure that you'll still be you afterwards, i.e. that you'll keep your goals and values. I just wanted to point out that the argument runs in both directions. It is an argument for the fragility of values, and therefore for the dangers of fooming, but also an argument for the difficulty that could be associated with radically transforming yourself.