Jack comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog 29 January 2011 02:52AM 33 points


Comment author: Will_Newsome 29 January 2011 03:05:48AM 3 points

Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine; on the other hand, the more rational the machine is, the better its values will be.

Are there any LW-rationalist-vetted philosophical papers on this theme from recent times? I'm somewhat skeptical of the idea that there isn't a universal morality (relative to some generalized Occamian prior-like thing) that even a paperclip maximizer would converge to, if it were given the right decision-theoretic (not necessarily moral per se) tools for philosophical reasoning. That's by no means guaranteed, so we should of course still be careful when designing AGIs.

Comment author: Jack 29 January 2011 04:56:51AM 6 points

Since it keeps coming up, I think I'll write a top-level post on the subject; I'll probably do some research while writing, so I'll see what has been written recently. Hopefully I'll publish in the next week or two.