timtyler comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong

Post author: lukeprog 04 March 2012 06:06AM




Comment author: timtyler 05 March 2012 08:10:30PM 1 point

I believe that it is practically impossible to systematically and consistently assign utility to world states. I believe that utility cannot even be grounded, and therefore cannot be defined. I don't think that there exists anything like "human preferences", and therefore human utility functions, apart from purely theoretical, highly complex, and therefore computationally intractable approximations. I don't think that there is anything like a "self" that can be used to define what constitutes a human being, not practically anyway. I don't believe that it is practically possible to decide what is morally right and wrong in the long term, not even for a superintelligence.

Strange stuff.

Surely "right" and "wrong" make the most sense in the context of a specified moral system.

If you are using those terms outside such a context, it usually implies some kind of moral realism, in which case one wonders what sort of moral realism you have in mind.