XiXiDu comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
No. I believe that it is practically impossible to systematically and consistently assign utility to world states. I believe that utility cannot even be grounded, and therefore cannot be defined. I don't think that there exists anything like "human preferences", and therefore no human utility functions, apart from purely theoretical, highly complex, and therefore computationally intractable approximations. I don't think that there is anything like a "self" that can be used to define what constitutes a human being, not practically anyway. I don't believe that it is practically possible to decide what is morally right and wrong in the long term, not even for a superintelligence.
I believe that stable goals are impossible and that any attempt at extrapolating the volition of people will alter it.
Besides, I believe that we won't be able to figure out any of the following in time:
I further believe that the following problems are impossible to solve, or else constitute a reductio ad absurdum of certain ideas:
Strange stuff.
Surely "right" and "wrong" make the most sense in the context of a specified moral system.
If you are using those terms outside such a context, it usually implies some kind of moral realism, in which case one wonders what sort of moral realism you have in mind.