XiXiDu comments on Siren worlds and the perils of over-optimised search - Less Wrong

Post author: Stuart_Armstrong | 07 April 2014 11:00AM

Comment author: XiXiDu | 14 May 2014 08:48:12AM | -1 points

> However, even a cursory look at the actual research literature shows that the mathematically most simple agents (i.e., those that get discovered first by rational researchers interested in finding universal principles behind the nature of intelligence) are capital-U Unfriendly, in that they are expected-utility maximizers...

If I believed that anything as simple as AIXI could possibly result in practical general AI, or that expected-utility maximization were at all feasible, then I would tend to agree with MIRI. I believe neither. And I think it makes no sense to draw conclusions about practical AI from these models.
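To pin down the term: an expected-utility maximizer considers the outcomes each action could produce, weights each outcome's utility by its probability under the agent's world model, and picks the action with the highest weighted sum. A minimal sketch in Python (all names and numbers here are invented for illustration; AIXI is essentially this template with a Solomonoff-induction world model over all computable environments, which is what makes it uncomputable):

```python
def expected_utility(action, world_model, utility):
    # world_model(action) returns a dict mapping outcomes to probabilities.
    return sum(p * utility(outcome)
               for outcome, p in world_model(action).items())

def best_action(actions, world_model, utility):
    # An expected-utility maximizer: take the action with the highest
    # probability-weighted average utility.
    return max(actions, key=lambda a: expected_utility(a, world_model, utility))

# Toy example with made-up probabilities and utilities:
world_model = lambda a: ({"good": 0.9, "bad": 0.1} if a == "safe"
                         else {"good": 0.5, "bad": 0.5})
utility = lambda outcome: 1.0 if outcome == "good" else 0.0

print(best_action(["safe", "risky"], world_model, utility))  # -> "safe"
```

The sketch only fixes the definition; it says nothing about how to obtain a usable world model or utility function, which is where all the practical difficulty lives.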

> ...if you are talking about the social process of AGI development: plainly, humans want to develop AGI that will do what humans intend for it to do.

This is crucial.

> Did you actually expect that in this utterly uncaring universe of blind mathematical laws, you would find that intelligence necessitates certain values?

That's largely irrelevant and misleading. Your autonomous car does not need to encode a portion of human values that corresponds to its level of autonomy.
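For concreteness, here is a hypothetical sketch (invented names and weights, not taken from any real self-driving stack) of what a car's planning objective actually looks like: a narrow, hand-tuned cost over candidate trajectories, with no general theory of human values anywhere in it:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    collision_probability: float
    max_jerk: float        # rate of change of acceleration (comfort proxy)
    lane_deviation: float  # metres from lane centre
    speed: float           # m/s
    speed_limit: float     # m/s

def trajectory_cost(t: Trajectory) -> float:
    # Task-specific penalties with hand-picked weights; safety dominates.
    return (1e6 * t.collision_probability
            + 10.0 * t.max_jerk
            + 50.0 * t.lane_deviation
            + 1.0 * abs(t.speed - t.speed_limit))

# The planner simply picks the cheapest candidate trajectory:
candidates = [Trajectory(0.0010, 0.2, 0.1, 30.0, 30.0),
              Trajectory(0.0001, 0.5, 0.3, 28.0, 30.0)]
best = min(candidates, key=trajectory_cost)
```

More autonomy means more such task-specific terms and better perception, not a progressively more complete encoding of everything humans care about.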

> Otherwise, the genie will know, but not care.

That post has been completely debunked.

ETA: Fixed a link to expected utility maximization.