Dmytry comments on SMBC comic: poorly programmed average-utility-maximizing AI - Less Wrong

9 points | Post author: Jonathan_Graehl | 06 April 2012 07:18AM




Comment author: Dmytry 06 April 2012 01:53:53PM * 0 points

> Anything that happens without a causal line of descent from human values is unlikely to align with human values.

Unlikely to align how, exactly? There are also common causes, you know: A and B can be correlated when A causes B, when B causes A, or when some C causes both A and B.
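A minimal sketch of the common-cause case (my illustration, not part of the original comment; the coefficients are arbitrary): C drives both A and B, so A and B correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

c = rng.normal(size=n)             # common cause C
a = 2.0 * c + rng.normal(size=n)   # A depends on C plus independent noise
b = -1.5 * c + rng.normal(size=n)  # B depends on C plus independent noise

# No direct causal link between A and B, yet they correlate:
# analytically corr(A, B) = (2 * -1.5) / sqrt(5 * 3.25), about -0.74.
print(np.corrcoef(a, b)[0, 1])
```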

It seems to me that you can demand an arbitrarily high degree of alignment to arrive at an arbitrarily low likelihood, but some alignment via a common cause is nonetheless probable.

Comment author: Incorrect 06 April 2012 02:22:42PM 0 points

Well yes, but I would assume you would want more alignment, not less.

Comment author: Dmytry 06 April 2012 02:49:13PM * 0 points

There's such a thing as over-fitting: if you have noisy data, the "theory" that fits the data best is just a table of the data itself (e.g. heights and falling times); a useful theory doesn't fit the data exactly in practice. If we made the AI a perfect fit to what mankind does, we could just as well make a brick and proclaim it an omnipotent, omniscient, mankind-friendly AI that will never stop mankind from doing anything mankind wants (including taking extinction risks).
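A minimal sketch of the over-fitting point (my illustration, with made-up numbers), using the heights-and-falling-times example: a degree-9 polynomial through 10 noisy measurements reproduces the sample exactly, table-of-the-data style, while the simple physical model t = sqrt(2h/g) fits the sample imperfectly but predicts a new height better.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
g = 9.81  # m/s^2

heights = np.linspace(1.0, 20.0, 10)                        # drop heights (m)
times = np.sqrt(2 * heights / g) + rng.normal(0, 0.05, 10)  # noisy timings (s)

# "Theory as a table of the data": 10 coefficients for 10 points,
# so the polynomial passes through every noisy measurement exactly.
table_fit = Polynomial.fit(heights, times, deg=9)

h_new = 12.3  # a height not in the sample
print(table_fit(h_new))        # prediction contaminated by the memorized noise
print(np.sqrt(2 * h_new / g))  # simple physical model: about 1.58 s
```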