timtyler comments on Complexity of Value ≠ Complexity of Outcome - Less Wrong

32 Post author: Wei_Dai 30 January 2010 02:50AM

Comment author: jhuffman 30 January 2010 03:53:41AM -1 points

If it were the case that only a few of our values scale, then we can potentially obtain almost all that we desire by creating a superintelligence with just those values.

Can we really expect a superintelligence to stick with the values we give it? Our own values change over time, sometimes without any external stimulus, just through internal reflection. I don't see how we can bound a superintelligence without doing more computation than we expect it to do over its lifetime.

Comment author: timtyler 30 January 2010 01:35:17PM 0 points

Quite a bit of ink has been spilled on this issue. Eliezer Yudkowsky and Steve Omohundro have argued that a superintelligence can preserve the values it is given. Have you examined their arguments?