
gRR comments on Formalizing Value Extrapolation

Post author: paulfchristiano 26 April 2012 12:51AM (14 points)


Comment author: gRR 26 April 2012 03:23:47AM 1 point

Possible objection: the proposal appears to fix U in terms of a mathematical description H of some current human brain. What happens in the future, when humans significantly self-modify?

Comment author: paulfchristiano 26 April 2012 05:10:50AM 2 points

H is used to start off the process. H is then able to interact with a hypothetical unbounded computer, which may eventually run many (potentially quite exotic) simulations, among them simulations of the sorts of minds humans self-modify into.
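
A minimal toy sketch of the shape of this construction, in Python. Every name here is hypothetical and purely illustrative (UnboundedComputer, ToyDeliberator, and the method names are not part of the actual proposal); it is meant only to show how U can be fixed by a snapshot H while still depending on minds that do not yet exist:

    # Toy sketch of the shape of the construction, not a real
    # implementation; all names here are hypothetical illustrations.

    class UnboundedComputer:
        """Stands in for the hypothetical unbounded computer in U's
        definition: it can run arbitrary simulations, including
        simulations of the minds humans later self-modify into."""
        def run(self, simulation):
            # In the formalism this is a mathematical object being
            # reasoned about, not code we could ever execute.
            return simulation()

    class ToyDeliberator:
        """Trivial stand-in for H, a mathematical description of a
        current human brain."""
        def __init__(self):
            self.evidence = []
        def done(self):
            return len(self.evidence) >= 1  # stop after one simulation
        def propose_simulation(self, outcome):
            # H chooses which (possibly exotic) simulation to consult.
            return lambda: f"simulated judgment about {outcome!r}"
        def update(self, result):
            self.evidence.append(result)
            return self
        def verdict(self, outcome):
            return 1.0  # toy utility value

    def U(outcome, H):
        """U is fixed by H, but H deliberates with access to unbounded
        computation, so its verdict can route through simulations of
        future self-modified minds."""
        computer = UnboundedComputer()
        deliberator = H  # the process starts from H...
        while not deliberator.done():
            sim = deliberator.propose_simulation(outcome)
            result = computer.run(sim)  # ...but may range far beyond H
            deliberator = deliberator.update(result)
        return deliberator.verdict(outcome)

    print(U("some future world", ToyDeliberator()))  # -> 1.0

The one load-bearing feature of the sketch: U's definition mentions only H, yet U's output depends on whatever simulations H would choose to consult, which is why fixing H today need not freeze the values at today's snapshot.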

Comment author: gRR 26 April 2012 11:09:16AM -1 points

But your point (as I understood it) is that all these exotic simulations don't actually get run; they mostly just get reasoned about. If so, then as we go further into the future, U becomes increasingly obsolete.