Wei_Dai comments on Why might the future be good? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (14)
When you say "we all want the same outcome", do you mean we all want consequentialist systems, with our values and not subject to value drift, to be built before too much evolution has taken place? But many AGI researchers seem to prefer working on "heuristic soup" type designs (which makes sense if those AGI researchers are not themselves "properly consequentialist" and don't care strongly about long-range outcomes).
What I mean is that the kind of value-stable consequentialist that humans can build in the relevant time frame may be too inefficient to survive under competitive pressure from other cognitive/organizational architectures that will exist (even if it can survive as a singleton).