jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (202)
Roko combined the concept with the (rather less sensible) idea of promoting those instrumental values into terminal values - and was met with a chorus of "Unfriendly AI".
Hollerith produced several pages on the topic.
Probably the best-known continuation is via Omohundro.
"Universal Instrumental Values" is much the same idea as "Basic AI drives" dressed up a little differently:
http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/
http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/
Interesting; I hadn't seen Hollerith's posts before. I came to a similar conclusion about AIXI's behavior as an example of a final attractor for intelligent systems with long planning horizons.
If the horizon is long enough (infinite), the single behavioral attractor is maximizing computational power and applying it towards extensive universal simulation/prediction.
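To make that concrete, here is a rough sketch of AIXI's standard expectimax action selection (paraphrasing Hutter's definition, not anything stated in the comment itself), with m the planning horizon. Whatever the reward terms are, evaluating this expression amounts to simulating and weighting every environment program consistent with the agent's history, so as m grows the bulk of the work is universal prediction, which is the attractor described above:

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where U is a universal Turing machine, q ranges over environment programs consistent with the interaction history so far, and $\ell(q)$ is the length of q.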
This relates to simulism and the Simulation Argument, as any superintelligences/gods can thus be expected to create many simulated universes, regardless of their final goal evaluation criteria.
In fact, perhaps the final goal criteria themselves concern creating new universes with the desired properties.