jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 Post author: lukeprog 29 January 2011 02:52AM



Comment author: jacob_cannell, 31 January 2011 09:53:03PM, 0 points

Interesting; I hadn't seen Hollerith's posts before. I came to a similar conclusion about AIXI's behavior as exemplifying a final attractor for intelligent systems with long planning horizons.

If the planning horizon is long enough (in the limit, infinite), the single behavioral attractor is maximizing computational power and applying it toward extensive simulation and prediction of the universe.
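For reference, this is a sketch of Hutter's standard AIXI expectimax expression (notation simplified; x denotes observation-reward percepts, ξ the Solomonoff mixture over environments). The point about horizons is visible in the structure: the reward sum runs out to the horizon m, so as m grows, plans that first secure instrumental resources (compute, predictive accuracy) dominate the maximization.

```latex
% AIXI action selection at cycle k with horizon m (after Hutter, 2005).
% r(x_i) is the reward component of percept x_i; \xi is the universal
% (Solomonoff) mixture over computable environments.
a_k := \arg\max_{a_k} \sum_{x_k} \cdots \max_{a_m} \sum_{x_m}
       \bigl[\, r(x_k) + \cdots + r(x_m) \,\bigr]\;
       \xi\bigl(x_{k:m} \mid a_{1:m},\, x_{<k}\bigr)
```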

This relates to simulism and the simulation argument: any superintelligences/gods can thus be expected to create many simulated universes, regardless of their final goal evaluation criteria.

In fact, perhaps the final goal criteria apply to creating new universes with the desired properties.