Vaniver comments on The Fabric of Real Things - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That seems plausible but I must admit that I don't know enough details about possible "acausal universes" to be particularly confident.
If intelligence is seen as optimization power (in the sense of agents that constrain possible futures for their benefit), then it seems clear that the rewards to intelligence are zero or negative in acausal universes, and so intelligent agents should be less likely to arise there than in universes where those rewards are positive.