Vaniver comments on The Fabric of Real Things - Less Wrong

Post author: Eliezer_Yudkowsky, 12 October 2012 02:11AM




Comment author: wedrifid, 13 October 2012 12:03:38AM (1 point)

P(intelligence is possible | causal universe) > P(intelligence is possible | acausal universe).

That seems plausible but I must admit that I don't know enough details about possible "acausal universes" to be particularly confident.

Comment author: Vaniver, 13 October 2012 05:10:26PM (2 points)

If intelligence is seen as optimization power (in the sense of agents that constrain possible futures to their benefit), then it seems clear that the rewards to intelligence are zero or negative in acausal universes. Intelligence should therefore be less likely to arise in such universes than in ones where it yields positive rewards.