
Stuart_Armstrong comments on False thermodynamic miracles - Less Wrong Discussion

13 Post author: Stuart_Armstrong 05 March 2015 05:04PM



Comment author: Stuart_Armstrong 06 March 2015 12:21:49PM 3 points

C need not be a low constant, by the way. The only requirement is that u(false, action a, A) = u(false, action b, A) for all actions a and b and all A; i.e. nothing the AI does affects the utility of worlds where w is false, so those worlds do not constrain its actions.

Basically, the AI observes the ON signal going through and knows that either (a) the signal went through normally, or (b) the signal was overwritten, by coincidence, with exactly the same signal. Its actions have no consequences in the first case, so it ignores that case and acts "as if" it were certain a thermodynamic miracle had occurred.

Comment author: TylerJay 07 March 2015 03:38:31AM *  3 points

Thanks. I understand now. Just needed to sleep on it, and today, your explanation makes sense.

Basically, the AI's actions don't matter if the unlikely event doesn't happen, so it will take whatever actions would maximize its utility if the event did happen. This maximizes expected utility.

Maximizing [P(no TM) * C + P(TM) * u(TM, A)] is the same as maximizing u(A) under assumption TM.
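The equivalence can be sketched numerically. In this toy check (all actions, probabilities, and utilities are hypothetical, not from the post), the constant C term is the same for every action, so ranking actions by expected utility gives the same answer as ranking them by utility conditional on the thermodynamic miracle (TM):

```python
# Toy check: with u constant (= C) on no-miracle worlds, maximizing
# expected utility is equivalent to maximizing u conditional on the miracle.

C = 10.0      # constant utility of all no-miracle worlds (hypothetical)
p_tm = 1e-9   # probability of the thermodynamic miracle (hypothetical)

# Hypothetical utilities of each action, conditional on the miracle.
u_tm = {"a": 3.0, "b": 7.0, "c": 5.0}

def expected_utility(action):
    # P(no TM) * C + P(TM) * u(TM, action)
    return (1 - p_tm) * C + p_tm * u_tm[action]

best_by_eu = max(u_tm, key=expected_utility)   # argmax of expected utility
best_by_tm = max(u_tm, key=u_tm.get)           # argmax of u conditional on TM

assert best_by_eu == best_by_tm == "b"
```

The C term drops out of the comparison between actions, which is exactly why the AI's behavior is driven entirely by the miracle worlds, however improbable they are.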

Comment author: Stuart_Armstrong 09 March 2015 11:42:32AM 3 points

Maximizing [P(no TM) * C + P(TM) * u(TM, A)] is the same as maximizing u(A) under assumption TM.

Yes, that's a clear way of phrasing it.