TylerJay comments on False thermodynamic miracles - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks. I understand now. Just needed to sleep on it, and today, your explanation makes sense.
Basically, the AI's actions don't matter if the unlikely event doesn't happen, so it will take whatever actions would maximize its utility if the event did happen. This maximizes expected utility.
Maximizing [P(no TM) * C + P(TM) * u(TM, A)] is the same as maximizing u(A) under the assumption TM, since C is a constant that doesn't depend on A.
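This identity can be sketched numerically. A toy example (all probabilities, utilities, and action names below are made up for illustration): since the no-TM term contributes the same constant to every action's expected utility, the action maximizing total expected utility is the same one maximizing utility conditional on TM.

```python
# Toy sketch: expected utility = P(no TM) * C + P(TM) * u(action | TM),
# where C is a constant that does not depend on the action chosen.
p_tm = 1e-6   # hypothetical tiny probability of the thermodynamic miracle (TM)
C = 5.0       # hypothetical constant utility when no TM occurs, whatever the action

# Hypothetical utilities of three actions, conditional on TM occurring.
u = {"a1": 1.0, "a2": 3.0, "a3": 2.0}

def expected_utility(action):
    # The first term is identical for every action, so it never changes the argmax.
    return (1 - p_tm) * C + p_tm * u[action]

best_overall = max(u, key=expected_utility)   # argmax of full expected utility
best_under_tm = max(u, key=u.get)             # argmax of u(A) assuming TM
assert best_overall == best_under_tm          # same action either way
```

Because the P(no TM) * C term is a constant offset, it cancels out of any comparison between actions, which is exactly why the agent behaves as if TM were certain.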
Yes, that's a clear way of phrasing it.