twanvl comments on Decision Theories: A Less Wrong Primer - Less Wrong

69 Post author: orthonormal 13 March 2012 11:31PM




Comment author: wedrifid 15 March 2012 11:06:07AM * 0 points

edit: ahh, wait, the EDT is some pretty naive theory that cannot even process anything as complicated as evidence for causality working in our universe. Whatever then, a thoughtless approach leads to thoughtless results, end of story. The correct decision theory should be able to control for a pre-existing lesion when it makes sense to do so.

I think you've got it. Pure EDT and CDT really just are that stupid - and irredeemably so, because agents implementing them will not want to learn how to replace their decision strategy (beyond resolving themselves to their respective predetermined stable outcomes). Usually when people think either of them is a good idea, it is because they have been incidentally supplementing and subverting it with a whole lot of their own common sense!

Comment author: twanvl 15 March 2012 08:08:54PM 0 points

Usually when people think either of them is a good idea, it is because they have been incidentally supplementing and subverting it with a whole lot of their own common sense!

As a person who (right now) thinks that EDT is a good idea, could you help enlighten me?

Wikipedia states that under EDT the action with the maximum value is chosen, where value is determined as V(A) = Σ_{outcomes O} P(O|A) · U(O). The agent can encode knowledge about how the universe works into P(O|A), right?
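That rule can be sketched directly in code. This is a hypothetical illustration of the formula only; the action/outcome names and probabilities below are made up, not from the thread:

```python
def edt_value(action, outcomes, p, u):
    """V(A) = sum over outcomes O of P(O|A) * U(O)."""
    return sum(p[(o, action)] * u[o] for o in outcomes)

def edt_choose(actions, outcomes, p, u):
    """EDT picks the action with the highest evidential expected value."""
    return max(actions, key=lambda a: edt_value(a, outcomes, p, u))

# Toy example (assumed numbers): P(win|a) = 0.7, P(win|b) = 0.4,
# U(win) = 10, U(lose) = 0, so V(a) = 7 and V(b) = 4.
p = {("win", "a"): 0.7, ("lose", "a"): 0.3,
     ("win", "b"): 0.4, ("lose", "b"): 0.6}
u = {"win": 10, "lose": 0}
print(edt_choose(["a", "b"], ["win", "lose"], p, u))  # -> a
```

Note that the conditional probabilities P(O|A) are simply looked up, not derived from a causal model - which is exactly the property the smoking lesion problem below exploits.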

Now the smoking lesion problem. It can be formalized as something like this:

U(smoking) = 1
U(cancer) = -100000
P(cancer | lesion) > P(cancer | !lesion)
P(smoking | lesion) > P(smoking | !lesion)
P(cancer | lesion&smoking) = P(cancer | lesion&!smoking) = P(cancer | lesion)
P(cancer | !lesion&smoking) = P(cancer | !lesion&!smoking) = P(cancer | !lesion)

I think the tricky part is P(smoking | lesion) > P(smoking | !lesion), because this puts a probability on something that the agent gets to decide. Since probabilities are about uncertainty, and the agent would be certain about its own actions, this makes no sense.

Is that the main problem with EDT?

Actually the known fact is more like P(X smoking | X lesion), the probability of any agent with a lesion deciding to smoke. From this the agent will have to derive P(me smoking | me lesion). If the agent is an average human being, then they would be equal. But if the agent is special because he uses some specific decision theory or utility function, he should only look at a smaller reference class. I think in this way you get quite close to TDT/UDT.