shokwave comments on Open Thread: September 2011 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I keep running into problems with various versions of what I internally refer to as the "placebo paradox", and can't find a solution that doesn't lead to Regret Of Rationality. Simple example follows:
You have an illness from which you'll either recover or die. Due to the placebo effect/positive thinking, the probability of recovering is exactly half of what you estimate it to be. Before learning this you have 80% confidence in your recovery. Since you estimate 80%, your actual chance is 40%, so you update to this. Since the estimate is now 40%, the actual chance is 20%, so you update to this. Then it's 10%, so you update to that, and so on, until both your estimated and actual chance of recovery are 0. Then you die.
An irrational agent, on the other hand, upon learning this could self-delude into 100% certainty of recovery, and so have a 50% chance of actually recovering.
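The spiral above, and the contrast with the self-deluding agent, can be sketched in a few lines (a minimal sketch, assuming the stated rule that the actual chance is always half the current estimate; `actual_chance` is just a name chosen for illustration):

```python
def actual_chance(estimate):
    """Actual recovery probability given the agent's estimated probability."""
    return estimate / 2.0

# Rational agent: repeatedly updates the estimate to the actual chance.
# The only fixed point of p = p/2 is p = 0, so the estimate spirals to 0.
estimate = 0.8  # initial 80% confidence
for _ in range(60):
    estimate = actual_chance(estimate)

# Irrational agent: self-deludes to certainty and never updates.
deluded_estimate = 1.0
deluded_actual = actual_chance(deluded_estimate)

print(estimate)        # effectively 0
print(deluded_actual)  # 0.5
```

The rational update rule has no consistent stopping point other than 0, which is exactly the paradox: following the evidence all the way down kills you, while refusing to update leaves you with even odds.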
This is actually causing me real-world problems, such as an inability to use techniques based on positive thinking, and a lot of cognitive dissonance.
Another version of this problem features in HP:MoR, in the scene where Harry is trying to influence the behaviour of the Dementors.
And to show this isn't JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
Updating on the evidence of yourself updating is almost as much of a problem as updating on the evidence of "I updated on the evidence of myself updating". Tongue-in-cheek!
That is to say, the decision theory you are currently running is not equipped to handle the class of problems where your response to a problem is evidence that changes the nature of the very problem you are responding to - in the same way that arithmetic is not equipped to handle problems requiring calculus or CDT is not equipped to handle Omega's two-box problem.
(If it helps your current situation, placebo effects are almost always static modifiers on your scientific/medical chances of recovery)
Do you have a suggestion for a better decision theory, or a suggestion on how exactly I have misinterpreted TDT to cause my current problems?
Knowing that MIGHT help, but probably not in practice. Specifically, for every given instance of the problem I'd need to know a probability to assign which, once assigned, is also the actual chance.
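That requirement is a fixed-point condition: a probability p such that p = f(p), where f maps your estimate to your actual chance. A quick sketch, using a hypothetical instance (the rule 0.3 + 0.4 × estimate is an assumption invented for illustration, not from the thread) where a nonzero self-consistent probability exists:

```python
def actual_chance(estimate):
    # Hypothetical instance: actual chance depends on the estimate like this.
    return 0.3 + 0.4 * estimate

# Repeatedly updating the estimate to the actual chance converges on the
# fixed point, since the map is a contraction (slope 0.4 < 1).
p = 0.8
for _ in range(100):
    p = actual_chance(p)

# Analytically: p = 0.3 + 0.4p  =>  0.6p = 0.3  =>  p = 0.5
print(round(p, 6))  # 0.5
```

When such a fixed point exists, assigning it is the one estimate that survives its own updating; the original problem is pathological precisely because its only fixed point is 0.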