Normal_Anomaly comments on Open Thread: September 2011 - LessWrong

5 Post author: Pavitra 03 September 2011 07:50PM




Comment author: Armok_GoB 03 September 2011 08:29:30PM 1 point [-]

I keep running into problems with various versions of what I internally refer to as the "placebo paradox", and can't find a solution that doesn't lead to Regret Of Rationality. Simple example follows:

You have an illness from which you'll either recover or die. Due to the placebo effect/positive thinking, the probability of recovering is exactly half of what you estimate it to be. Before learning this you have 80% confidence in your recovery. Since you estimate 80%, your actual chance is 40%, so you update to that. Since the estimate is now 40%, the actual chance is 20%, so you update again. Then it's 10%, so you update to that, and so on, until both your estimated and actual chance of recovery are 0. Then you die.
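The update spiral above can be sketched in a few lines. This is a minimal model (the function name and parameters are illustrative, not from the comment), assuming the rule that the actual recovery chance is always half the current estimate and that an honest agent keeps updating its estimate to match:

```python
def update_spiral(initial_estimate, steps):
    """Track an honest agent's estimate as it repeatedly updates.

    The world (by assumption) makes the actual recovery chance half
    of whatever the agent currently estimates; the agent, being honest,
    then adopts that actual chance as its new estimate.
    """
    estimate = initial_estimate
    history = [estimate]
    for _ in range(steps):
        actual = estimate / 2  # placebo rule: actual chance = half the estimate
        estimate = actual      # honest updating: believe the actual chance
        history.append(estimate)
    return history

print(update_spiral(0.8, 4))  # [0.8, 0.4, 0.2, 0.1, 0.05]
```

Each pass halves the estimate, so the sequence converges to 0: the only self-consistent belief under this rule is certainty of death.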

An irrational agent, on the other hand, upon learning this could self-delude into 100% certainty of recovery, and thereby have a 50% chance of actually recovering.
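The contrast between the two agents comes down to a fixed-point condition. A hedged sketch (hypothetical helper names, same assumed "actual chance = estimate / 2" rule as above): the honest agent's only self-consistent belief p satisfies p = p/2, i.e. p = 0, while the self-deluding agent simply pins its belief at 1:

```python
def actual_chance(believed):
    # Assumed placebo rule: the world delivers half the believed probability.
    return believed / 2

# Honest agent: the only belief consistent with the world it induces is 0.
honest_belief = 0.0
assert actual_chance(honest_belief) == honest_belief  # fixed point, but fatal

# Self-deluding agent: holds belief 1.0 regardless of the evidence.
deluded_belief = 1.0
print(actual_chance(deluded_belief))  # 0.5
```

The deluded agent's belief is never consistent with reality, yet it yields a 50% recovery chance against the honest agent's 0%, which is exactly the apparent regret-of-rationality.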

This is actually causing me real-world problems, such as an inability to use techniques based on positive thinking, and a lot of cognitive dissonance.

Another version of this problem features in HP:MoR, in the scene where Harry is trying to influence the behaviour of Dementors.

And to show this isn't JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.

Comment author: Normal_Anomaly 04 September 2011 07:02:32PM 0 points [-]

I think one way to avoid having to call this regret of rationality would be to see optimism as deceiving, not yourself, but your immune system. The fact that the human body acts differently depending on the person's beliefs is a problem with human biology, which should be fixed. If Omega does the same thing to an AI, Omega is harming that AI, and the AI should try to make Omega stop it.

Comment author: Armok_GoB 04 September 2011 09:02:57PM 0 points [-]

Well, deceiving something else by means of deceiving yourself still involves doublethink. It's the same as saying humans should not try to be rational.

Comment author: Normal_Anomaly 04 September 2011 10:12:42PM 4 points [-]

It's saying that it may be worth sacrificing accuracy (after first knowing the truth, so you know whether to deceive yourself!) in order to deceive another agent: your immune system. It's still important to be rational in order to decide when to be irrational: the truth still has to pass through your mind at some point if you are to behave optimally.

On another note, you may benefit from reciting the Litany of Tarski:

If lying to myself can sometimes be useful, I want to believe that lying to myself can sometimes be useful.

If lying to myself cannot be useful, I want to believe that lying to myself cannot be useful.

Let me not become attached to beliefs I may not want.

Comment author: Armok_GoB 04 September 2011 10:26:44PM 0 points [-]

I know my brain is a massively parallel neural network with only smooth fitness curves, and certainly isn't running an outdated version of Microsoft Windows, but from how it's behaving in response to this, you couldn't tell. I'm a sucky rationalist. :(