
DSimon comments on Strategic ignorance and plausible deniability - Less Wrong

Post author: Kaj_Sotala 10 August 2011 09:30AM (41 points)


Comment author: Davorak 11 August 2011 12:13:56AM -1 points

An additional necessary assumption seems to be that Alex cares about "Whatever can be destroyed by the truth, should be." He is selfish, but does his best to act rationally.

> Let's call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn't care about their partners' sexual health (or the knock-on effects of people in general not caring about their partners' sexual health) at all, then this is the right choice instrumentally.

Therefore Alex does not value knowing whether or not he has an STD, and instead pursues other knowledge.

> However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes or should believe that they have a low probability (at the demographic average) of carrying a sexual disease. If Alex got tested, then this belief would be destroyed one way or the other; if the test was positive, then the posterior probability goes way upwards, and if the test is negative, then it goes downwards a smaller but still non-trivial amount.

Alex is faced with the choice of getting an STD test and improving his probability estimate of his state of infection, or spending his time doing something he considers more valuable. He chooses not to get an STD test because the information is not very valuable to him, and focuses on more important matters.
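To make the size of that update concrete, here is a minimal sketch in Python; the base rate, sensitivity, and specificity below are made-up illustrative numbers, not real STD statistics:

```python
# Sketch of the belief update Alex is declining. All numbers are
# hypothetical: a 2% base rate, a test with 98% sensitivity and
# 95% specificity.

def posterior(prior, p_result_if_infected, p_result_if_healthy):
    """Bayes' rule: P(infected | test result)."""
    joint_infected = prior * p_result_if_infected
    joint_healthy = (1 - prior) * p_result_if_healthy
    return joint_infected / (joint_infected + joint_healthy)

prior = 0.02        # hypothetical demographic base rate
sensitivity = 0.98  # P(positive | infected)
specificity = 0.95  # P(negative | healthy)

p_if_positive = posterior(prior, sensitivity, 1 - specificity)
p_if_negative = posterior(prior, 1 - sensitivity, specificity)

print(f"P(infected | positive) = {p_if_positive:.3f}")  # ~0.286: way up from 0.02
print(f"P(infected | negative) = {p_if_negative:.4f}")  # ~0.0004: down, by a smaller absolute amount
```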

> Instead of doing this, Alex simply acts as though they already knew the results of the test to be negative in advance, and even goes on to spread the truth-destroyable-belief by encouraging others to take it on as well.

Alex is selfish and does not care that he is misleading people.

> By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high magnitude of impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational).

Avoiding the evidence would be irrational. Focusing on more important evidence is not. Alex is not doing a "crappy" job of finding out what is false; he has just maximized finding out the truths he cares about.
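The quoted notion of "magnitude of impact on your priors" can be made precise as the test's expected information gain: the KL divergence of posterior from prior, averaged over how likely each result is. A minimal sketch, reusing the made-up numbers from above:

```python
import math

PRIOR, SENS, SPEC = 0.02, 0.98, 0.95  # same hypothetical numbers as above

def posterior(prior, p_r_if_infected, p_r_if_healthy):
    return prior * p_r_if_infected / (
        prior * p_r_if_infected + (1 - prior) * p_r_if_healthy)

def kl_bits(p, q):
    """KL(Bernoulli(p) || Bernoulli(q)) in bits."""
    return sum(a * math.log2(a / b)
               for a, b in ((p, q), (1 - p, 1 - q)) if a > 0)

p_positive = PRIOR * SENS + (1 - PRIOR) * (1 - SPEC)  # P(test comes back positive)
post_pos = posterior(PRIOR, SENS, 1 - SPEC)
post_neg = posterior(PRIOR, 1 - SENS, SPEC)

expected_gain = (p_positive * kl_bits(post_pos, PRIOR)
                 + (1 - p_positive) * kl_bits(post_neg, PRIOR))
print(f"expected information gain: {expected_gain:.3f} bits")  # ~0.08 bits
```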

I tried to present a rational, selfish, uncaring Alex who chooses not to get an STD test, even though he cares deeply about "Whatever can be destroyed by the truth, should be," as far as his personal beliefs are concerned.

Comment author: DSimon 11 August 2011 03:01:15AM 2 points

> Avoiding the evidence would be irrational. Focusing on more important evidence is not.

This is a very good point. We cannot gather all possible evidence all the time, and trying to do so would certainly be instrumentally irrational.

Is the standard then that it's instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one's decisions?

Comment author: Davorak 11 August 2011 01:49:25PM 1 point

> Is the standard then that it's instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one's decisions?

It weighs into the decision, but it seems insufficient by itself. An experiment can change my decision radically but be on an unimportant topic, one that does not affect my ability to achieve my goals. It is possible to imagine spending one's time on experiments that change one's decisions and never getting close to achieving any goals. The vague answer seems to be: prioritize by how much the experiments are likely to help achieve one's goals.
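One crude way to operationalize that answer: score each candidate experiment by the probability its result changes your plan, times the value at stake if it does, minus the cost of running it, then rank. The helper and all the utilities below are invented for illustration:

```python
def expected_net_value(p_change, stakes, cost):
    """Crude value-of-information score: P(result changes my plan)
    times the value at stake if it does, minus the cost of running it."""
    return p_change * stakes - cost

experiments = {
    # flips a decision almost every time, but the decision barely matters
    "which trivia app to use": expected_net_value(p_change=0.9, stakes=1.0, cost=0.5),
    # rarely changes the plan, but the plan it could change is high-stakes
    "pilot study before a career switch": expected_net_value(p_change=0.1, stakes=300.0, cost=10.0),
}

for name, score in sorted(experiments.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.1f}")
# The high-stakes experiment wins (+20.0 vs +0.4) even though it is far
# less likely to change any decision.
```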