Logical counterfactuals are when you say something like "Suppose $P$ were true, what would that imply?"

They play an important role in logical decision theory.

Suppose you take a false proposition $P$ and then take a logical counterfactual in which $P$ is true. I am imagining this counterfactual as a function $C_P$ from statements to $[0,1]$ that sends counterfactually true statements to 1 and counterfactually false statements to 0.

Suppose $P$ is "$\neg$ Fermat's last theorem". In the counterfactual where Fermat's last theorem is false, I would still expect $2+2=4$. Perhaps not with measure 1, but close. So $C_P(2+2=4) \approx 1$.

On the other hand, I would expect trivial rephrasings of Fermat's last theorem to be false in this counterfactual, or at least mostly false.
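Putting the two desiderata together, writing FLT out as its usual statement (the notation $C_P$ is mine from above):

$$C_P(2+2=4)\approx 1, \qquad C_P\left(\forall n>2\;\forall a,b,c\in\mathbb{Z}^{+}:\ a^n+b^n\neq c^n\right)\approx 0.$$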

But does this counterfactual produce a specific counter-example? Does it think that $C_P(a^n + b^n = c^n) = 1$ for some particular numbers $a, b, c, n$ with $n > 2$? Or does it do something where the counterfactual insists a counter-example exists, but spreads probability over many possible counter-examples? Or does it act as if there is a nonstandard-number counterexample?
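To make the "spread probability over many possible counter-examples" option concrete, here is a minimal sketch. Everything in it is an illustrative assumption: the bound `N`, the uniform weighting, and the tuple representation of candidate counter-examples.

```python
from itertools import product

# Toy model of a counterfactual that insists a counter-example to FLT
# exists but spreads belief over many candidates. The bound N and the
# uniform weights are arbitrary choices made purely for illustration.
N = 20
candidates = [(a, b, c, n)
              for a, b, c, n in product(range(1, N), repeat=4)
              if n > 2]

# C_P assigns each specific claim "a^n + b^n = c^n" a tiny equal share:
prob = {t: 1 / len(candidates) for t in candidates}

# No single candidate is believed, but their disjunction has measure ~1:
assert abs(sum(prob.values()) - 1.0) < 1e-9
```

Each option in the list above corresponds to a different shape for this distribution: a specific counter-example is a point mass, and the nonstandard-number option assigns all its mass outside any such finite table.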

How would I compute the value of $C_P(Q)$ in general, for an arbitrary statement $Q$?

 

Suppose you are an LDT agent trying to work out whether to cooperate or defect in a prisoner's dilemma.

What does the defect counterfactual look like? Is it basically the same as reality, except that you in particular defect? (So exact clones of you defect, and any agent that knows your exact source code and is running detailed simulations of you will defect.)

Or is it broader than that: is this a counterfactual world in which all LDT agents defect in prisoner's-dilemma-like situations in general? Is this a counterfactual world in which a bunch of Homo erectus defected against each other, and then all went extinct, leaving a world without humans?

All of the thought about logical counterfactuals I have seen so far is on toy problems that divide the world into Exact-simulations-of-you and Totally-different-from-you. 

I can't see any clear idea about what to do with agents that are vaguely similar to you, but not identical.

2 Answers

JBlack


Truly logical counterfactuals really only make sense in the context of bounded rationality: that is, cases where a proposition is logically necessary, but the agent cannot determine this within their resource bounds. Essentially all aspects of bounded rationality lack a satisfactory treatment as yet.

The prisoner's dilemma question does not appear to require dealing with logical counterfactuals. It is not logically contradictory for two agents to make different choices in the same situation, or even for the same agent to make different decisions given the same situation, though the setup of some scenarios may make this very unlikely or even direct you to ignore such possibilities.

There is a model of bounded rationality: logical induction. Can that be used to handle logical counterfactuals?
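One natural thing to try (a sketch of an idea, not a known-good construction): take a logical inductor's prices at some finite stage, where a false-but-unrefuted $P$ can still have nonzero price, and define $C_P$ by ordinary conditioning. The `beliefs` table and its numbers below are made-up stand-ins for the inductor's prices.

```python
# Made-up stand-in for a logical inductor's stage-n prices on sentences.
# At a finite stage, a false but hard proposition P can still be priced
# above zero, so conditioning on it is at least well defined.
beliefs = {
    ("P",):     0.30,  # price of P (false, but not yet refuted)
    ("Q",):     0.90,  # price of Q
    ("P", "Q"): 0.28,  # price of the conjunction P & Q
}

def counterfactual(p: str, q: str) -> float:
    """C_p(q), defined here as the conditional price: price(p & q) / price(p)."""
    return beliefs[(p, q)] / beliefs[(p,)]

print(counterfactual("P", "Q"))  # ~0.93 under these made-up prices
```

Whether this deserves to be called a counterfactual is part of the open question: as later stages drive the price of $P$ toward 0, the conditional becomes unstable.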

If two logical decision theory agents with perfect knowledge of each other's source code play a prisoner's dilemma, theoretically they should cooperate.

LDT uses logical counterfactuals in its decision making.

If the agents are CDT, then logical counterfactuals are not involved.
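To make the self-referential structure concrete, here is a toy "cooperate iff I predict you cooperate with me" bot. The depth bound and the optimistic base case are arbitrary choices, and they are exactly the kind of bounded-rationality concession the reply below points at; this illustrates the shape of the problem, it is not an implementation of LDT.

```python
def ldt_like(me, opponent, depth=3):
    """Cooperate iff a bounded simulation predicts the opponent cooperates.
    `depth` cuts off the mutual-simulation regress; returning "C" at the
    base case is an arbitrary, optimistic assumption."""
    if depth == 0:
        return "C"
    return "C" if opponent(opponent, me, depth - 1) == "C" else "D"

def defect_bot(me, opponent, depth=3):
    return "D"

print(ldt_like(ldt_like, ldt_like))    # "C": two copies cooperate
print(ldt_like(ldt_like, defect_bot))  # "D": but it defects on a defector
```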

JBlack
If they have source code, then they are not perfectly rational and cannot in general implement LDT. They can at best implement a boundedly rational subset of LDT, which will have flaws.

Assume the contrary: then each agent can verify that the other implements LDT, since perfect knowledge of the other's source code includes the knowledge that it implements LDT. In particular, each can verify that the other's code implements a consistent system that includes arithmetic, and can run the other on their own source to consequently verify that they themselves implement a consistent system that includes arithmetic. This is not possible for any consistent system. The only way consistency can be preserved is that at least one of them cannot actually verify that the other has a consistent deduction system including arithmetic. So at least one of those agents is not an LDT agent with perfect knowledge of the other's source code.

We can in principle assume perfectly rational agents that implement LDT, but they cannot be described by any algorithm, and we should be extremely careful in making suppositions about what they can deduce about each other and themselves.
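The core of this argument compresses into a short derivation. As a sketch, writing $S_A$ and $S_B$ for the two agents' deduction systems (my labels):

$$\begin{aligned}
&S_A \vdash \mathrm{Con}(S_B) && \text{(A verifies B's code implements a consistent system with arithmetic)}\\
&S_A \vdash \mathrm{Con}(S_B)\rightarrow\mathrm{Con}(S_A) && \text{(A runs B on A's own source, internalizing B's verification of A)}\\
&S_A \vdash \mathrm{Con}(S_A) && \text{(modus ponens)}
\end{aligned}$$

which contradicts Gödel's second incompleteness theorem if $S_A$ is consistent.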

ektimo


This seems like two questions:

  1. Can you make up mathematical counterfactuals and propagate the counterfactual to unrelated propositions? (I'd guess no. If you are just breaking a conclusion somewhere, you can't propagate it by any rules unless you specify what those rules are, in which case you have just made up a different mathematical system; see the derivation after this list.)
  2. Does the identical-twin one-shot prisoner's dilemma only work if you are functionally identical, or can you be a little different, and is there anything meaningful that can be said about this? (I'm interested in this one also.)
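On question 1, one standard way to see the difficulty (my gloss, not part of the answer above): if you keep the ordinary inference rules and simply add $\neg\mathrm{FLT}$ as an axiom, explosion propagates the contradiction to every proposition, so some rules must change, and then you are in a different system:

$$\mathrm{FLT},\ \neg\mathrm{FLT}\ \vdash\ Q \quad \text{for every sentence } Q \qquad \text{(ex falso quodlibet)}.$$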
Viliam

> Does the identical-twin one-shot prisoner's dilemma only work if you are functionally identical, or can you be a little different, and is there anything meaningful that can be said about this?

I guess it depends on how much the parts that make you "a little different" are involved in your decision making.

If you can put it in numbers, for example: I believe that if I choose to cooperate, my twin will choose to cooperate with probability p; and if I choose to defect, my twin will defect with probability q; also I care about the well-being of my twin with a coe...
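For concreteness, the calculation this comment is heading toward might look like the following sketch. The payoff numbers and the caring coefficient `alpha` are invented for illustration; `p` and `q` are as defined in the comment.

```python
# Standard prisoner's dilemma payoffs to one player; invented numbers.
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

def expected_utility(action, p, q, alpha):
    """My payoff plus alpha times my twin's payoff, where
    p = P(twin cooperates | I cooperate), q = P(twin defects | I defect)."""
    if action == "C":
        return p * (R + alpha * R) + (1 - p) * (S + alpha * T)
    else:
        return q * (P + alpha * P) + (1 - q) * (T + alpha * S)

# With strong correlation and mild caring, cooperation wins:
print(expected_utility("C", p=0.95, q=0.95, alpha=0.2))  # 3.47
print(expected_utility("D", p=0.95, q=0.95, alpha=0.2))  # 1.39
```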

Donald Hobson
  And here the main difficulty pops up again. There is no causal connection between your choice and their choice; any correlation is a logical one. So imagine I make a copy of you, but the copying machine isn't perfect: a random 0.001% of neurons are deleted. Also, you know you aren't a copy. How would you calculate the probabilities p and q, even in principle?