I think my core issue with the above is the nature of the specification of the problem of "replacing p in q". Allowing passing to an arbitrary equivalent program before replacing exact instances of p seems overly permissive, and to allow in exactly the kind of principle-of-explosion issue that logical counterfactuals have. Suppose for instance that p and q both halt with a defined value, say p = 5. As framed above, I can take q' to be a program that computes q + k*(p - 5) (for some constant k), which is equivalent to q; but the result of substituting all exact instances of p in q' with some other value c is q + k*(c - 5), which can be made to output anything at all by a suitable choice of k.
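To make the failure mode concrete, here is a minimal sketch in Python (the toy expression language, the particular values, and all names are mine, purely for illustration):

```python
# Toy expression language: programs are nested tuples.
# ("lit", n) is a literal; ("add", a, b), ("sub", a, b), ("mul", a, b) are arithmetic.

def evaluate(expr):
    if expr[0] == "lit":
        return expr[1]
    a, b = evaluate(expr[1]), evaluate(expr[2])
    return {"add": a + b, "sub": a - b, "mul": a * b}[expr[0]]

def substitute(expr, target, replacement):
    """Replace every exact syntactic instance of `target` inside `expr`."""
    if expr == target:
        return replacement
    if expr[0] == "lit":
        return expr
    return (expr[0],) + tuple(substitute(e, target, replacement) for e in expr[1:])

p = ("add", ("lit", 2), ("lit", 3))   # p halts with value 5
q = ("lit", 7)                        # q halts with value 7

# q' computes q + k*(p - 5): extensionally equivalent to q, since p - 5 == 0 ...
k = 10**6
q_prime = ("add", q, ("mul", ("lit", k), ("sub", p, ("lit", 5))))
assert evaluate(q_prime) == evaluate(q) == 7

# ... yet replacing exact instances of p with 6 in q' computes 7 + k*(6 - 5):
# an output controlled entirely by the choice of k.
print(evaluate(substitute(q_prime, p, ("lit", 6))))   # 1000007
```

The equivalence-passing step does all the damage: q' agrees with q everywhere, but its syntactic instances of p carry arbitrary weight under substitution.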
I agree that that is a problem that both this approach to counterfactuals and the FDT logical-counterfactual approach share. The particular problem I was hoping this approach avoids was the one of assuming mutually-exclusive logical facts, such that all-but-one of them must necessarily be false, and the implications this has for the agent's consistency and reasoning about its actions. Are you saying that they are the same problem, that the second problem is comparatively not worth solving, or something else?
I have indeed read many of those posts already (though I appreciate some reference to them in the original post would have been sensible, I apologise). Chris_Leong's Deconfusing Logical Counterfactuals comes pretty close to this - the counterfactual model I'm interested in corresponds to their notion of "Raw Counterfactual", but AFAICT they're going in a somewhat different direction with the notion of "erasure" (I don't think it should be necessary to forget that you've seen a full box in the transparent variant of Newcomb's problem, if you explicitly construct the counterfactual rather than conditioning on that observation).
Ah, got there. From □(A=cross → U=+10) and □(A=cross → U=-10), we get specifically □(A=cross → ⊥) and thus □¬(A=cross). But we have A=cross directly as a theorem (axiom?) about the behaviour of the agent, and we can lift this to □(A=cross), so also □(A=cross ∧ ¬(A=cross)) and thus □⊥.
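Spelling that out in one place (my own rendering, with □ for PA-provability and the lift justified by provable Σ1-completeness):

```latex
\begin{align*}
&\square(A{=}\mathrm{cross} \to U{=}{+}10) \wedge \square(A{=}\mathrm{cross} \to U{=}{-}10)
  \;\Rightarrow\; \square\neg(A{=}\mathrm{cross})
  && \text{($U$ cannot take both values)}\\
&A{=}\mathrm{cross} \;\Rightarrow\; \square(A{=}\mathrm{cross})
  && \text{(provable $\Sigma_1$-completeness)}\\
&\square\neg(A{=}\mathrm{cross}) \wedge \square(A{=}\mathrm{cross})
  \;\Rightarrow\; \square\bot
  && \text{(PA proves a contradiction)}
\end{align*}
```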
I'm having difficulty following the line of the proof beginning "so, either way, PA is inconsistent". We have □(A=cross → U=+10) and □(A=cross → U=-10), which together imply that □¬(A=cross), but I'm not immediately seeing how this leads to □⊥?
In fact, all you know is that your credence of event H is somewhere in the interval [0.4, 0.6]
This really isn't how I understand credences to work. Firstly, they don't take ranges, and secondly, they aren't dictated to me by the background information; they're calculated from it. This isn't immediately fatal, because you can say something like:
The coin was flipped one quintillion times, and the proportion of times it came up heads was A, where A lies in the range [0.4, 0.6]
This is something you could actually tell me, and it would have the effect that my credence in H comes out as my prior expectation of A: a single number, not a range.
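Concretely, the calculation I have in mind (the prior π over A is my assumption; nothing in the quoted setup fixes it):

```latex
P(H \mid I) \;=\; \mathbb{E}[A \mid I] \;=\; \int_{0.4}^{0.6} a\,\pi(a)\,\mathrm{d}a
\;=\; 0.5 \quad \text{for any } \pi \text{ symmetric about } 0.5.
```

The interval lives in my uncertainty about A; the credence itself is still a point value.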
Again, I agree that the problem of identifying what logical structures (wherever they occur) count as implementing a particular function is a deep and interesting one, and not one that I am claiming to have solved. But again, I do not agree that it is a problem I have introduced? An FDT agent correctly inferring the downstream causal results of setting FDT(P, G) = a would, in general, have to identify FDT(P, G) being computed inside a Game of Life simulation, if and where such a calculation occurred.
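As a toy illustration of why any purely syntactic notion of "being computed inside" falls short (all names here are mine, not from the FDT paper or the original post):

```python
import inspect

def fdt(p, g):
    # Stand-in for the FDT(P, G) computation; the body is arbitrary.
    return (p + g) % 3

def world_a(p, g):
    # This world computes the decision by literally calling the function.
    return fdt(p, g) * 2

def world_b(p, g):
    # Extensionally identical world: the same computation inlined, as it
    # might appear unrolled inside a Game of Life simulation.
    return ((p + g) % 3) * 2

def naive_detector(fn):
    """Purely syntactic check: does the source text literally call `fdt`?"""
    return "fdt(" in inspect.getsource(fn)

assert all(world_a(p, g) == world_b(p, g) for p in range(5) for g in range(5))
print(naive_detector(world_a))   # True
print(naive_detector(world_b))   # False: the same computation goes undetected
```

Any detector keyed to the literal text of the call is defeated by an extensionally identical rewrite, which is exactly the Game of Life worry.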
While I am indeed interested in exploring the answer to that question...