All of incogn's Comments + Replies

incogn-20

The values of A, C and P are all equivalent. You insist on making CDT determine C in a model where it does not know these are correlated. This is a problem with your model.

incogn-20

This only shows that the model is no good, because the model does not respect the assumptions of the decision theory.

incogn-20

Decision theories do not compute what the world will be like. Decision theories select the best choice, given a model with this information included. How the world works is not something a decision theory figures out; it is not a physicist, and it has no means to perform experiments outside of its current model. You need to take care of that yourself, and build it into your model.

If a decision theory had the weakness that certain possible scenarios could not be modeled, that would be a problem. Any decision theory will have the feature that it works with the model it is given, not with the model it should have been given.

incogn-10

You are applying a decision theory to the node C, which means you are implicitly stating: there are multiple possible choices to be made at this point, and this decision can be made independently of nodes not in front of this one. This means that your model does not model the Newcomb's problem we have been discussing - it models another problem, where C can have values independent of P, which is indeed solved by two-boxing.

It is not the decision theory's responsibility to know that the value of node C is somehow supposed to retrospectively alter the state o... (read more)

0nshepperd
Yes. That's basically the definition of CDT. That's also why CDT is no good. You can quibble about the word, but in "the literature", 'CDT' means just that.
2Creutzer
You don't promote C to the action node; it is the action node. That's the way the decision problem is specified: do you one-box or two-box? If you don't accept that, then you're talking about a different decision problem. But in Newcomb's problem, the algorithm is trying to decide that. It's not trying to decide which algorithm it should be (or should have been). Having the algorithm pretend - as a means of reaching a decision about C - that it's deciding which algorithm to be is somewhat reminiscent of the idea behind TDT and has nothing to do with CDT as traditionally conceived of, despite the use of causal reasoning.
incogn10

Could you maybe try to give a straight answer to this: what is your problem with my model above? It accurately models the situation. It allows CDT to give a correct answer. It does not superficially resemble the word-for-word statement of Newcomb's problem.

Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.

You are trying to use a decision theory to determine which choice an agent should make, after the agent has already had its algorithm fixed, which causally determines which choice the agent must make. Do you honestly blame that on CDT?

3Creutzer
No, it does not, that's what I was trying to explain. It's what I've been trying to explain to you all along: CDT cannot make use of the correlation between C and P. CDT cannot reason backwards in time. You do know how surgery works, don't you? In order for CDT to use the correlation, you need a causal arrow from C to P - that amounts to backward causation, which we don't want. Simple as that.

I'm not sure what the meaning of this is. Of course the decision algorithm is fixed before it's run, and therefore its output is predetermined. It just doesn't know its own output before it has computed it. And I'm not trying to figure out what the agent should do - the agent is trying to figure that out. Our job is to figure out which algorithm the agent should be using.

PS: The downvote on your post above wasn't from me.
incogn-20

If you apply CDT at T=4 with a model which builds in the knowledge that the choice C and the prediction P are perfectly correlated, it will one-box. The model is exceedingly simple:

  • T'=0: Choose either C1 or C2
  • T'=1: If C1, then gain 1000. If C2, then gain 1.

This excludes the two other impossibilities, C1P2 and C2P1, since these violate the correlation constraint. CDT makes a wrong choice when these two are included, because then you have removed the information of the correlation constraint from the model, changing the problem to one in which Omega is not a predictor.
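As an illustration only (not from the original comment), the constrained model can be written out in a few lines of Python; the payoffs for the two excluded pairs are assumed standard Newcomb values in the same units as above:

```python
# Sketch of the constrained model: Omega is a perfect predictor, so only
# correlated (choice, prediction) pairs are treated as possible.

payoff = {  # (choice, prediction) -> gain; C1/P1 = one-box, C2/P2 = two-box
    ("C1", "P1"): 1000,  # one-box, predicted one-box
    ("C1", "P2"): 0,     # excluded by the correlation constraint
    ("C2", "P1"): 1001,  # excluded by the correlation constraint
    ("C2", "P2"): 1,     # two-box, predicted two-box
}

def consistent(choice, prediction):
    """Perfect correlation: the prediction always matches the choice."""
    return choice[1] == prediction[1]

values = {}
for choice in ("C1", "C2"):
    # Under the constraint, each choice leaves exactly one possible world.
    values[choice] = max(v for (c, p), v in payoff.items()
                         if c == choice and consistent(c, p))

print(values)                       # {'C1': 1000, 'C2': 1}
print(max(values, key=values.get))  # 'C1' -- one-boxing wins
```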

What is your problem with this model?

0Creutzer
Okay, so I take it to be the defining characteristic of CDT that it makes use of counterfactuals. So far, I have been arguing on the basis of a Pearlean conception of counterfactuals, and then this is what happens: your causal network has three variables, A (the algorithm used), P (Omega's prediction), and C (the choice). The causal connections are A -> P and A -> C. There is no causal connection between P and C. Now the CDT algorithm looks at counterfactuals with the antecedent C1. In a Pearlean picture, this amounts to surgery on the C-node, so no inference contrary to the direction of causality is possible. Hence, whatever the value of the P-node, it will seem to the CDT algorithm not to depend on the choice. Therefore, even if the CDT algorithm knows that its choice is predetermined, it cannot make use of that in its decision, because it cannot update contrary to the direction of causality.

Now it turns out that natural language counterfactuals work very much like Pearl's counterfactuals, but not quite: they allow a limited amount of backtracking contrary to the direction of causality, depending on a variety of psychological factors. So if you had a theory of counterfactuals that allowed backtracking in a case like Newcomb's problem, then a CDT algorithm employing that conception of counterfactuals would one-box. The trouble would of course be to correctly state the necessary conditions for backtracking. The messy and diverse psychological and contextual factors that seem to be at play in natural language won't do.
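For concreteness, here is a small sketch of the difference just described, on the three-node network A -> P, A -> C; the uniform prior over algorithms is an assumption added purely for illustration:

```python
# Conditioning vs. Pearl-style surgery on the network A -> P, A -> C,
# where both P (prediction) and C (choice) deterministically copy A (algorithm).

p_A = {"one-boxer": 0.5, "two-boxer": 0.5}  # assumed prior over algorithms

def joint():
    """Joint distribution over (A, P, C): P = A and C = A."""
    for a, pa in p_A.items():
        yield (a, a, a), pa

def p_prediction_given_C(c):
    """Ordinary conditioning: P(P = c | C = c)."""
    num = sum(p for (a, pred, ch), p in joint() if ch == c and pred == c)
    den = sum(p for (a, pred, ch), p in joint() if ch == c)
    return num / den

def p_prediction_given_do_C(c):
    """Surgery: cut A -> C and set C := c; P still only depends on A."""
    return sum(p for (a, pred, ch), p in joint() if pred == c)

print(p_prediction_given_C("one-boxer"))     # 1.0 -- conditioning sees the correlation
print(p_prediction_given_do_C("one-boxer"))  # 0.5 -- surgery cannot
```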
incogn00

If you take a careful look at the model, you will realize that the agent has to be precommitted, in the sense that what he is going to do is already fixed. Otherwise, the step at T=1 is impossible. I do not mean that he has precommitted himself consciously to win at Newcomb's problem, but trivially, a deterministic agent must be precommitted.

It is meaningless to apply any sort of decision theory to a deterministic system. You might as well try to apply decision theory to the balls in a game of billiards, which assign high utility to remaining on the table but... (read more)

0Creutzer
Yes, it is. The point is that you run your algorithm at T=4, even if it is deterministic and therefore its output is already predetermined. Therefore, you want an algorithm that, executed at T=4, returns one-boxing. CDT simply does not do that. Ultimately, it seems that we're disagreeing about terminology. You're apparently calling something CDT even though it does not work by surgically altering the node for the action under consideration (that action being the choice of box, not the precommitment at T<1) and then looking at the resulting expected utilities.
incogn00

Playing prisoner's dilemma against a copy of yourself is mostly the same problem as Newcomb's. Instead of Omega's prediction being perfectly correlated with your choice, you have an identical agent whose choice will be perfectly correlated with yours - or, possibly, randomly distributed in the same manner. If you can also assume that both copies know this with certainty, then you can do the exact same analysis as for Newcomb's problem.

Whether you have a prediction made by an Omega or a decision made by a copy really does not matter, as long as they both are automatically going to be the same as your own choice, by assumption in the problem statement.
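A minimal sketch of that reduction (the payoff numbers are the usual illustrative prisoner's dilemma values, not anything from the comment):

```python
# Prisoner's dilemma against a perfect copy: the copy's move is perfectly
# correlated with yours, so only the diagonal outcomes are reachable.

pd_payoff = {  # (my move, copy's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

reachable = {move: pd_payoff[(move, move)] for move in ("C", "D")}
print(reachable)                          # {'C': 3, 'D': 1}
print(max(reachable, key=reachable.get))  # 'C' -- cooperation wins, as in Newcomb's
```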

-2private_messaging
The copy problem is well specified, though. Unlike the "predictor". I clarified more in private. The worst part about Newcomb's is that all the ex-religious folks seem to substitute something they formerly knew as 'god' for the predictor. The agent can also be further specified; e.g. as a finite Turing machine made of cogs and levers and tape with holes in it. The agent can't simulate itself directly, of course, but it knows some properties of itself without simulation. E.g. it knows that in the alternative that it chooses to cooperate, its initial state was in set A - the states that result in cooperation; in the alternative that it chooses to defect, its initial state was in set B - the states that result in defection; and that no state is in both sets.
incogn10

Excellent.

I think laughably stupid is a bit too harsh. As I understand things, confusion regarding Newcomb's leads to new decision theories, which in turn makes the smoking lesion problem interesting, because the new decision theories introduce new, critical weaknesses in order to solve Newcomb's problem. I do agree, however, that the smoking lesion problem is trivial if you stick to a sensible CDT model.

0private_messaging
The problems with EDT are quite ordinary... it's looking for good news, and also, it is kind of under-specified (e.g. some argue it'd two-box in Newcomb's after learning physics). A decision theory cannot be disqualified for giving the 'wrong' answer in the hypothetical that 2*2=5, or in the hypothetical that (a or not a) = false, or in the hypothetical that the decision is simultaneously controlled by the decision theory, and set, without involvement of the decision theory, by the lesion (and a random process if the correlation is imperfect).
incogn-10

We do, by and large, agree. I just thought, and still think, the terminology is somewhat misleading. This is probably not a point I should press, because I have no mandate to dictate how words should be used, and I think we understand each other, but maybe it is worth a shot.

I fully agree that some values in the past and future can be correlated. This is more or less the basis of my analysis of Newcomb's problem, and I think it is also what you mean by imposing constraints on the past light cone. I just prefer to use different words for backwards correlati... (read more)

-1private_messaging
I'd be the first to agree on terminology here. I'm not suggesting that choice of the box causes money in the box, simply that those two are causally connected, in the physical sense. The whole issue seems to stem from taking the word 'causal' from causal decision theory, and treating it as more than a mere name, bringing in enormous amounts of confused philosophy which doesn't capture very well how physics works.

When deciding, you evaluate hypotheticals of you making different decisions. A hypothetical is like a snapshot of the world state. Laws of physics very often have to be run backwards from the known state to deduce the past state, and then forwards again to deduce the future state. E.g. a military robot sees a hand grenade flying into its field of view; it calculates the motion backwards to find where it was thrown from, finding the location of the grenade thrower, then uses a model of the grenade thrower to predict another grenade in the future.

So, you process the hypothetical where you picked up one box, to find how much money you get. You have the known state: you picked one box. You deduce that the past state of the deterministic you must have been Q, which results in picking up one box; a copy of that state has been made, and that state resulted in a prediction of 1 box. You conclude that you get 1 million. You do the same for picking 2 boxes: the previous state must be R, etc., and you conclude you get 1000. You compare, and you pick the universe where you get 1 box.

(And with regard to the "smoking lesion" problem: the smoking lesion postulates a blatant logical contradiction - it postulates that the lesion affects the choice, which contradicts that the choice is made by the agent we are speaking of. As a counterexample to a decision theory, it is laughably stupid.)
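A rough sketch of that backwards-then-forwards evaluation (the state labels Q and R and the dollar amounts follow the comment; the rest is an illustrative assumption):

```python
# Evaluate each hypothetical choice by running "physics" backwards to the
# pre-copy state, then forwards through Omega's copy to the box contents.

def pre_copy_state(choice):
    """Backwards step: the deterministic initial state that yields this choice."""
    return "Q" if choice == "one-box" else "R"

def omega_prediction(state):
    """Forwards step: Omega ran a copy of that state, so its prediction matches."""
    return "one-box" if state == "Q" else "two-box"

def payoff(choice, prediction):
    box1 = 1_000_000 if prediction == "one-box" else 0
    box2 = 1_000
    return box1 if choice == "one-box" else box1 + box2

for choice in ("one-box", "two-box"):
    prediction = omega_prediction(pre_copy_state(choice))
    print(choice, payoff(choice, prediction))  # one-box 1000000, two-box 1000
```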
incogn-30

I agree with the content, though I am not sure if I approve of a terminology where causation traverses time like a two-way street.

2private_messaging
Underlying physics is symmetric in time. If you assume that the state of the world is such that one box is picked up by your arm, that imposes constraints on both the future and the past light cone. If you do not process the constraints on the past light cone, then your simulator state does not adhere to the laws of physics, namely, the decision arises out of thin air by magic. If you do process constraints fully, then the action to take one box requires a pre-copy state of "you" that leads to the decision to pick one box, which requires money in one box; the action to take 2 boxes likewise, after processing constraints, requires no money in the first box. ("you" is a black box which is assumed to be non-magical, copyable, and deterministic, for the purpose of the exercise.)

edit: came up with an example. Suppose 'you' is a robotics controller; you know you're made of various electrical components, and you're connected to the battery and some motors. You evaluate a counterfactual where you put a current onto a wire for some time. Constraints imposed on the past: the battery has been charged within the last 10 hours, because otherwise it couldn't supply enough current. If the constraints contradict known reality, then you know you can't do this action. Suppose there's a replacement battery pack 10 meters away from the robot, and the robot is unsure whether the packs were swapped 5 hours ago; in the alternative that they haven't been, it would not have enough charge to get to the extra pack; in the alternative that they have been swapped, it doesn't need to get to the spent extra pack. Evaluating the hypothetical where it got to the extra pack, it knows the packs have been swapped in the past and the extra pack is spent. (Of course for simplicity one can do all sorts of stuff, such as electrical currents coming out of nowhere, but outside the context of philosophical speculation the cause of the error is very clear.)
incogn00

I tend to agree with mwengler - value is not a property of physical objects or world states, but a property of an observer having unequal preferences for different possible futures.

There is a risk we might be disagreeing because we are working with different interpretations of emotion.

Imagine a work of fiction involving no sentient beings, not even metaphorically - can you possibly write a happy or tragic ending? Is it not first when you introduce some form of intelligence with preferences that destruction becomes bad and serenity good? And are not preferences for this over that the same as emotion?

incogn-10

I do not want to make estimates on how and with what accuracy Omega can predict. There is not nearly enough context available for this. Wikipedia's version has no detail whatsoever on the nature of Omega. There seems to be enough discussion to be had, even with the perhaps impossible assumption that Omega can predict perfectly, always, and that this can be known by the subject with absolute certainty.

incogn10

I do not think the standard usage is well defined, and avoiding these terms altogether is not possible, seeing as they are in the definition of the problem we are discussing.

Interpretations of the words and arguments for the claim are the whole content of the ancestor post. Maybe you should start there instead of quoting snippets out of context and linking unrelated fallacies? Perhaps, by specifically stating the better and more standard interpretations?

incogn50

Then I guess I will try to leave it to you to come up with a satisfactory example. The challenge is to give Omega Newcomb-like predictive power while substantiating how Omega achieves this, and while still passing your own standard that the subject makes a choice from his own point of view. It is very easy to accidentally create paradoxes in mathematics, by assuming mutually exclusive properties for an object, and the best way to discover these is generally to see if it is possible to construct or find an instance of the object described.

I don't think it... (read more)
0Creutzer
But isn't this precisely the basic idea behind TDT? The algorithm you are suggesting goes something like this: choose that action which, if it had been predetermined at T=0 that you would take it, would lead to the maximal-utility outcome. You can call that CDT, but it isn't. Sure, it'll use causal reasoning for evaluating the counterfactual, but not everything that uses causal reasoning is CDT. CDT is surgically altering the action node (and not some precommitment node) and seeing what happens.
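A small sketch contrasting the two rules being distinguished here (the payoffs and the 50% prior are illustrative assumptions; the predictor is taken to be perfect):

```python
# CDT-style surgery on the action node vs. the "as if predetermined at T=0" rule.

P_PRIOR_ONEBOX_PREDICTION = 0.5  # assumed prior; surgery leaves it untouched

def utility(choice, prediction):
    box1 = 1_000_000 if prediction == "one-box" else 0
    return box1 if choice == "one-box" else box1 + 1_000

def cdt_value(choice):
    """Surgery on the action node: the prediction does not depend on the choice."""
    p = P_PRIOR_ONEBOX_PREDICTION
    return p * utility(choice, "one-box") + (1 - p) * utility(choice, "two-box")

def predetermined_value(choice):
    """'If it had been predetermined at T=0': the perfect predictor sees it."""
    return utility(choice, prediction=choice)

print(max(("one-box", "two-box"), key=cdt_value))            # two-box
print(max(("one-box", "two-box"), key=predetermined_value))  # one-box
```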
-2private_messaging
Well, a practically important example is a deterministic agent which is copied, and then the copies play prisoner's dilemma against each other. There you have agents that use physics. Those, when evaluating hypothetical choices, use some model of physics, where an agent can model itself as a copyable deterministic process which it can't directly simulate (i.e. it knows that the matter inside its head obeys known laws of physics). In the hypothetical that it cooperates, after processing the physics, it is found that the copy cooperates; in the hypothetical that it defects, it is found that the copy defects.

And then there are philosophers. The worse ones don't know much about causality. They presumably have some sort of ill-specified oracle that we don't know how to construct, which will tell them what is a 'consequence' and what is a 'cause', and they'll only process the 'consequences' of the choice as the 'cause'. This weird oracle tells us that the other agent's choice is not a 'consequence' of the decision, so it can not be processed. It's very silly and not worth spending brain cells on.
incogn20

The post scav made more or less represents my opinion here. Compatibilism, choice, free will and determinism are too many vague definitions for me to discuss with. For compatibilism to make any sort of sense to me, I would need a new definition of free will. It is already difficult to discuss how stuff is, without simultaneously having to discuss how to use and interpret words.

Trying to leave the problematic words out of this, my claim is that the only reason CDT ever gives a wrong answer in a Newcomb's problem is that you are feeding it the wrong model. h... (read more)

incogn-20

I think the barbering example is excellent - it illustrates that, while controlled experimentation is more or less physics, and while physics is great, it is probably not going to bring a paradigm shift to barbering any time soon. One should not expect all domains to be equally well suited to a cut-and-dried scientific approach.

Where medicine lies on this continuum of suitedness is an open question - it is probably even a misleading question, with medicine being a collection of vastly different problems. However, it is not at all obvious that simply turning up... (read more)

0A1987dM
Huh? What evidence are homoeopathy and crystal healing and similar (assuming that's what Qiaochu_Yuan meant by “other kinds”) based on? EDIT: Apparently not.
incogn40

If you interpret evidence-based in the widest sense possible, the phrase sort of loses its meaning. Note that the very post you quote explains the intended contrast between systematic and statistical use of evidence versus intuition and traditional, experience-based human learning.

Besides, would you not say that astrologers figure out how to be optimally vague, avoiding being wrong while exciting their readers, in much the same way musicians figure out what sounds good?

0A1987dM
Yes, but “intuition and traditional, experience-based human learning” is probably much less reliable in medicine than it is in barbering, so the latter isn't a good example in a discussion about the former. :-) Something similar could be said about practitioners of alternative medicine, though.
incogn20

Ironically, this whole exchange might have been a bit more constructive with less taking of offense.

incogn-10

I think I agree, by and large, despite the length of this post.

Whether choice and predictability are mutually exclusive depends on what choice is supposed to mean. The word is not exactly well defined in this context. In some sense, "if variable > threshold then A, else B" is a choice.

I am not sure where you think I am conflating. As far as I can see, perfect prediction is obviously impossible unless the system in question is deterministic. On the other hand, determinism does not guarantee that perfect prediction is practical or feasible. The computationa... (read more)

0linas
Yes. I was confused, and perhaps added to the confusion.
incogn200

Only in the sense that the term "pro-life" implies that there exist people opposed to life.

7MugaSofer
Opposed to all life? No. Opposed to specific, nonsentient life when weighed against the mother's choice? Yes.
1CCC
A perusal of murder and suicide statistics - even the fact that such statistics exist - suggests the conclusion that there may, in fact, exist some people opposed to life; sometimes their own, sometimes that of others.
5RomeoStevens
pro-life is an intentional misuse of ontology.
incogn30

Maybe he means something along the lines of: "same cause, same effect" is just a placeholder for "as long as all the things which matter stay the same, you get the same effect". After all, some things, such as time since man invented fire and position relative to Neptune and so on and so forth, cannot possibly be the same for two different events. And this in turn sort of means "things which matter -> same effect" is a circular definition. Maybe he means to say that the law of causality is not the actually useful principle for making predictions, while there are indeed repeatable experiments and useful predictions to be made.

0B_For_Bandana
Hmm. Yeah, that makes sense. (And very nicely put!)
incogn100

(Thanks for discussing!)

I will address your last paragraph first. The only significant difference between my original example and the proper Newcomb's paradox is that, in Newcomb's paradox, Omega is made a predictor by fiat and without explanation. This allows perfect prediction and choice to sneak into the same paragraph without obvious contradiction. It seems that, if I try to make the mode of prediction transparent, you protest that there is no choice being made.

From Omega's point of view, its Newcomb subjects are not making choices in any substantial sense, they... (read more)

0private_messaging
Regarding illegal choices, the transparent variation makes it particularly clear: you can't take both boxes if you see a million in the first box, and take 1 box otherwise. You can walk backwards from your decision to the point where a copy of you had been made, and then forward to the point where the copy is processed by Omega, to find the relation of your decision to the box state causally.
0Creutzer
I probably wasn't expressing myself quite clearly. I think the difference is this: Newcomb subjects are making a choice from their own point of view. Your Johns aren't really making a choice even from their internal perspective: they just see if the cab arrives/if they're thirsty and then without deliberation follow what their policy for such cases prescribes. I think this difference is substantial enough intuitively so that the John cases can't be used as intuition pumps for anything relating to Newcomb's.

I don't think it is, actually. It just seems so because it presupposes that your own choice is predetermined, which is kind of hard to reason with when you're right in the process of making the choice. But that's a problem with your reasoning, not with the scenario. In particular, the CDT agent has a problem with conceiving of his own choice as predetermined, and therefore has trouble formulating Newcomb's problem in a way that he can use - he has to choose between getting two-boxing as the solution or assuming backward causation, neither of which is attractive.
3MugaSofer
Not if you're a compatibilist, which Eliezer is last I checked.
-2linas
I'm with incogn on this one: either there is predictability or there is choice; one cannot have both. Incogn is right in saying that, from Omega's point of view, the agent is purely deterministic, i.e. more or less equivalent to a computer program. Incogn is slightly off the mark in conflating determinism with predictability: a system can be deterministic, but still not predictable; this is the foundation of cryptography. Deterministic systems are either predictable or not. (Unless Newcomb's problem explicitly allows the agent to be non-deterministic, but this is unclear.)

The only way a deterministic system becomes unpredictable is if it incorporates a source of randomness that is stronger than the ability of a given intelligence to predict. There are good reasons to believe that there exist rather simple sources of entropy that are beyond the predictive power of any fixed super-intelligence -- this is not just the foundation of cryptography, but is generically studied under the rubric of 'chaotic dynamical systems'. I suppose you also have to believe that P is not NP. Or maybe I should just mutter 'Turing Halting Problem'. (Unless Omega is taken to be a mythical comp-sci "oracle", in which case you've pushed decision theory into that branch of set theory that deals with cardinal numbers larger than the continuum, and I'm pretty sure you are not ready for the dragons that lie there.)

If the agent incorporates such a source of non-determinism, then Omega is unable to predict, and the whole paradox falls down. Either Omega can predict, in which case EDT, or Omega cannot predict, in which case CDT. Duhhh. I'm sort of flabbergasted, because these points seem obvious to me ... the Newcomb paradox, as given, seems poorly stated.
incogn00

I am not sure where our disagreement lies at the moment.

Are you using choice to signify strong free will? Because that means the hypothetical Omega is impossible without backwards causation, leaving us at (b) but not (a), and the whole of Newcomb's paradox moot. Whereas, if you include in Newcomb's paradox that the choice of two-boxing will actually cause the big box to be empty, and that the choice of one-boxing will actually cause the big box to contain a million dollars by a mechanism of backwards causation, then any CDT model will solve the problem.

Perhaps... (read more)

0Creutzer
I'm not entirely sure either. I was just saying that a causal decision theorist will not be moved by Wildberger's reasoning, because he'll say that Wildberger is plugging in the wrong probabilities: when calculating an expectation, CDT uses not conditional probability distributions but surgically altered probability distributions. You can make that result in one-boxing if you assume backwards causation.

I think the point we're actually talking about (or around) might be the question of how CDT reasoning relates to your (a). I'm not sure that the causal decision theorist has to grant that he is in fact interpreting the problem as "not (a) but (b)". The problem specification only contains the information that so far, Omega has always made correct predictions. But the causal decision theorist is now in a position to spoil Omega's record, if you will. Omega has already made a prediction, and whatever the causal decision theorist does now isn't going to change that prediction. The fact that Omega's predictions have been absolutely correct so far doesn't enter into the picture. It just means that for all agents x that are not the causal decision theorist, P(x does A|Omega predicts that x does A) = 1 (and the same for B, and whatever value other than 1 you might want for an imperfect predictor Omega).

About the way you intend (a), the causal decision theorist would probably say that's backward causation and refuse to accept it. One way of putting it might be that the causal decision theorist simply has no way of reasoning with the information that his choice is predetermined, which is what I think you intend to convey with (a). Therefore, he has no way of (hypothetically) inferring Omega's prediction from his own (hypothetical) action (because he's only allowed to do surgery, not conditionalization).

No, actually. Just the occurrence of a deliberation process whose outcome is not immediately obvious. In both your examples, that doesn't happen: John's behavior simply depends o
incogn00

Thanks for the link.

I like how he just brute-forces the problem with (simple) mathematics, but I am not sure if it is a good thing to deal with a paradox without properly investigating why it seems to be a paradox in the first place. It is sort of like saying that, in this super convincing card trick you have seen, there is actually no real magic involved, without taking time to address what seems to require magic and how it is done mundanely.

incogn50

I do not agree that CDT must conclude that P(A)+P(B) = 1. The argument only holds if you assume the agent's decision is perfectly unpredictable, i.e. that there can be no correlation between the prediction and the decision. This contradicts one of the premises of Newcomb's Paradox, which assumes an entity with exactly the power to predict the agent's choice. Incidentally, this reduces to the (b) but not (a) from above.

By adopting my (a) but not (b) from above, i.e. Omega as a programmer and the agent as predictable code, you can easily see that P(A)+P(B)... (read more)

-1Creutzer
But that's not CDT reasoning. CDT uses surgery instead of conditionalization; that's the whole point. So it doesn't look at P(prediction = A|A), but at P(prediction = A|do(A)) = P(prediction = A). Your example with the cab doesn't really involve a choice at all, because John's going to work is effectively determined completely by the arrival of the cab.
incogn130

I don't really think Newcomb's problem or any of its variations belongs in here. Newcomb's problem is not a decision theory problem; the real difficulty is translating the underspecified English into a payoff matrix.

The ambiguity comes from the combination of the two claims: (a) Omega being a perfect predictor, and (b) the subject being allowed to choose after Omega has made its prediction. Either these two are inconsistent, or they necessitate further unstated assumptions such as backwards causality.

First, let us assume (a) but not (b), which can be for... (read more)

0owencb
I think this is a very clear account of the issues with these problems. I like your explanations of how correct model choice leads to CDT getting it right all the time; similarly it seems correct model choice should let EDT get it right all the time. In this light CDT and EDT are really heuristics for how to make decisions with simplified models.
1patrickscottshields
Thanks for this post; it articulates many of the thoughts I've had on the apparent inconsistency of common decision-theoretic paradoxes such as Newcomb's problem. I'm not an expert in decision theory, but I have a computer science background and significant exposure to these topics, so let me give it a shot.

The strategy I have been considering in my attempt to prove a paradox inconsistent is to prove a contradiction using the problem formulation. In Newcomb's problem, suppose each player uses a fair coin flip to decide whether to one-box or two-box. Then Omega could not have a sustained correct prediction rate above 50%. But the problem formulation says Omega does; therefore the problem must be inconsistent. Alternatively, Omega knew the outcome of the coin flip in advance; let's say Omega has access to all relevant information, including any supposed randomness used by the decision-maker. Then we can consider the decision to already have been made; the idea of a choice occurring after Omega has left is illusory (i.e. deterministic; anyone with enough information could have predicted it).

Admittedly, as you say quite eloquently: In this case of the all-knowing Omega, talking about what someone should choose after Omega has left seems mistaken. The agent is no longer free to make an arbitrary decision at run-time, since that would have backwards causal implications; we can, without restricting which algorithm is chosen, require the decision-making algorithm to be written down and provided to Omega prior to the whole simulation. Since Omega can predict the agent's decision, the agent's decision does determine what's in the box, despite the usual claim of no causality. Taking that into account, CDT doesn't fail after all.

It really does seem to me like most of these supposed paradoxes of decision theory have these inconsistent setups. I see that wedrifid says of coin flips: I would love to hear from someone in further detail on these issues of consistency. Have t
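A quick simulation of the coin-flip argument (purely illustrative; it assumes the coin is flipped after the prediction and is invisible to Omega):

```python
import random

# If the choice is a fair coin flip that Omega cannot observe, no fixed
# prediction rule sustains an accuracy above 50% on average.

random.seed(0)
trials = 100_000
correct = 0
for _ in range(trials):
    prediction = "one-box"                          # any fixed prediction rule
    choice = random.choice(["one-box", "two-box"])  # fair coin, independent of the prediction
    correct += (prediction == choice)

print(correct / trials)  # ~0.5
```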
1Amanojack
I agree; wherever there is paradox and endless debate, I have always found ambiguity in the initial posing of the question. An unorthodox mathematician named Norman Wildberger just released a new solution by unambiguously specifying what we know about Omega's predictive powers.