Constructively, (not ((not A) and (not B))) is weaker than (A or B). While you could call the former "A or B", you then have to come up with a new name for the latter.
I haven't been suggesting using (A or B) as a name for (not ((not A) and (not B))) in constructive logic, where they aren't equivalent. Rather, I have been suggesting using classical logic (where the above sentences are equivalent) with a constructivist interpretation, i.e. not distinguishing between "true" and "theorem". But since it is possible for (A or B) to be a theorem while both A and B are non-theorems, logical "or" would not have the same interpretation; in particular, it wouldn't match the common-language "or" (for when we say "A or B is true", we mean that one of them must indeed be true).
Wouldn't it still be possible for a constructivist to embrace classical logic and the theoremhood of TND? The constructivist would just have to admit that (A or B) could be true even if neither A nor B is true. (A or B) would still not be meaningless: its truth would imply that (not A) and (not B) cannot both be provable, so this reinterpretation of "or" doesn't seem to be a big deal.
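The asymmetry between the two sentences can be made concrete in a proof assistant. Below is a minimal Lean 4 sketch (the theorem names are my own): the direction from (A or B) to the double-negated form goes through constructively, while the converse needs a classical axiom.

```lean
-- Constructively provable: A ∨ B implies ¬(¬A ∧ ¬B).
theorem or_to_weak (A B : Prop) : A ∨ B → ¬(¬A ∧ ¬B) :=
  fun h ⟨na, nb⟩ => h.elim na nb

-- The converse has no purely constructive proof; here it leans on
-- classical case analysis (by_cases / Classical.byContradiction).
theorem weak_to_or (A B : Prop) : ¬(¬A ∧ ¬B) → A ∨ B := by
  intro h
  by_cases ha : A
  · exact Or.inl ha
  · exact Or.inr (Classical.byContradiction fun nb => h ⟨ha, nb⟩)
```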
Newcomb's problem doesn't rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents who can be predicted at least in certain situations, including the scenario with the boxes.
This was probably just me (how I read / what I think is interesting about Newcomb's problem). As I understand the responses, most people think the main point of Newcomb's problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix. I emphasized in my post that I take that as a given. I thought most about the question of whether you can successfully two-box at all, so that was the "point" of Newcomb's problem for me. To formalize this, say I replace the payoff matrix with 1000 / 1000, or even device A / device B, where device A corresponds to $1000 and device B corresponds to $1000, but device A + device B together correspond to $100,000 (e.g. they have a combined function).
I still don't understand why you would be so surprised if you saw Omega doing the trick a hundred times, assuming no stage magic. Do you find it so improbable that out of the hundred people Omega has questioned, not a single one had a quantum coin on him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don't carry quantum widgets.
Well, I thought about people actively resisting prediction, so some of them flipping a coin, or at least using a mental process with several recursion levels (I think that Omega thinks that I think...). I am pretty, though not absolutely, sure that these processes are partly quantum-random, or at least chaotic enough to be computationally intractable for everything within our universe. Omega would probably still do much better than random (except if everyone flips a coin; I am not sure whether that is predictable with computational power levels realizable in our universe).
> As I understand the responses, most people think the main point of Newcomb's problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix.
I am no expert on the history of Newcomb's problem, but I think it was specifically constructed as a counter-example to the common-sense decision-theoretic principle that one should treat past events as independent of the decisions being made now. That's also how it is most commonly interpreted on LW, although the concept of a near-omniscient predictor "Omega" is employed in a wide range of different thought experiments here, and it's possible that your objection is relevant to some of them.
I am not sure whether it makes sense to call one-boxing "cooperation". Newcomb's problem isn't the Prisoner's Dilemma, at least not in its original form.
ad 1: As I pointed out in my post twice, in this case he precommits to one-boxing and that's it, since, assuming atomic-resolution scanning and practically infinite processing power, he cannot hide his intention to cheat if he wants to two-box.
ad 2: You can; I did not. I suspect - as pointed out - that he could do that with his own brain too, but of course if so, Omega would know and still exclude him.
ad 3:
> First of all I want to point out that I would still one-box after seeing Omega predicting 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.
This assumed that I could somehow rule out stage magic. I did not say that; my mistake.
On terminology: see my response to shiminux. Yes, there is probably an aspect of fighting the hypo, but I think not primarily, since I think it is rather interesting to establish that you can prevent being predicted in a Newcomb-like problem.
OK, I understand now that your point was that one can in principle avoid being predicted. But to put it as an argument proving the irrelevance or incoherence of Newcomb's problem (I'm not entirely sure that I understand correctly what you meant by "dissolve", though) is very confusing and prone to misinterpretation.
Holding that the efficacy of homeopathics can never be established with any reasonable certainty != assigning a success chance of 50%.
Tell that to the hypothetical obscurantist.
Edit: I find it mildly annoying when, answering a comment or post, people point out obvious things whose relevance to the comment / post is dubious without further explanation. If you think that the non-equivalence of the mentioned beliefs somehow implies the impossibility of extrapolating obscurantist values, please elaborate. If you just thought that I might have committed a sloppy inference and it would be cool to correct me on it, please don't do that. It (1) derails the discussion to issues of uninteresting nitpickery and (2) motivates the commenters to clutter their comments with disclaimers in order to avoid being suspected of sloppy reasoning.
(1) Why would Joe intend to use the random process in his decision? I'd assume that he wants the million dollars much more than he wants to prove Omega's fallibility (and that only with a 50% chance).
(2) Even if Joe for whatever reason prefers proving Omega's fallibility, you can stipulate that Omega gives the quest only to people without semitransparent mirrors at hand.
(3) How is this
> First of all I want to point out that I would still one-box after seeing Omega predicting 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.
compatible with this
> So I would be very very VERY surprised if I saw Omega pull this trick 100 times in a row and I could somehow rule out Stage Magic (which I could not).
(emphasis mine)?
Note about terminology: on LW, dissolving a question usually refers to explaining that the question is confused (there is no answer to it as it is stated), together with pointing out the reasons why such a question seems sensible at first sight. What you are doing is not dissolving the problem; it's rather fighting the hypo.
I think Matt's point is that under essentially all seriously proposed versions of induction currently in existence, the technique he described constitutes a valid inductive inference, therefore, in at least the cases where hypothesis testing works, we don't have to worry about resolving the different approaches.
Couldn't this be said about any inductive method, at least in cases when the method works?
Always good to be reminded that different people find different things obvious and, for exactly this reason, a little redundancy doesn't hurt in the first case!
To answer your second question: an obscurantist might want to act as if they did not know certain propositions, but CEV extrapolates desires on the basis of knowledge that might include those same propositions, the ignorance of which constitutes a core part of the obscurantist's identity.
There are obscurantists who wear their obscurantism as attire, proudly claiming that it is impossible to know whether God exists. It can be said, perhaps, that such an obscurantist has a preference for not knowing the answer to the question, for never storing a belief of "God does (not) exist" in his brain. But still, all the obscurantist's decisions are the same as if he believed that there is no God - the obscurantist belief has no influence on other preferences. In such a case, you may well argue that the extrapolated volition of the obscurantist is to act as if he knew the answer and that the obscurantist beliefs are therefore shattered. But this is also true for his non-extrapolated volition. If the non-extrapolated volition already ignores the obscurantist belief and can coexist with it, why is this possibility excluded for the extrapolated volition? Because of the "coherent" part? Does coherence of volition require that one is not mistaken about one's actual desires? (This is an honest question; I think that "volition" refers to the set of desires, which is to be made coherent by extrapolation in the case of CEV, but that it doesn't refer to beliefs about the desires. But I haven't been interested in CEV that much and may be mistaken about this.)
The more interesting case is an obscurantist who holds obscurantism as a worldview with real consequences. To stay with things that are plausible (I am not sure whether this kind of obscurantist exists in non-negligible numbers), imagine a woman who holds that the efficacy of homeopathics can never be established with any reasonable certainty. Now she may get cancer and have two possible treatments: a conventional one, with a 10% chance of success, and a homeopathic one, with a 0.1% chance (equal to that of a spontaneous remission). But, in accordance with her obscurantism, she believes that assigning anything except 50% to homeopathy working would mean that we know the answer here, and since we can't know, homeopathy indeed has a 50% chance of success.
Acting on these beliefs, she opts for the homeopathic treatment. One of her desires is to survive, which upon extrapolation leads to choosing the conventional treatment, thus creating a conflict with her actual decision. But isn't it plausible that another of her desires, namely to always decide as if the chance of homeopathy working were 50%, is strong enough to survive the extrapolation and take precedence over the desire to survive? People have died for their beliefs many times.
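The conflict can be stated as a tiny decision calculation. This is only a sketch using the hypothetical numbers from the example above (not real medical statistics): under her stated beliefs homeopathy looks like the better bet, while extrapolating the same desire to survive with corrected knowledge reverses the choice.

```python
# Hypothetical success probabilities from the thought experiment above.
p_success = {"conventional": 0.10, "homeopathic": 0.001}   # actual chances
believed = {"conventional": 0.10, "homeopathic": 0.50}     # her obscurantist beliefs

# Decision based on her stated beliefs: pick the treatment she believes works best.
choice_actual_beliefs = max(believed, key=believed.get)

# Decision after extrapolation: same desire to survive, corrected knowledge.
choice_extrapolated = max(p_success, key=p_success.get)

print(choice_actual_beliefs)   # homeopathic
print(choice_extrapolated)     # conventional
```

The open question in the comment is exactly whether the second line of reasoning is the one CEV should run, or whether the desire encoded in the `believed` table is itself part of what gets extrapolated.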
First of all, is the existence of such an agent implausible? Not really, considering there are masochists out there and that, to some individuals, ignorance is bliss.
Why argue for plausibility of something when it clearly exists? I have personally met several people who fit your definition of obscurantist and I don't doubt that you have too.
How much, then, will be left of an obscurantist's identity upon coherently extrapolating their desires? The answer is probably not much, if anything at all.
Is there some argument for the probable answer? I don't find it obvious.
Is it just me, or is non-consensual sex obviously a bad thing? And by bad, I mean orders of magnitude worse than consensual sex is good. It would take an awful lot of happy sex to make up for non-consensual sex, and I support social policies that prevent non-consensual sex at more than whatever ratio of happy sex would be of equivalent utility (you can't just support preventing non-consensual sex, because "nobody has sex ever" prevents non-consensual sex).
Banning Dalits from going within 96 feet of Namboothiris does much more harm to the Dalits than the Namboothiris' feelings of ritual pollution are worth. This isn't the case with non-consensual sex. Furthermore, the feelings of ritual pollution can be avoided without Dalit cooperation, by the simple expedient of having Namboothiri-only isolated communities.
"Obviously bad" isn't a utilitarian justification.
To play the Devil's advocate:
(Disclaimer: I think that caste society is unjust and I don't actually wish to change our society to be more rape-tolerant. But I am no utilitarian. This comment is a warning against creating fake utilitarian explanations of moral judgements made on non-utilitarian grounds.)