my friends were ready to raise $100 so I would carry it out
Are you sure you want to call them "friends"? Willingness to pay to lower someone else's status isn't particularly friendly behaviour, even if the person "doesn't care" about status.
There's no way to do this, given the parameters of the situation, that won't generate a health hazard for yourself and those around you.
Consequentialism fail.
The health hazard would probably be worth less (in absolute value) than the discussed reward of $200. The PR hazard, on the other hand, would justify your bottom line.
Constructively, (not ((not A) and (not B))) is weaker than (A or B). While you could call the former "A or B", you then have to come up with a new name for the latter.
I haven't been suggesting using (A or B) as a name for (not ((not A) and (not B))) in constructive logic, where they aren't equivalent. Rather, I have been suggesting using classical logic (where the two sentences are equivalent) with a constructivist interpretation, i.e. making no distinction between "true" and "theorem". But since it is possible for (A or B) to be a theorem while both A and B are simultaneously non-theorems, logical "or" would not have the same interpretation; namely, it wouldn't match the common-language "or" (for when we say "A or B is true", we mean that one of them must indeed be true).
Wouldn't it still be possible for a constructivist to embrace classical logic and the theoremhood of TND? The constructivist would just have to admit that (A or B) could be true even if neither A nor B is true. (A or B) would still not be meaningless: its truth would imply that there is a proof of neither (not A) nor (not B), so this reinterpretation of "or" doesn't seem to be a big deal.
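For concreteness, the asymmetry under discussion can be checked in a proof assistant (a sketch in Lean 4; the theorem names are my own): the direction from (A or B) to (not ((not A) and (not B))) goes through constructively, while the converse needs a classical principle such as proof by contradiction.

```lean
-- Constructively provable: A ∨ B entails ¬(¬A ∧ ¬B).
theorem or_implies (A B : Prop) : A ∨ B → ¬(¬A ∧ ¬B) :=
  fun h hn => h.elim hn.1 hn.2

-- The converse is not constructively provable; it needs a classical
-- principle (here, Classical.byContradiction).
open Classical in
theorem converse_classical (A B : Prop) : ¬(¬A ∧ ¬B) → A ∨ B :=
  fun h => byContradiction fun hno =>
    h ⟨fun a => hno (Or.inl a), fun b => hno (Or.inr b)⟩
```

This is exactly the sense in which (not ((not A) and (not B))) is the weaker statement: it follows from (A or B), but not vice versa, unless classical reasoning is assumed.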
Newcomb's problem doesn't rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.
This was probably just me (how I read / what I think is interesting about Newcomb's problem). As I understand the responses, most people think the main point of Newcomb's problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix. I emphasized in my post that I take that as a given. I thought most about the question of whether you can successfully two-box at all, so this was the "point" of Newcomb's problem for me. To formalize this, say I replaced the payoff matrix by 1000 / 1000, or even by device A / device B, where device A corresponds to $1000, device B corresponds to $1000, but device A + device B together correspond to $100000 (e.g. they have a combined function).
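As an aside, the standard expected-value comparison behind the 1000000 / 1000 payoff matrix can be sketched as follows (a minimal illustration; the function name and the accuracy values are my own, not from the discussion):

```python
def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff when Omega predicts correctly with probability `accuracy`.

    The opaque box holds $1,000,000 iff Omega predicted one-boxing;
    the transparent box always holds $1,000.
    """
    if one_box:
        # Paid only when Omega correctly predicted one-boxing.
        return accuracy * 1_000_000
    # Two-boxing: $1,000 for sure, plus $1,000,000 when Omega guessed wrong.
    return 1_000 + (1 - accuracy) * 1_000_000

# A reasonably accurate predictor makes one-boxing come out ahead:
print(expected_value(True, 0.99) > expected_value(False, 0.99))   # True
# At chance-level accuracy, two-boxing dominates instead:
print(expected_value(True, 0.5) > expected_value(False, 0.5))     # False
```

The break-even point sits just above 50% accuracy, which is why the question of whether Omega can be forced down to chance-level prediction matters at all.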
I still don't understand why you would be so surprised if you saw Omega doing the trick a hundred times, assuming no stage magic. Do you find it so improbable that, out of the hundred people Omega has questioned, not a single one had a quantum coin on him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don't carry quantum widgets.
Well, I thought about people actively resisting prediction, so some of them flipping a coin, or at least using a mental process with several recursion levels (I think that Omega thinks that I think...). I am pretty, though not absolutely, sure that these processes are partly quantum-random, or at least chaotic enough to be computationally intractable for everything within our universe. Though Omega would probably still do much better than random (except if everyone flips a coin; I am not sure whether that is predictable with computational power levels realizable in our universe).
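The coin-flip strategy can be illustrated with a toy simulation (a hypothetical sketch; the agent name and trial count are my own, and the pseudorandom generator merely stands in for the genuinely quantum coin discussed above, which no amount of computing power could predict):

```python
import random

def coin_agent() -> str:
    """A hypothetical agent that decides by a fair coin flip."""
    return random.choice(["one-box", "two-box"])

random.seed(0)  # fixed seed for reproducibility
trials = 100_000

# Whatever model Omega builds of this agent, any fixed guess
# (say, "one-box") is right only about half the time:
hits = sum(coin_agent() == "one-box" for _ in range(trials))
print(hits / trials)  # close to 0.5
```

Against such an agent, Omega's accuracy is capped at chance level, which is exactly why the thought experiment usually stipulates agents who do not randomize.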
As I understand the responses most people think the main point of Newcomb's problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix.
I am no expert on the history of Newcomb's problem, but I think it was specifically constructed as a counter-example to the common-sense decision-theoretic principle that one should treat past events as independent of the decisions being made now. That's also how it is most commonly interpreted on LW, although the concept of a near-omniscient predictor "Omega" is employed in a wide range of different thought experiments here, and it's possible that your objection is relevant to some of them.
I am not sure whether it makes sense to call one-boxing cooperation. Newcomb's problem isn't the Prisoner's Dilemma, at least not in its original form.
ad 1: As I pointed out in my post twice, in this case he precommits to one-boxing and that's it, since, assuming atomic-resolution scanning and practically infinite processing power, he cannot hide his intention to cheat if he wants to two-box.
ad 2: You can; I did not. I suspect, as pointed out, that he could do that with his own brain too, but of course if so, Omega would know and still exclude him.
ad 3:
First of all I want to point out that I would still one-box after seeing Omega predicting 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.
This assumed that I could somehow rule out stage magic. I did not say that; my mistake.
On terminology: see my response to shiminux. Yes, there is probably an aspect of fighting the hypo, but I think not primarily, since I think it is rather interesting to establish that you can avoid being predicted in a Newcomb-like problem.
OK, I understand now that your point was that one can in principle avoid being predicted. But to put it as an argument proving the irrelevance or incoherence of Newcomb's problem (I'm not entirely sure that I understand correctly what you meant by "dissolve", though) is very confusing and prone to misinterpretation. Newcomb's problem doesn't rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.
Holding that the efficacy of homeopathic remedies can never be established with any reasonable certainty != assigning them a 50% chance of success.
Tell that to the hypothetical obscurantist.
Edit: I find it mildly annoying when, answering a comment or post, people point out obvious things whose relevance to the comment / post is dubious, without further explanation. If you think that the non-equivalence of the mentioned beliefs somehow implies the impossibility of extrapolating obscurantist values, please elaborate. If you just thought that I might have committed a sloppy inference and it would be cool to correct me on it, please don't do that. It (1) derails the discussion into uninteresting nitpickery and (2) motivates commenters to clutter their comments with disclaimers in order to avoid being suspected of sloppy reasoning.
(1) Why would Joe intend to use the random process in his decision? I'd assume that he wants the million dollars much more than to prove Omega's fallibility (and the latter only with a 50% chance).
(2) Even if Joe for whatever reason prefers proving Omega's fallibility, you can stipulate that Omega gives the quest only to people without semitransparent mirrors at hand.
(3) How is this
First of all I want to point out that I would still one-box after seeing Omega predicting 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.
compatible with this
So I would be very very VERY surprised if I saw Omega pull this trick 100 times in a row and I could somehow rule out Stage Magic (which I could not).
(emphasis mine)?
Note about terminology: on LW, dissolving a question usually refers to explaining that the question is confused (there is no answer to it as it is stated) together with pointing out the reasons why such a question seems sensible at the first sight. What you are doing is not dissolving the problem, it's rather fighting the hypo.
I think Matt's point is that under essentially all seriously proposed versions of induction currently in existence, the technique he described constitutes a valid inductive inference, therefore, in at least the cases where hypothesis testing works, we don't have to worry about resolving the different approaches.
Couldn't this be said about any inductive method, at least in cases when the method works?
Is it just me, or is non-consensual sex obviously a bad thing? And by bad, I mean orders of magnitude worse than consensual sex is good. It would take an awful lot of happy sex to make up for non-consensual sex, and I support social policies that prevent non-consensual sex even at the cost of preventing up to that equivalent-utility ratio of happy sex (you can't just support preventing non-consensual sex, because "nobody has sex ever" prevents non-consensual sex).
Banning Dalits from going within 96 feet of Namboothiris does much more harm to Dalits than the Namboothiris' feelings of ritual pollution are worth. This isn't the case with non-consensual sex. Furthermore, the feelings of ritual pollution can be avoided without Dalit cooperation, by the simple expedient of having Namboothiri-only isolated communities.
"Obviously bad" isn't a utilitarian justification.
To play the Devil's advocate:
(Disclaimer: I think that caste society is unjust and I don't actually wish to change our society to be more rape-tolerant. But I am no utilitarian. This comment is a warning against creating fake utilitarian explanations of moral judgements made on non-utilitarian grounds.)