
My initial reaction is to find that aggravating and to try to devise another experiment that would let me poke at the universe by exploiting the Predictor, but that would likely be sidestepped by the same tactic. So we can generalize: any experiment you come up with that involves the Predictor and would yield evidence about the temporal direction of causation will be sidestepped so as to give you no new information.

But intuitively, it seems like this condition itself gives new information in the paradox, and I haven't yet wrapped my head around what evidence can be drawn from it.

On another note, even if causality always flows forward, humans might be insufficiently affected by nondeterministic phenomena to produce significantly nondeterministic behavior, at least at the time scale we're talking about. If so, human reasoning may have approximate t-symmetry over short time scales, and this could be exploited to "violate causality" with respect to humans without actually violating causality with respect to the universe at large.

This means I have a more general hypothesis, "human reasoning causality can be violated", for which a violation of causality in general would be strong evidence, but the non-violation of causality only weak counter-evidence. And in learning of the Predictor's success, I have observed evidence strongly supporting this hypothesis.
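
To put that asymmetry in rough Bayesian terms (a sketch in my own notation, with H = "human reasoning causality can be violated" and E = the Predictor's track record): the posterior odds are P(H|E) / P(¬H|E) = [P(E|H) / P(E|¬H)] × [P(H) / P(¬H)]. E is close to guaranteed under H, whether by backwards causation or by exploiting approximate determinism, but very surprising under ¬H, so the likelihood ratio, and with it the update toward H, is large.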

So upon further consideration, I think that one-boxing is probably the way to go regardless, and we must simply accept that once you have actually observed the Predictor, you can no longer rely on CDT in any situation where such an entity might be involved.

The only part of the paradox that still bugs me, then, is the hand-waving that goes into "assume you believe the Predictor's claims". It is actually hard for me to imagine what evidence I could observe that would both clearly distinguish the "the Predictor is honest" hypothesis from the "I'm being cleverly deceived" and "I've gone crazy" hypotheses, and not directly tip the Predictor's hand as to whether human reasoning causality can be violated.

To me, the fact that I have been told to assume that I believe the Predictor seems extremely relevant. If I am actually able to believe that, then it would likely be the single most important fact I had ever observed, and to say that it would cause a significant update to my beliefs about causality would be an understatement. With strong reason to believe that causality could flow backwards, I would likely one-box.

If you tell me that, somehow, I still also believe that causality always flows forward in time, then I must strain to accept the premises - really, nobody has tried to trip the Predictor up by choosing according to a source of quantum randomness? - but in that case I would either take both boxes or choose randomly myself, depending on how certain I felt about causality.

Just to clarify, the soylent we're talking about here is not the original recipe. It is a more frugal version made from soybeans, rice and oil.

The whole point of dimensional analysis as a method of error checking is that fudging the units doesn't work. If you have to use an arbitrary constant with no justification besides "making the units check out", then that is a very bad sign.

If I say "you can measure speed by dividing force by area", and you point out that that gives you a unit of pressure rather than speed, then I can't just accuse you of nitpicking and say "well obviously you have to multiply by a constant of 1 m²s/kg". You wouldn't have to tell me why that operation isn't allowed. I would have to explain why it's justified.
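
Spelling out the check: force over area is (kg·m/s²) / m² = kg/(m·s²), the unit of pressure. Multiplying by 1 m²s/kg does produce m/s, but nothing in the physics singles out that constant; it exists only to make the units come out, which is precisely the bad sign described above.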

Yes, sort of, but a) a linear classifier is not a Turing-complete model of computation, and b) there is a clear resemblance that can be seen by merely glancing at the equations.
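
To make the resemblance explicit (notation mine): a proper linear model computes ŷ = w₁x₁ + … + wₙxₙ + b, while a single artificial neuron computes y = φ(w₁x₁ + … + wₙxₙ + b) for some activation function φ. With φ taken to be the identity, the two expressions are the same.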

It's interesting to me that the proper linear model example is essentially a stripped-down version of a very simple neural network with a linear activation function.
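
Here is a minimal sketch of that equivalence (entirely made-up data and my own variable names, just to illustrate): a least-squares linear fit and a one-layer "network" with an identity activation, trained on squared error, recover the same weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 100 samples, 3 features, generated from a known linear rule.
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -0.2, 0.8]) + 0.1 + rng.normal(scale=0.05, size=100)

# The proper linear model: weights and bias fit by least squares.
Xb = np.hstack([X, np.ones((100, 1))])           # append a bias column
w_linear, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# The same model viewed as a one-layer network with a linear (identity)
# activation, trained by gradient descent on squared error.
w_net = np.zeros(4)
for _ in range(5000):
    pred = Xb @ w_net                            # identity activation
    grad = 2 * Xb.T @ (pred - y) / len(y)        # gradient of mean squared error
    w_net -= 0.1 * grad

print(np.allclose(w_linear, w_net, atol=1e-4))   # True: same weights recovered
```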

I salute your ability to troll all of these groups in a post about what kinds of groups are easy to troll. I almost started arguing some of these points before I saw your game.

Surely you aren't implying that a desire to prolong one's lifespan can only be motivated by fear.

I think it was on This American Life that I heard the guy's story. They even contacted a physicist, who looked at his "theory" and tried to explain to him that the units didn't work out. The guy's response was "OK, but besides that …"

He really seemed to think that this was just a minor nitpick that scientists were using as an excuse to dismiss him.

This raises a good point, but there are circumstances where the "someone would have noticed" argument is useful. Specifically, if the hypothesis is readily testable, if its consequences would be difficult to ignore were it true, and if it is, in fact, regularly tested by many of the same people who have told you that it is false, then "somebody would have noticed" is reasonable evidence.

For example, "there is no God who reliably answers prayers" is a testable hypothesis, but the religious can easily rationalize away the fact that it is true.

On the other hand, I heard a while back about a man who, after trying to teach himself physics, became convinced that "e = mc²" was wrong, and that the correct formula was in fact "e = mc". In this case, physicists who regularly use the real formula would constantly run into problems they could not ignore; if nothing else, they'd always get the wrong units from their calculations. It's unreasonable to think that scientists would have just waved their hands at this, if it were true, and that we'd still have working nuclear reactors.
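
To make the unit problem concrete: mc² has units of kg·m²/s², i.e. joules, while mc has units of kg·m/s, the units of momentum. Any calculation expecting an energy from "e = mc" would come back with momentum units instead.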
