Based on our rational approach we are at a disadvantage for discovering these truths.

Is that a bad thing?

Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.
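To make the arithmetic concrete, here's a toy expected-value calculation; the ticket price, jackpot, and odds are made-up stand-ins, not any real lottery's numbers:

```python
# Made-up numbers for illustration: a $2 ticket, a 1-in-300-million chance
# at a $100M jackpot, ignoring smaller prizes, taxes, and split jackpots.
ticket_price = 2.00
p_jackpot = 1 / 300_000_000
jackpot = 100_000_000

expected_winnings = p_jackpot * jackpot             # about $0.33
expected_value = expected_winnings - ticket_price   # about -$1.67 per ticket

print(f"expected winnings per ticket: ${expected_winnings:.2f}")
print(f"expected value of buying one: ${expected_value:.2f}")
```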

The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you're also not paying the opportunity cost of trying out many unlikely ideas, most of which don't pan out. Overall, you're better off, because you have more time to pursue more promising ways to satisfy your goals.

(And if you're not better off overall, there's a different problem. Are you consistently underestimating how useful unlikely fringe beliefs that take lots of effort to test might be, if they were true? Then yes, that's a problem that can be solved by trying out more fringe beliefs that take lots of effort to test. But it's a separate problem from the problem of "you don't try things that look like they aren't worth the opportunity cost.")

When lacking evidence, the testing process is difficult, weird and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.

And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.

From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but start working my way through a bunch of other false claims instead.

Evidence, in the general sense of "some way of filtering out the false claims", can take on many forms. For example, I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect and has effects you'll find worth the cost of trying it out", I believe them.

Rational assessment can be misleading when dealing with experiential knowledge that is not yet scientifically proven, has no obvious external function but is, nevertheless, experientially accessible.

So, uh, is the typical claim that has an equal lack of scientific evidence true, or false? (Maybe if we condition on how difficult it is to prove.)

If true - then the rational assessment would be to believe such claims, and not wait for them to be scientifically proven.

If false - then the rational assessment would be to disbelieve such claims. But for most such claims, this is the right thing to do! It's true that person A has actually got hold of a true claim that there's no evidence for. But there's many more people making false claims with equal evidence; why should B believe A, and not believe those other people?

(More precisely, we'd want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)
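Something like the following sketch, say, where the probability of the claim being true, the payoff if it is, and the cost of testing it are all placeholder numbers you'd have to estimate yourself:

```python
# All of these numbers are placeholders; the point is only the structure of the comparison.
def worth_testing(p_true: float, benefit_if_true: float, cost_of_testing: float) -> bool:
    """True if the expected benefit of testing the claim exceeds the cost of testing it."""
    return p_true * benefit_if_true > cost_of_testing

# A fringe claim with a 1% chance of being true, a big payoff, and a cheap test:
print(worth_testing(p_true=0.01, benefit_if_true=1000, cost_of_testing=5))    # True
# The same claim when testing it eats months of effort:
print(worth_testing(p_true=0.01, benefit_if_true=1000, cost_of_testing=50))   # False
```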

I think that in the interests of being fair to the creators of the video, you should link to http://www.nottingham.ac.uk/~ppzap4/response.html, the explanation written by (at least one of) the creators of the video, which addresses some of the complaints.

In particular, let me quote the final paragraph:

There is an enduring debate about how far we should deviate from the rigorous academic approach in order to engage the wider public. From what I can tell, our video has engaged huge numbers of people, with and without mathematical backgrounds, and got them debating divergent sums in internet forums and in the office. That cannot be a bad thing and I'm sure the simplicity of the presentation contributed enormously to that. In fact, if I may return to the original question, "what do we get if we sum the natural numbers?", I think another answer might be the following: we get people talking about Mathematics.

In light of this paragraph, I think a cynical answer to the litmus test is this. Faced with such a ridiculous claim, it's wrong to engage with it only on the subject level, where your options are "Yes, I will accept this mathematical fact, even though I don't understand it" or "No, I will not accept this fact, because it flies in the face of everything I know." Instead, you have to at least consider the goals of the person making the claim. Why are they saying something that seems obviously false? What reaction are they hoping to get?

No, I think I meant what I said. I think that this song lyric can in fact only make a difference given a large pre-existing weight, and I think the distribution of being weirded out by Solstices is bimodal: there aren't people who are moderately weirded out, but not enough to leave.

It's extremely unlikely that there are people who aren't weirded out by Solstices in general, but for whom one song lyric is the straw that breaks the camel's back.

Not quite. I outlined the things that have to be going on for me to be making a decision.

In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.

If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's problem, and so on; if, despite all that, the box I choose has been determined since my birth, then none of these things (none of the things that make up me!) are a factor at all. Either my reasoning process is overridden in one specific case, or it is irreparably flawed to begin with.

Let's assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12/20, and if - is correct, then Pr[A returns +] = 8/20).

The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.

If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds are 10:19. This means that Pr[disease|+++] = 10/29, just over 1/3.
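For concreteness, here's the same calculation carried out numerically, assuming exactly the likelihoods and prior given above:

```python
from math import prod

# Likelihood ratios Pr[+ | disease] : Pr[+ | healthy] for the three tests:
# 3:2 for A, 4:1 for B, 5:3 for C.
likelihood_ratios = [3/2, 4/1, 5/3]

prior_odds = 1 / 19                                     # Pr[disease] = 1/20, i.e. odds of 1:19
posterior_odds = prior_odds * prod(likelihood_ratios)   # (1/19) * 10 = 10/19
posterior_prob = posterior_odds / (1 + posterior_odds)  # 10/29

print(f"posterior odds: {posterior_odds:.3f}")   # 0.526, i.e. 10:19
print(f"posterior prob: {posterior_prob:.3f}")   # 0.345, just over 1/3
```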

You may have obtained 1/2 by a double confusion between odds and probabilities. If your prior had been Pr[disease]=1/21, then we'd have prior odds of 1:20 and posterior odds of 1:2 (which is a probability of 1/3, not of 1/2).

If you're looking for high-risk activities that pay well, why are you limiting yourself to legal options?
