
Comment author: Erfeyah 25 January 2017 06:34:12PM 0 points

And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.

From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but start working my way through a bunch of other false claims instead.

Exactly, that is why I am pointing towards the problem. Based on our rational approach, we are at a disadvantage for discovering these truths. I want to use this post as a reference for the issue, since it can become important in other subjects.

I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect and has effects you'll find worth the cost of trying it out", I believe them.

Yes, that is the other way in: trust and respect. Unfortunately, I feel we tend to surround ourselves with people who are similar to us, and thus select our acquaintances in the same way we select ideas to focus on. In my experience (which is not necessarily representative), people tend to just blank out unfamiliar information or consider it a bit of an eccentricity. In addition, as stated, if a subject requires substantial effort before you can confirm its validity, it becomes exponentially harder to communicate even in these circumstances.

Comment author: Kindly 25 January 2017 10:02:06PM 1 point

Based on our rational approach, we are at a disadvantage for discovering these truths.

Is that a bad thing?

Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.
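(As a toy illustration of the expected-value point, with made-up numbers since the comment gives none:)

```python
# Hypothetical lottery numbers, purely for illustration.
ticket_price = 2.00             # dollars
jackpot = 100_000_000.00        # dollars
p_win = 1 / 300_000_000         # chance that a single ticket wins

# Expected value of a ticket: average winnings minus its cost.
expected_net = p_win * jackpot - ticket_price
print(f"Expected net per ticket: {expected_net:.2f} dollars")  # about -1.67
```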

The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you're also not paying the opportunity cost of trying out many unlikely ideas, most of which don't pan out. Overall, you're better off, because you have more time to pursue more promising ways to satisfy your goals.

(And if you're not better off overall, there's a different problem. Are you consistently underestimating how useful unlikely fringe beliefs that take lots of effort to test might be, if they were true? Then yes, that's a problem that can be solved by trying out more fringe beliefs that take lots of effort to test. But it's a separate problem from the problem of "you don't try things that look like they aren't worth the opportunity cost.")

Comment author: Erfeyah 25 January 2017 11:37:56AM * 0 points

So, uh, is the typical claim that has an equal lack of scientific evidence true, or false?

[5.1] As ProofOfLogic indicates with his example of shamanistic scammers, the space of claims about subjective experiences is saturated with demonstrably false claims.

[5.2] This actually causes us to adjust and adopt a rule of ignoring all strange-sounding claims that require subjective evidence (except when they are trivial to test).

You are right that, if the claim is true, an idealised rational assessment would be to believe it. But how do you make a rational assessment when you lack evidence?

(More precisely, we'd want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)

When evidence is lacking, the testing process is difficult, weird, and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.

Comment author: Kindly 25 January 2017 05:40:17PM 1 point

When evidence is lacking, the testing process is difficult, weird, and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.

And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.

From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but start working my way through a bunch of other false claims instead.

Evidence, in the general sense of "some way of filtering out the false claims", can take on many forms. For example, I can choose to try out lucid dreaming, not because I've found scientific evidence that it works, but because it's presented to me by someone from a community with a good track record of finding weird things that work. Or maybe the person explaining lucid dreaming to me is scrupulously honest and knows me very well, so that when they tell me "this is a real effect and has effects you'll find worth the cost of trying it out", I believe them.

Comment author: Kindly 25 January 2017 05:35:48AM * 2 points

Rational assessment can be misleading when dealing with experiential knowledge that is not yet scientifically proven, has no obvious external function, but is, nevertheless, experientially accessible.

So, uh, is the typical claim that has an equal lack of scientific evidence true, or false? (Maybe after we condition on how difficult it is to prove.)

If true - then the rational assessment would be to believe such claims, and not wait for them to be scientifically proven.

If false - then the rational assessment would be to disbelieve such claims. But for most such claims, this is the right thing to do! It's true that person A has actually got hold of a true claim that there's no evidence for. But there are many more people making false claims with equal evidence; why should B believe A, and not believe those other people?

(More precisely, we'd want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)

Comment author: Kindly 23 January 2017 04:31:43AM 1 point

I think that in the interests of being fair to the creators of the video, you should link to http://www.nottingham.ac.uk/~ppzap4/response.html, the explanation written by (at least one of) the creators of the video, which addresses some of the complaints.

In particular, let me quote the final paragraph:

There is an enduring debate about how far we should deviate from the rigorous academic approach in order to engage the wider public. From what I can tell, our video has engaged huge numbers of people, with and without mathematical backgrounds, and got them debating divergent sums in internet forums and in the office. That cannot be a bad thing and I'm sure the simplicity of the presentation contributed enormously to that. In fact, if I may return to the original question, "what do we get if we sum the natural numbers?", I think another answer might be the following: we get people talking about Mathematics.

In light of this paragraph, I think a cynical answer to the litmus test is this. Faced with such a ridiculous claim, you shouldn't engage with it only on the subject level, where your options are "Yes, I will accept this mathematical fact, even though I don't understand it" or "No, I will not accept this fact, because it flies in the face of everything I know." Instead, you have to at least consider the goals of the person making the claim. Why are they saying something that seems obviously false? What reaction are they hoping to get?

Comment author: itaibn0 04 January 2017 01:09:14AM 1 point

"Straw that breaks the camel's back" implies the existence of a large pre-existing weight, so your claim is a tautology.

Comment author: Kindly 07 January 2017 04:49:53PM 1 point

No, I think I meant what I said. I think that this song lyric can in fact only make a difference given a large pre-existing weight, and I think the distribution of being weirded out by Solstices is bimodal: there aren't people who are moderately weirded out, but not quite enough to leave.

Comment author: Kindly 22 December 2016 02:33:57AM 7 points

It seems extremely unlikely that there are people who aren't weirded out by Solstices in general, but for whom one song lyric is the straw that breaks the camel's back.

Comment author: Unknowns 30 June 2015 03:47:36AM * 1 point

This is like saying "if my brain determines my decision, then I am not making the decision at all."

Comment author: Kindly 30 June 2015 05:08:29AM 0 points

Not quite. I outlined the things that have to be going on for me to be making a decision.

Comment author: Kindly 29 June 2015 07:21:54PM 0 points

In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.

If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's problem, and so on; if, despite all that, the box I choose has been determined since my birth, then none of these things (none of the things that make up me!) are a factor at all. Either my reasoning process is overridden in one specific case, or it is irreparably flawed to begin with.

Comment author: Bound_up 12 June 2015 06:03:06PM * 1 point

Did I use Bayes' formula correctly here?

Prior: 1/20

12/20 chance that test A returns correctly (it returned +)

16/20 chance that test B returns correctly (it returned +)

12.5/20 chance that test C returns correctly (it returned +)

Odds of a correct diagnosis?

I got 1/2

Comment author: Kindly 12 June 2015 11:53:52PM * 3 points

Let's assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12/20, and if - is correct, then Pr[A returns +] = 8/20).

The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.

If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds are 10:19. This means that Pr[disease|+++] = 10/29, just over 1/3.

You may have obtained 1/2 by a double confusion between odds and probabilities. If your prior had been Pr[disease]=1/21, then we'd have prior odds of 1:20 and posterior odds of 1:2 (which is a probability of 1/3, not of 1/2).
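For anyone who wants to check the arithmetic, here is a minimal Python sketch of the odds calculation above, using exact fractions and the same symmetry and independence assumptions (the numbers come from the three tests quoted in the question):

```python
from fractions import Fraction

# Pr[test returns + | disease] for tests A, B, C; by the symmetry
# assumption, Pr[test returns + | healthy] = 1 - Pr[+ | disease].
p_correct = [Fraction(12, 20), Fraction(16, 20), Fraction(25, 40)]  # 12.5/20 = 25/40

# Prior odds of disease : healthy, from Pr[disease] = 1/20.
odds = Fraction(1, 20) / (1 - Fraction(1, 20))  # 1:19

# Multiply by each test's likelihood ratio Pr[+|disease] / Pr[+|healthy].
for p in p_correct:
    odds *= p / (1 - p)  # 3:2, then 4:1, then 5:3

print(odds)               # 10/19, the posterior odds
print(odds / (1 + odds))  # 10/29, i.e. Pr[disease|+++], just over 1/3
```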

Comment author: Ishaan 20 May 2015 03:51:38AM * 3 points

Realistically I'd probably wrap up my affairs and prepare my loved ones, but broadly I think the comparative advantage is in performing high-risk services. The first thought that came to mind is volunteering for useful dangerous experiments that need live human subjects, but there's probably a lot of bureaucratic barriers there.

I wonder if there are any legally feasible, high-risk, and helpful services that also pay really well...

Comment author: Kindly 20 May 2015 04:11:47AM 2 points

If you're looking for high-risk activities that pay well, why are you limiting yourself to legal options?
