Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: byrnema 08 May 2015 12:50:21PM *  1 point [-]

what observation can distinguish those which actually are loving?

I think, evidence that the universe was designed with some degree of attention to our well-being. If the universe is unexpectedly kind to us, or if we are especially well taken care of, that would be evidence of a loving God.

I'm conflicted about which universe we're in. Things could certainly be worse, but they're not very good either. Is life more tolerable to us than we'd expect by random chance?

But for sure, just look at the outcome. Considering intention only muddles things, for three reasons:

(1) it is the outcome that we're concerned with, "pretending" versus "sincere" has no meaning if there's no distinguishing effect on observation

(2) asking about pretending is really asking about whether the evidence could be 'tricking' us; it is always a possibility that the evidence leads us to the wrong conclusion with some probability, or that induction over time doesn't apply

(3) even if the creator is non-sentient, we can still ask if the universe is 'us-loving' or not

Comment author: NancyLebovitz 29 August 2014 06:21:54PM 3 points [-]

The link is making a different argument-- it says the problem isn't with the journalists or with their bosses, it's that the public isn't paying attention to the stories journalists are risking their necks to get.

Comment author: byrnema 30 August 2014 04:00:03PM 1 point [-]

True. I linked the article as an example of the idealistic journalist, one that is disappointed that his motives are distrusted by the public.

Comment author: Lumifer 26 August 2014 05:32:48PM 11 points [-]

My first suggestion would be to look at the incentives of people who write for the media. Their motivations are NOT to "get the best message out". That's not what they're paid for. Nowadays their principal goal is to attract eyeballs and hopefully monetize them by shoving ads into your face. The critical thing to recognize is that their goals and criteria of what constitutes a successful piece do not match your goals and your criteria of what constitutes a successful piece.

The second suggestion would be to consider that writers write for a particular audience and, I think, most of the time you will not be a member of that particular audience. Mass media doesn't write for people like you.

Comment author: byrnema 29 August 2014 04:21:43PM 1 point [-]

Your comment is well-received. I'm continuing to think about it and what this means for finding reliable media sources.

My impression of journalists has always been that they would need to be fairly idealistic about information, and about communicating that information, to be attracted to their profession. I also imagine that their goals are constantly antagonized by the goals of their bosses, who do want to make money, and it is probably the case that the most successful either sell out or find a trade-off that is not entirely ideal for them or for the critical reader.

I'll link this article by Michael Volkmann, a disillusioned journalist.

Comment author: byrnema 26 August 2014 04:00:44PM *  6 points [-]

I might need some recalibration, but I'm not sure.

I research topics of interest in the media, and I feel frustrated, angry and annoyed about the half-truths and misleading statements that I encounter frequently. The problem is not the feelings, but whether I am 'wrong'. I figure there are two ways that I might be wrong:

(i) Maybe I'm wrong about these half-truths and misleading statements not being necessary. Maybe authors have already considered telling the facts straight and that didn't get the best message out.

(ii) Maybe I'm actually wrong about whether these are half-truths or really all that misleading. Maybe I am focused on questions of fact and the meanings of particular phrases that are overly subtle.

The reason why I think I might need recalibration is that I don't consider it likely that I am much less pragmatic, smarter or more accurate than all these writers I am critical of (some of them, inevitably, but not all of them -- also, these issues are not that difficult intellectually).

Here are some concrete examples, all regarding my latest interest in the Ebola outbreak:

  • Harvard poll: Most recently, the HSPH-SSRS poll with headlines, "Poll finds US lack knowledge about ebola" or, "Many Americans harbor unfounded fears about Ebola". But when you look at the poll questions, they ask whether Americans are "concerned" about the risk, not what they believe the risk to be, and whether they think Ebola is spread 'easily'. The poll didn't appear to be about Americans' knowledge of Ebola, but about how they felt about the knowledge they had. The question about whether Ebola transmits easily especially irks me, since everyone knows (don't they?) that whether something is 'easy' is subjective.

  • "Bush meat": I've seen it said in many places that people need to stop consuming bush meat in outbreak areas (for example). I don't know that much about how Ebola is spreading through this route, but wouldn't it be the job of the media and epidemiologists to report on the rate of transmission from eating bats (I think there has been only one ground-zero patient in West Africa who potentially contracted Ebola from a bat) and weigh this against the role of local meat as an important food source (again, I don't know -- media to blame)? Just telling people to stop eating it would be ridiculous; hopefully the advice is not so extreme. Also, what about cooking rather than drying local meat sources? This seems a very good example of the media being unable to nuance a message in a reasonable way, but I allow I could be wrong.

  • Media reports that "Ebola Continues to Spread in Nigeria" when the new Ebola cases at that time were all due to contact with the same person and were already in quarantine. This seemed to hype up the outbreak when in fact the Nigerians were successfully containing it. Perhaps this is an example of being too particular and over-analyzing something subtle?

  • Ever using the phrase 'in the air' to describe how Ebola does or doesn't transmit, since this is a phrase that can mean completely different things to different people using or hearing it. Ebola is not airborne but can transmit within coughing distance.

  • The apparent internal inconsistency of saying that a case of Ebola might come to the US, but that an outbreak cannot happen here. Some relative risk numbers would be helpful here.

All of these examples upset me to various degrees, since they feel like evidence that people -- even writers and the scientists they are quoting -- are unable to think critically and message coherently about issues. How should I update my view so that I am less surprised, less argumentative, and less of a crazy-pedantic-fringe person?

Comment author: palladias 05 August 2014 03:35:49PM 2 points [-]

TL;DR: Ebola is very hard to transmit person to person. Don't think flu, think STDs.

Ebola isn't airborne, so breathing the same air as, or being on the same plane as, an Ebola case will not give you Ebola. It doesn't spread quite like STDs do, but it does require getting an infected person's bodily fluids (urine, semen, blood, and vomit) mixed up in your bodily fluids or in contact with a mucous membrane.

So, don't sex up your recently returned Peace Corps friend who's been feeling a little fluish, and you should be a-ok.

Comment author: byrnema 15 August 2014 04:07:46PM 2 points [-]

A person infected with Ebola is very contagious during the period they are showing symptoms. The CDC recommends contact and droplet precautions.

Note the following description of (casual) contact:

Casual contact is defined as a) being within approximately 3 feet (1 meter) or within the room or care area for a prolonged period of time (e.g., healthcare personnel, household members) while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations); or b) having direct brief contact (e.g., shaking hands) with an EVD case while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations). At this time, brief interactions, such as walking by a person or moving through a hospital, do not constitute casual contact.

(Much more contagious than an STD.)

But Lumifer is also correct. People without symptoms are not contagious, and people with symptoms are conspicuous (e.g., Patrick Sawyer was very conspicuous when he infected staff and healthcare workers in Nigeria) and unlikely to be ambulatory. The probability of a given person in West Africa being infected is very small (2000 cases divided by approximately 20 million people in Guinea, Sierra Leone and Liberia) and the probability of a given person outside this area being infected is truly negligible. If we cannot contain the virus in the area, there will be a lot of time between the observation of a burning 'ember' (or 10 or 20) and any change in these probabilities -- plenty of time to handle and douse any further hotspots that form.

The worst case scenario in my mind is that it continues unchecked in West Africa or takes hold in more underdeveloped countries. This scenario would mean more unacceptable suffering and would also mean the outbreak gets harder and harder to squash and contain, increasing the risk to all countries.

We need to douse it while it is relatively small -- I feel so frustrated when I hear there are hospitals in these regions without supplies such as protective gear. What is the problem? Rich countries should be dropping supplies already.

Comment author: Khoth 05 August 2014 10:05:28AM 4 points [-]

Comment author: byrnema 10 August 2014 03:51:58PM *  0 points [-]

Sorry, realized I don't feel comfortable commenting on such a high-profile topic. Will wait a few minutes and then delete this comment (just to make sure there are no replies.)

Comment author: philh 28 July 2014 12:25:20PM *  1 point [-]

We'd still have the effect even if there were plenty of points at all values.

Are you talking about relative sample sizes, or absolute? The effect requires that as you go from +4sd to +3sd to +2sd, your population increases sufficiently fast. As long as that holds, it doesn't go away if the total population grows. (But that's because if you get lots of points at +4sd, then you have a smaller number at +5sd. So you don't have "plenty of points at all values".)

If you have equal numbers at +4 and +3 and +2, then most of the +4 still may not be the best, but the best is likely to be +4.

(Warning: I did not actually do the math.)
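The claim can be checked numerically. A minimal Python sketch, where the equal group sizes and the toy model performance = height (in sd units) + an independent standard-normal agility term are my illustrative assumptions, not philh's:

```python
# With equal numbers at +2, +3 and +4 sd of height, check how often the
# single best performer comes from the +4 group.
import random

def best_group(n_per_group=1000, rng=random):
    """Return the height group (2, 3 or 4) containing the top performer."""
    best_perf, best_h = float("-inf"), None
    for h in (2, 3, 4):
        for _ in range(n_per_group):
            perf = h + rng.gauss(0, 1)  # height advantage + independent agility
            if perf > best_perf:
                best_perf, best_h = perf, h
    return best_h

random.seed(0)
trials = 200
wins = sum(best_group() == 4 for _ in range(trials))
print(wins / trials)  # a large majority of trials: the top performer is at +4
```

Under these assumptions the best performer comes from the +4 group in the large majority of runs, even though any individual +4 member is still usually beaten by someone.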

Comment author: byrnema 30 July 2014 03:55:06PM 1 point [-]

I don't believe we disagree on anything. For example, I agree with this:

If you have equal numbers at +4 and +3 and +2, then most of the +4 still may not be the best, but the best is likely to be +4.

Are you talking about relative sample sizes, or absolute?

By 'plenty of points'... I was imagining that we are taking a finite sample from a theoretically infinite population. A person decides on a density that represents 'plenty of points' and then keeps adding to the sample until they have that density up to a certain specified sd.

Comment author: byrnema 27 July 2014 09:42:41AM *  2 points [-]

Interesting post. Well thought out, with an original angle.

In the direction of constructive feedback, consider that the concept of sample size -- while it seems to help with the heuristic explanation -- likely just muddies the water. (We'd still have the effect even if there were plenty of points at all values.)

For example, suppose there were so many people with extreme height that some of them also had extreme agility (with infinite sample size, we'd even reliably have that the best players were also the tallest). So: some of the tallest people are also the best basketball players. However, as you argued, most of the tallest won't also be the most agile, so most of the tallest are not the best (contrary to what would be predicted by their height alone).

In contrast, if average height correlates with average basketball ability, the other necessary condition for a basketball player with average height to have average ability is to have average agility -- but this is easy to satisfy. So most people with average height fit the prediction of average ability.

Likewise, the shortest people aren't likely to have the lowest agility, so the correlation prediction fails at that tail too.

Some of the 'math' is that it is easy to be average in all variables (say, (.65)^n, where n is the number of variables), but the probability of being several standard deviations out in all variables is low (say, (.05)^n to be in the top 5 percent of each). Other math could be used to find the theoretical shape implied by these assumptions (e.g., is it an ellipse?).
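The effect is easy to see in a toy simulation. Assuming (my choice, for illustration) that ability = height + agility with height and agility independent standard normals, height genuinely correlates with ability, yet the tallest person in a sample is rarely the best player:

```python
# Toy simulation: ability = height + agility, both independent standard
# normals.  Correlation holds near the mean, but the tails come apart.
import random

def tallest_is_best(n=2000, rng=random):
    """Sample n players; is the tallest also the highest-ability player?"""
    players = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    tallest = max(players, key=lambda p: p[0])
    best = max(players, key=lambda p: p[0] + p[1])  # ability = height + agility
    return tallest is best

random.seed(1)
trials = 200
rate = sum(tallest_is_best() for _ in range(trials)) / trials
print(rate)  # a small fraction: the tallest player is usually not the best
```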

Comment author: byrnema 13 July 2014 03:04:11AM *  1 point [-]

(I realize I'm confused about something and am thinking it through for a moment.)

Comment author: byrnema 14 July 2014 06:54:39PM 2 points [-]

I see. I was confused for a while, but in the hypothetical examples I was considering, a link between MMR and autism might be missed (a false negative, with 5% probability) but isn't going to be found unless it was there (a low false positive rate). Then Vaniver explains, above, that the canonical null-hypothesis framework assumes that random chance will make it look like there is an effect with some probability -- so it is the false positive rate you can tune with your sample size.

I marginally understand this. For example, I can't really zoom out and see why you can't define your test so that the false positive rate is low instead. That's OK. I do understand your example and see that it is relevant for the null-hypothesis framework. (My background in statistics is not strong and I do not have much time to dedicate to this right now.)

Comment author: Douglas_Knight 13 July 2014 01:55:49AM 5 points [-]

There is an asymmetry that makes it implausible that the null hypothesis would be that there is an effect. The null hypothesis has to be a definite value. The null hypothesis can be zero, which is what we think it is, or it could be some specific value, like a 10% increase in autism. But the null hypothesis cannot be "there is some effect of unspecified magnitude." There is no data that can disprove that hypothesis, because it includes effects arbitrarily close to zero. But that can be the positive hypothesis, because it is possible to disprove the complementary null hypothesis, namely zero.

Another, more symmetric way of phrasing it is that we do the study and compute a confidence interval, such that we are 95% confident that the effect size is in that interval. That step does not depend on the choice of hypothesis. But what do we do with this interval? We reject every hypothesis not in the interval. If zero is not in the interval, we reject it. If a 10% increase is not in the interval, we can reject that. But we cannot reject all nonzero effect sizes at once.
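The interval logic can be sketched in a few lines of Python. The sample size, the zero true effect, and the normal approximation for the 95% interval are illustrative choices, not anything from the comment above:

```python
# Simulate one study with a true effect of zero, form a ~95% confidence
# interval for the mean effect, and test point hypotheses against it.
import math
import random

def study_interval(n=2000, true_effect=0.0, rng=random):
    """Return a ~95% confidence interval for the mean effect."""
    data = [true_effect + rng.gauss(0, 1) for _ in range(n)]
    mean = sum(data) / n
    se = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1)) / math.sqrt(n)
    return mean - 1.96 * se, mean + 1.96 * se

random.seed(2)
lo, hi = study_interval()
print(round(lo, 3), round(hi, 3))  # an interval near zero, width ~0.09
# Reject any point hypothesis that falls outside the interval:
print(lo <= 0.0 <= hi)         # zero survives in ~95% of runs
print(not (lo <= 0.10 <= hi))  # a 10% effect is rejected in almost every run
# But "some nonzero effect" can never be rejected: effects arbitrarily close
# to zero sit inside the interval whenever zero does.
```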
