
what observation can distinguish those which actually are loving?

I think the distinguishing observation would be evidence that the universe was designed with some degree of attention to our well-being. If the universe were unexpectedly kind to us, or if we were especially well taken care of, that would be evidence of a loving God.

I'm conflicted about which universe we're in. Things could certainly be worse, but it's also not very good. Is life more tolerable to us than we'd expect by random chance?

But for sure, just look at the outcome. Considering intention only muddles things, for three reasons:

(1) it is the outcome that we're concerned with; "pretending" versus "sincere" has no meaning if there is no distinguishing effect on observation

(2) asking about pretending is really asking about whether the evidence could be 'tricking' us; it is always possible that the evidence leads us to the wrong conclusion with some probability, or that induction over time doesn't apply

(3) even if the creator is non-sentient, we can still ask whether the universe is 'us-loving' or not

True. I linked the article as an example of the idealistic journalist, one who is disappointed that his motives are distrusted by the public.

Your comment is well-received. I'm continuing to think about it and about what it means for finding reliable media sources.

My impression of journalists has always been that those attracted to the profession would be fairly idealistic about information and about communicating it. I also imagine that their goals are constantly antagonized by the goals of their bosses, who do want to make money, and it is probably the case that the most successful either sell out or find a trade-off that is not entirely ideal for them or for the critical reader.

I'll link this article by Michael Volkmann, a disillusioned journalist.

I might need some recalibration, but I'm not sure.

I research topics of interest in the media, and I feel frustrated, angry and annoyed about the half-truths and misleading statements that I encounter frequently. The problem is not the feelings, but whether I am 'wrong'. I figure there are two ways that I might be wrong:

(i) Maybe I'm wrong that these half-truths and misleading statements are unnecessary. Maybe the authors already considered telling the facts straight, and that didn't get the best message out.

(ii) Maybe I'm actually wrong about whether these are half-truths, or whether they are really all that misleading. Maybe I am focused on questions of fact, and on the meanings of particular phrases, that are overly subtle.

The reason I think I might need recalibration is that I don't consider it likely that I am much less pragmatic, smarter, or more accurate than all of these writers I am critical of (than some of them, inevitably, but not all of them; also, these issues are not that difficult intellectually).

Here are some concrete examples, all regarding my latest interest in the Ebola outbreak:

  • Harvard poll: Most recently, the HSPH-SSRS poll ran with headlines like "Poll finds US lacks knowledge about Ebola" or "Many Americans harbor unfounded fears about Ebola". But when you look at the poll questions, they ask whether Americans are "concerned" about the risk, not what they believe the risk to be, and whether they think Ebola is spread 'easily'. The poll didn't appear to be about Americans' knowledge of Ebola, but about how they felt about the knowledge they had. The question about whether Ebola transmits easily especially irks me, since everyone knows (don't they?) that whether something is 'easy' is subjective.

  • "Bush meat": I've seen it said in many places that people need to stop consuming bush meat in outbreak areas (for example). I don't know that much about how Ebola is spreading through this route, but wouldn't it be the job of the media and epidemiologists to report on the rate of transmission from eating bats (I think there has been only one ground-zero patient in West Africa who potentially contracted Ebola from a bat) and weigh this against the role of local meat as an important food source (again, I don't know; the media is to blame)? Just telling people to stop eating it would be ridiculous; hopefully the situation is not so extreme. Also, what about cooking rather than drying local meat sources? This seems a very good example of the media being unable to convey a nuanced message in a reasonable way, but I allow that I could be wrong.

  • Media reports that "Ebola Continues to Spread in Nigeria" when the new Ebola cases at that time were all due to contact with the same person and were already in quarantine. This seemed to hype up the outbreak when in fact the Nigerians were successfully containing it. Perhaps this is an example of being too particular and over-analyzing something subtle?

  • Ever using the phrase 'in the air' to describe how Ebola does or doesn't transmit, because this phrase can mean completely different things to different people using or hearing it. Ebola is not airborne, but it can transmit within coughing distance.

  • The apparent internal inconsistency of saying that a case of Ebola might come to the US, but that an outbreak cannot happen here. Some relative risk numbers would be helpful.

All of these examples upset me to various degrees, since I feel they are evidence that people -- even writers and the scientists they quote -- are unable to think critically and message coherently about issues. How should I update my view so that I am less surprised, less argumentative, and less of a crazy-pedantic-fringe person?

A person infected with Ebola is very contagious during the period when they are showing symptoms. The CDC recommends contact and droplet precautions.

Note the following description of (casual) contact:

Casual contact is defined as a) being within approximately 3 feet (1 meter) or within the room or care area for a prolonged period of time (e.g., healthcare personnel, household members) while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations); or b) having direct brief contact (e.g., shaking hands) with an EVD case while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations). At this time, brief interactions, such as walking by a person or moving through a hospital, do not constitute casual contact.

(Much more contagious than an STD.)

But Lumifer is also correct. People without symptoms are not contagious, and people with symptoms are conspicuous (e.g., Patrick Sawyer was very conspicuous when he infected staff and healthcare workers in Nigeria) and unlikely to be ambulatory. The probability of a given person in West Africa being infected is very small (2000 cases divided by approximately 20 million people in Guinea, Sierra Leone and Liberia -- about 1 in 10,000), and the probability of a given person outside this area being infected is truly negligible. If we cannot contain the virus in the area, there will be a lot of time between the observation of a burning 'ember' (or 10 or 20) and any change in these probabilities -- plenty of time to handle and douse any further hotspots that form.

The worst case scenario in my mind is that it continues unchecked in West Africa or takes hold in more underdeveloped countries. This scenario would mean more unacceptable suffering and would also mean the outbreak gets harder and harder to squash and contain, increasing the risk to all countries.

We need to douse it while it is relatively small -- I feel so frustrated when I hear there are hospitals in these regions without supplies such as protective gear. What is the problem? Rich countries should be dropping supplies already.

Sorry, realized I don't feel comfortable commenting on such a high-profile topic. Will wait a few minutes and then delete this comment (just to make sure there are no replies.)

[This comment is no longer endorsed by its author]

I don't believe we disagree on anything. For example, I agree with this:

If you have equal numbers at +4 and +3 and +2, then most of the +4 still may not be the best, but the best is likely to be +4.

Are you talking about relative sample sizes, or absolute?

By 'plenty of points'... I was imagining that we are taking a finite sample from a theoretically infinite population. A person decides on a density that represents 'plenty of points' and then keeps adding to the sample until they have that density out to a certain specified number of standard deviations.

Interesting post. Well thought out, with an original angle.

In the direction of constructive feedback, consider that the concept of sample size -- while it seems to help with the heuristic explanation -- likely just muddies the water. (We'd still have the effect even if there were plenty of points at all values.)

For example, suppose there were so many people with extreme height that some of them also had extreme agility (with infinite sample size, we'd even reliably have that the best players were also the tallest). So: some of the tallest people are also the best basketball players. However, as you argued, most of the tallest won't also be the most agile, so most of the tallest are not the best (contrary to what would be predicted by their height alone).

In contrast, consider the prediction that average height goes with average basketball ability: the other necessary condition for a player of average height to have average ability is average agility -- but this is easy to satisfy. So most people with average height fit the prediction of average ability.

Likewise, the shortest people aren't likely to have the lowest agility, so the correlation prediction fails at that tail too.

Some of the 'math' is that it is easy to be average in all variables (say, (.65)^n, where n is the number of variables), but it is hard to be standard deviations extreme in all variables (say, (.05)^n to be in the top 5 percent of each). Other math can be used to find the theoretical shape implied by these assumptions (e.g., is it an ellipse?).
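The (.05)^n intuition is easy to check numerically. Here is a minimal sketch (my own illustration, not from the original post), assuming two independent standard-normal traits, 'height' and 'agility', with 'skill' defined as their sum:

```python
import random

random.seed(0)
n = 200_000
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

# Fraction of people in the top 5% of BOTH traits: for independent
# traits this should be near 0.05 ** 2 = 0.0025.
heights = sorted(h for h, a in people)
agilities = sorted(a for h, a in people)
h_cut, a_cut = heights[int(0.95 * n)], agilities[int(0.95 * n)]
frac_both = sum(h > h_cut and a > a_cut for h, a in people) / n

# If skill = height + agility, how many of the 500 tallest people
# are also among the 500 most skilled? Some, but far from most.
tallest = sorted(range(n), key=lambda i: people[i][0])[-500:]
best = set(sorted(range(n), key=lambda i: people[i][0] + people[i][1])[-500:])
overlap = sum(i in best for i in tallest) / 500
```

Under these assumptions, the joint top-5% fraction comes out near 0.0025, and only a minority of the tallest are also the best -- some of the tallest are the best, but most are not, matching the argument above.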

I see. I was confused for a while, but in the hypothetical examples I was considering, a link between MMR and autism might be missed (a false negative, with 5% probability) but isn't going to be found unless it is there (a low false positive rate). Then Vaniver explains, above, that the canonical null-hypothesis framework assumes that random chance will make it look like there is an effect with some probability -- so it is the false positive rate you can tune with your sample size.

I only marginally understand this. For example, I can't really zoom out and see why you can't define your test so that the false positive rate is low instead. That's OK. I do understand your example and see that it is relevant for the null-hypothesis framework. (My background in statistics is not strong, and I do not have much time to dedicate to this right now.)
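The point that random chance alone produces apparent effects at a fixed rate can be seen concretely by simulating a null effect and counting rejections. This sketch is my own (not from the thread): it repeatedly z-tests a fair coin for bias, so every "detection" is a false positive, and the rejection rate lands near the 5% significance level implied by the 1.96 threshold:

```python
import math
import random

random.seed(1)

def fair_coin_test(flips=1000, z_threshold=1.96):
    """Return True if a two-sided z-test 'detects' bias in a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    z = (heads - flips / 2) / math.sqrt(flips * 0.25)
    return abs(z) > z_threshold

# The coin is fair, so every rejection is a false positive.
trials = 2000
false_positives = sum(fair_coin_test() for _ in range(trials)) / trials
# false_positives comes out near 0.05: the false positive rate is set
# by the 1.96 threshold (the significance level), not by the number
# of flips per test.
```

This also illustrates the answer to "why not define the test so the false positive rate is low": you can, by raising the threshold, but then real effects of a given size become harder to detect.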

(I realize I'm confused about something and am thinking it through for a moment.)
