jacob_cannell comments on Open Thread, September, 2010-- part 2 - Less Wrong
I will indulge your curiosity in a moment, but I'm curious why you use the politically loaded term "denialism".
As far as I can tell, its sole purpose is to derail rational discussion by associating one's opponent with a morally perverse stance - specifically invoking the association with Holocaust denialism. Politics is the mind-killer. On that note, I have just been spectating your thread with Vladimir M, and I concur wholeheartedly with his well-written related post.
There is no rational use for that appellation, so please desist from that entire mode of thought.
Firstly, I don't think nearly as many quality researchers believe HIV == AIDS as you claim, at least not internally. The theory has gone well past the level of political mindkill and into the territory of an institutionalized religion, where skeptics and detractors are publicly shamed as evil people. I hope we can avoid that here. Actually, I think most intelligent researchers, if they could afford to be honest, would admit that HIV is the major indirect causative factor, but this is not the same as saying HIV == AIDS. Likewise, I think most would admit that HIV is not really an STD at all.
Finally, even though I just said what I think is "wrong [with] the cognitive apparatus of all those medical and research professionals" - namely that it is more an issue of politically charged public positions - I should also point out that even by your implied criterion, which seems to consist of counting up scientists for or against (a criterion I do not favor as rational regardless), it is far from clear that HIV == AIDS can be supported. There are a large number of skeptics on public record against that hypothesis, even considering the huge social stigma associated with adopting such a position in public. The HIV == AIDS hypothesis has far more skeptics on record than String Theory, for comparison.
But regardless, counting scientists is not a good rational criterion.
If you want to get into a discussion about rationality and reasoning in highly politicized issues such as this, that is an interesting side topic. But otherwise don't stoop to the moral high ground of politicized orthodoxy - just provide your hypothesis.
This is Less Wrong, not Mere Mainstream.
Actually, though you may not believe me, Holocaust denialism hadn't even occurred to me. In the portion of the blogosphere that I follow, it applies most frequently to AGW denialism, with the AIDS denialists second, evolution denialists third, and the anti-vaccination crowd getting an honorable mention.
The Wikipedia article on HIV that you reference has a section entitled "AIDS Denialism".
But now that you mention it, why do you consider Holocaust denialism morally perverse? I thought that questioning PC conventional wisdom was considered a Good Thing here.
No, I don't believe I do. I wouldn't want to further offend you.
My hypothesis is pretty simple. You are using the wrong numbers.
When I Googled, the first few hits I found suggested 0.3% per coital act as a lower bound on heterosexual transmissibility with the risks increasing by 1-2 orders of magnitude in case of genital ulcers and/or high viral loads. I don't think that it is particularly difficult to understand the epidemic spreading in Africa as an STD when these higher numbers are used.
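To get a feel for what per-act rates in this range imply, here is a quick sketch of how a per-act probability compounds over repeated exposures. The 0.3% baseline and the order-of-magnitude cofactor multipliers are the figures quoted above; treating acts as independent trials with a constant rate is a simplifying assumption of my own:

```python
# Probability of at least one transmission over n independent coital
# acts, given a constant per-act transmission probability p.
def cumulative_risk(p, n):
    return 1 - (1 - p) ** n

# 0.3% baseline per the figures above; 10x and 100x stand in for the
# "1-2 orders of magnitude" increase with ulcers / high viral load.
for p in (0.003, 0.03, 0.3):
    print(f"p = {p}: risk over 100 acts = {cumulative_risk(p, 100):.1%}")
```

Under the independence assumption, even the 0.3% baseline compounds to a noticeable cumulative risk over hundreds of acts, which is why it matters so much whether that figure is a lower or an upper bound.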
I did look at this study providing smaller numbers and this paper critiquing it, as well as this abstract mentioned in the wikipedia article. It was pretty clear to me that the kinds of low numbers you were using to argue against HIV being an STD are actually based on monogamous couples who are regularly examined by physicians and have been instructed in the use of condoms to prevent transmission. Those numbers don't apply to the most common cases of transmission, in which ulcers and other factors make transmission much more likely.
That is the hypothesis I was going to offer. When you suggested that you only had a 20-30% level of doubt of the orthodox position, I simply had no idea that it was such a strong and assured 30%.
Please see my response below concerning the pejorative "Denialist", and why such pejoratives have no place here.
You haven't offended me.
The Google hits you mention are just websites, not research papers - not relevant. There is no reason a priori to view the ~0.3% per-coital-act transmission rate as a lower bound; it could just as easily be an upper bound. You need to show considerably more evidence for that point.
The data on Wikipedia comes from the official data from the CDC, which in turn comes from a compilation of numerous studies. I take that to be the 'best' data from the majority position, and it overrides any other lesser studies for a variety of reasons.
This may be 'clear to you', but the Wikipedia data comes from a large CDC-sponsored review considering aggregates of other studies to get overall measures of transmission. This is the orthodox data! I highly doubt it has the simple methodological errors you claim. And even if you did prove that it has those errors, you would only be helping the skeptic case by showing methodological errors in the orthodox position - and the next set of data should then come from the heterodox camp.
If I can take the liberty of butting in...
Here's the table of data I believe you're referring to:
It cites references 76, 77 & 79, all of which turn out to be publicly available online. That's good, because now I can check the validity of Perplexed's claim that the studies backing your CDC data used samples of relatively healthy, well-off people who lack some risk factors.
I took ref. 76 first. It reports data from the "European Study Group on Heterosexual Transmission of HIV", which recruited 563 HIV+ people, and their opposite-sex partners, from clinics and other health centres in 9 European countries. (Potential sampling bias no. 1: HIV+ people in European countries are more likely to have access to adequate healthcare than many Africans and Americans.) It also says:
That fits Perplexed's claim that the study's couples were "regularly examined by physicians" and "instructed in the use of condoms to prevent transmission". It's not clear whether they were "monogamous", but the study did exclude "[c]ontacts reporting other risks of HIV infection and those with other heterosexual partners with major risks for HIV infection". I also see another sampling bias: if the man & the woman in a couple disagreed on their questionnaires, that couple was blocked from the study. I would think that such couples have a higher risk of transmitting HIV (as I'd guess they're more likely to be couples where someone's lying about their sexual history); if so, the study's more likely to lowball HIV transmission risks.
What about refs. 77 & 79? Where did their data come from? It's pretty clear that they used the same European Study Group data. From 77:
And 79:
To be explicit: the three studies you're citing (via the CDC) are based on one data set, and Perplexed's characterization of that data is essentially accurate. That adds credence to his claim that the transmission rates you're citing don't represent HIV transmission rates in other situations.
Incidentally, tables 2 & 3 of ref. 76 suggest that HIV transmission risk is not only associated with type of sex act, but also with the HIV+ partner's infection stage, especially (I did not expect this!) for male partners of HIV+ women. Maybe more evidence that there's more to HIV transmission risk than who's putting which organ in which orifice?
By all means. While you were writing this I was reading 76 and writing my own reply.
Ah, I hope he wasn't claiming this, because they certainly were anything but healthy: around 20% were hardcore IV drug users, 23% transfusion recipients, 10% hemophiliacs, with overall high rates of STDs.
I noticed the discrepancy about the sexual history part, but I didn't see how to factor it in. They are relying on self-reporting to make any of these links.
Err, no, because his characterization is based on the idea that these couples were using condoms frequently, but the study specifically shows that is not the case - see my reply to Perplexed.
If you want to characterize this study, fine, but don't pretend that these are "regular couples regularly using condoms" - that is not what the study claims.
And finally, this is the orthodox data. I mean, if you want to reject it... ok... and then we move to searching for other data which supports our relative positions, to the extent we have positions.
Perhaps it would be best to agree what the ideal study would be and precommit to that ideal in a sense? And then we could look for other studies that may be closer. Of course in the real world they will rarely be clear cut.
I quite agree that they weren't healthy in absolute terms; I just meant that they were relatively healthy for people with HIV. Compared to HIV+ people in much of the rest of the world, especially sub-Saharan Africa, I'd expect this European study's subjects to have (on average) better nourishment, better healthcare, stronger immune systems, less exposure to infectious disease, much less exposure to parasites, and a far lower rate of promiscuity & prostitution. I should've been more explicit that that was the sort of comparison I had in mind.
Looking at your reply, I think we disagree about whether or not Perplexed was hinting that the couples were consistently using condoms. I didn't think Perplexed was implying anything more than that most cases of HIV transmission involve people who weren't regularly reminded to use condoms. So I took his statement at face value, in which case it's surely true (unless European doctors have come up with a way of counselling couples "about the risk of HIV infection and safer sex practices" that doesn't involve advising condom use!).
I don't believe Perplexed or I are pretending that these are normal couples who continually use condoms. I think it goes without saying that these weren't "regular couples" — after all, "regular couples" don't visit hospitals and clinics to get HIV infections checked out. Whom are you quoting?
I reject your interpretation of the data, rather than the data. The study probably gives a fair idea of transmission risks among faithful Western couples living in the 1980s/90s who regularly see doctors...but for that reason (among others) it's likely to underestimate transmission risks in other demographics.
I think you're probably right on that point. I suspect that looking at per-sex act transmission risks isn't going to be very enlightening about whether or not HIV causes AIDS. It would be better to have data from
(each bullet point getting more restrictive). I don't know if there are such data, but it would get us closer to the original question than big-picture arguments about transmission risks.
I generally agree with most of this except perhaps the last part - I doubt that promiscuity and prostitution vary that much between the 3rd and 1st world.
I think he was implying that the reminders led to condom use, but this was in fact not the case according to the study itself (possibly because they excluded many of the condom users; some aspects of the study's design are not all that clear to me).
Not quoting, just paraphrasing. He was implying that the heterosexual couples receiving counseling were not indicative of a typical HIV hetero population, but the study designers of course realized that and were at least attempting to gather representative data.
Ok, whether HIV causes AIDS is a larger topic. My original point was just about the orthodox claim that HIV is sexually transmitted, which I believe is rather obviously bogus according to the orthodox's own data. I hope you can see how the orthodox could go wrong there and some of the political factors at work.
As to the larger HIV == AIDS issue, I largely agree with your ideal data criteria, but one potential issue is whether we are comparing HIV to the null hypothesis or to some other hypothesis. I don't think any reasonable skeptic claims that HIV is not at least correlated with AIDS - Robert Gallo may be many things, but he is probably not stupid, nor a charlatan.
So it would be better to compare the orthodox HIV hypothesis vs. the Drug/Lifestyle Hypothesis (which predated HIV). One immediate concern is that one must take care to define AIDS reasonably, without circular reference to HIV (which precludes some data).
The next concern would be that either way, previously healthy people don't get HIV or AIDS, in reality or according to either theory. The risk groups are all unhealthy in various ways.
All that being said, Duesberg does indeed provide data very close to what you are proposing. There are some groups of HIV+ who, for whatever reason, have refused mainstream treatment. There aren't many such people, but he cites a study about a group in Germany - they are called long term non-progressors (which is kind of funny when you think about it - AIDS is progress?).
Anyway, this study is small - only 30-40 people, IIRC - but it is long-lasting and only a handful have died. He calculates their death rate as measurably lower than the death rate of HIV+ treated patients, and uses this as a major piece of evidence.
It's here... that part is on page 402 (it's a large journal excerpt or something - not really that long).
An interesting read overall, would like to read a good rebuttal.
Quite possibly, but note that I was comparing the subjects in the European study with the rest of the world, rather than all of the 1st world. The study's screening procedures probably cut out quite a few people who have a lot of sex.
I think I do. (I hope I do!) Still, I do see the orthodox belief that HIV can be transmitted sexually as being compatible with the CDC numbers. The CDC transmission rates are surely below the average real-world HIV transmission rate (due to the nature of the European study sample), and there are features of the data that are easier to explain if we acknowledge that HIV's sexually transmitted: the condom-using couples had lower HIV transmission rates than the non-condom users, men who (claimed to have) had period sex with HIV+ women were at higher risk of transmission than men who (claimed to have) avoided period sex, and so forth. So I continue to disagree that the HIV-is-an-STI view is "obviously bogus according to the orthodox's own data".
It might even preclude the Duesberg/Koehnlein data you link. Page 402 says the study's of "AIDS patients", and it's not clear from the immediate context what definition of "AIDS" was used for the study. All 36 of the patients (you were right about the study's size!) are listed as being HIV+, which suggests to me that the AIDS diagnoses were made (at least partly) based on HIV+ status, as is standard practice.
I would have thought that healthy people are capable of getting HIV? Getting pricked with an HIV-infected needle works, as does sexual transmission. A lot of people in high-risk groups are unhealthy, of course, but there are surely unlucky people who get HIV without prior major illness.
I looked up long-term non-progressors on Wikipedia (not the most reliable source, but anyway), and it looks like many long-term non-progressors have genetic traits that make them better able to resist HIV, or have a weaker form of HIV.
I also saw that the group in Germany Duesberg's talking about all come from Kiel, a relatively small city (population about 240,000). I'm wondering whether the people living there could be more likely to have HIV-resistant genes. Or maybe the form of HIV circulating there is less virulent? (Or both?)
I should say upfront that there's no way I'm rebutting all 30 pages of the article (I really doubt the game's worth the candle), but I can comment a bit more on the little German study.
The first thing that jumps out at me is the lack of detail. I'm curious about how Koehnlein discovered the subjects for the study (personal contact?) and whether they included all of the eligible patients they found. I also wonder how Koehnlein followed up patients, and how regularly. How rigorously do they track the patients to make sure they're staying off HIV drugs & illicit drugs? How often do they check on them to see whether they're still alive? When was the last follow-up? The article's dated mid-2003, but it looks like Koehnlein's added no new subjects to the study since 2000, and the latest update is from 2001 (when the 3 dead patients died). It would be very interesting to know how many of the remaining 33 patients are still alive 7-9 years on. I looked for later publications by Koehnlein on his study and didn't find any (which is a bit of a red flag in itself).
I'm also not sure that some of these "AIDS patients" had AIDS in the first place. This looks like the CDC's definition of AIDS: typically, you have to be HIV+, and have either a CD4 count below 200 ("or a CD4+ T-cell percentage of total lymphocytes of less than 15%") or one of a list of AIDS-defining illnesses. (You might dispute using HIV+ status as part of the definition of AIDS, but it makes no difference with Koehnlein's subjects because they all had HIV.) The table doesn't offer enough information about CD4 T-cell percentages to check whether they're less than 15%, but it does give CD4 counts and list what appear to be the patients' "initial AIDS-indicator symptoms".
So I look at case 1. His CD4 count is 256. His initial symptom was "Herpes zoster". The Wikipedia/CDC list of AIDS-defining diseases does not include herpes zoster, only chronic ulcers due to herpes simplex. It's not clear that the patient actually had AIDS when Koehnlein included him in the study. Moving on to case 2, she's marked as asymptomatic and no CD4 count is given. What was the basis for her AIDS diagnosis?
I sorted the 36 cases into 3 groups: a "questionable diagnosis" group (patient was asymptomatic/had symptoms clearly not on the AIDS-defining illness list, and their CD4 count was explicitly given as >200), a "definite AIDS" group (patient had an illness clearly on the AIDS-defining list, and/or a CD4 count explicitly <200), and an "unsure" group (cases that didn't fit the other two groups). I put cases 1, 8, 9, 17, 19, 20, 23, 24, 25, 27 & 35 in the "questionable" group; cases 3, 4, 6, 12, 13, 21, 30, 31, 32 & 36 in the "definite AIDS" group; and cases 2, 5, 7, 10, 11, 14 (they had pneumonia, but only recurrent pneumonia and/or PJP is AIDS-defining), 15, 16 (they had toxoplasmosis, but it's not said whether it was in the brain), 18, 22, 26, 28, 29, 33 & 34 in the "unsure" group.
So the Koehnlein study's effective sample size & death rate seem to be sensitive to how rigorously one defines AIDS. As I see it, only 10 of the 36 cases unambiguously have AIDS, and counting deaths in that subgroup leads to a death rate of 20% as opposed to "only 8%". I think Koehnlein's data are interesting, but there are a multitude of reasons not to take Duesberg's 8% vs. 63% comparison at face value.
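To make the sensitivity concrete, here is the arithmetic behind those two percentages. The group sizes are as tallied above; placing 2 of the 3 deaths in the "definite AIDS" group is what a 20% rate in a group of 10 implies:

```python
# Death-rate arithmetic from the case sorting above.
questionable = 11  # questionable-diagnosis cases
definite = 10      # cases clearly meeting the AIDS definition
unsure = 15        # everything else
total = questionable + definite + unsure  # all 36 cases

deaths_total = 3     # deaths reported in the study
deaths_definite = 2  # implied by a 20% rate in the definite group

print(f"naive death rate:   {deaths_total / total:.0%}")
print(f"definite-AIDS rate: {deaths_definite / definite:.0%}")
```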
I would have given more credence to this view at the beginning of this whole inquiry, but in another branch several other posters found some large meta-analysis studies, and lo and behold, they confirm and agree with the old CDC European study. I discuss that here.
Of note is that the infection rate in 1st world countries agrees with the original CDC European study, while the infection rate in Africa/the 3rd world appears to be 3-6 times higher. Metastudies which mix 1st and 3rd world results get rates somewhere in between.
Some of these metastudies covered thousands of individual studies, and say what we will about them, I think they nail down the real-world transmission rates - and the 1st world rates are just as low as I originally quoted (or lower).
Effects like this surely can increase transmission rates in specific instances, but for epidemiological modelling we are interested in the average rates - and note, as I analyzed elsewhere, the original CDC European study does attempt to control for condom use; it intends to show infection rates for unprotected sex. I don't think you can so easily dismiss all these studies and the work that has gone into computing these transmission rates.
This is certainly a possibility and fits what we know with viruses - variable genetic resistance is to be expected.
However, what is important is how one samples and when. If you take a sampling of survivors years later, then sure you can expect to be finding survivors due to genetic resistance.
But if you sample a subset based only on the criterion that they refused medication after testing seropositive, then that is a very different sampling, and you should expect it to be largely uncorrelated with genetic resistance (unless you want to argue that people with genetic resistance are strongly expected to resist medication - but I hope you won't take that route).
You do bring up a potentially valid criticism:
Possibly, but I don't see a reason why we should expect this without specific evidence - from what I understand, HIV-1 virus variants spread diffusely within specific at-risk subgroups. It would help the case if the study had more widely distributed patients, and maybe there are other such studies, but it isn't strong evidence against. We can't expect many patients to have resisted medicating, and those that did would tend to be clustered geographically, in regions where some cluster of doctors were allowed to hold that view, resist medication for a long period of time, and study the patients. From what I understand, this was not allowed to happen in the States.
You raise some further methodological questions:
I don't know, and yes these are interesting questions, and it would be useful if there was a meta-study of all long-term survivors/non-progressors.
Yes, this would be interesting, but note that we shouldn't expect these people to have full life expectancy, in either theory - as seropositive status is clearly a marker for ill-health. The bigger question is does refusing medication increase lifespan? That is the central point.
Even if they all died after 12 years on average, that still may be better than typical, for example.
A follow-up would be interesting, but the lack thereof isn't necessarily a red flag. They are going to die at some point, and probably much earlier than seronegatives. The question is one of statistics.
As to your questioning of whether these are "AIDS patients", I find that rather irrelevant - we are only concerned with the fact that they tested positive for HIV. If HIV doesn't strongly cause AIDS, but medication does, then of course we shouldn't expect these medication-refusers to progress into AIDS and become AIDS patients - which is exactly what the study is showing. So I don't understand why you are trying to show that they are not AIDS patients; that's the whole point! You may be unknowingly arguing for the opposition (or perhaps I am confused about your position, or you have none).
All of this is consistent with the CDC statistics underestimating the general transmission rate. You write that the rate estimated from the European study "agrees with" meta-analyses of 1st world data, and that the 3rd world rate estimated by meta-analysis is higher still. So pooling the two meta-analytic results gives a global average rate greater than the 1st world average rates, i.e. averages greater than the CDC rates.
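The pooling point is just a weighted-average observation; a toy illustration, where every specific number is hypothetical except the 3-6x multiplier range quoted above:

```python
# If the 3rd-world per-act rate is k times the 1st-world rate, any
# mixture of the two populations averages out above the 1st-world
# figure alone. All specific numbers here are hypothetical.
p_first = 0.001  # hypothetical 1st-world per-act rate
k = 4            # multiplier within the 3-6x range quoted above
w_third = 0.5    # hypothetical fraction of pooled data from the 3rd world

pooled = (1 - w_third) * p_first + w_third * (k * p_first)
print(f"pooled rate {pooled:.4f} vs 1st-world rate {p_first:.4f}")
```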
I don't think I am dismissing these studies and the work. The bit of my comment you're quoting refers, after all, to secondary analyses in one of those studies. The point I'm trying to make by drawing attention to those analyses isn't something like "look, the transmission rates are higher if you don't use condoms, clearly they're high enough for HIV to spread through the population", but instead "associations between condom use and transmission rates, and between sex during menses and transmission rates, have a far higher likelihood in a model where HIV is an STI than in a model where it's not". It's much easier for me to explain why having sex with a woman at particular times in her menstrual cycle would correlate with HIV transmission if I presume HIV's sexually transmitted, which I interpret as evidence for [edited: I had a brain fart and originally wrote "against"] the view that HIV's an STI.
Don't worry, I'm not. I'm suggesting that because the sampled people all come from the same small geographic region, it's possible that genetic resistance and/or weaker HIV variants are more common among them.
The specific evidence I have in mind is the geographic restriction of the sample. A group of people from one place will tend to be more genetically similar than a worldwide sample, and will be more likely to share strains of a disease. I expect HIV-1 variants do spread diffusely in subgroups, but I don't think that rules out my point. Particular alleles of genes spread throughout humanity, but spatial proximity still correlates with genetic similarity among people. Sure, geographic restriction is hardly strong evidence of these things — a sample of people who live on the same street could quite easily contain just as much variety in genes that affect HIV resistance (or variety in HIV substrains) as a wider sample. But with geographic restrictions, the variance is likely to be less. (Notice also that the sample seems to be relatively racially homogeneous — only one of the 36 cases is described as black. That's more evidence of less genetic variance, though very weak evidence, as racial groupings don't represent much genetic variance.)
Yes, but you originally presented the study as "data very close to what [I am] proposing", and part of my proposal was that the study's subjects "are followed up regularly" for 20+ years. Koehnlein's study started in 1985, most of the subjects entered it in the 1990s or later, the latest update is from 2001, and the published report is from 2003. So most subjects don't seem to have had anything like a 20-year (or more) follow-up.
The bigger question we're looking at is whether HIV causes the complex of conditions we recognize as AIDS (and, before that, HIV transmission rates).
True, but the question is how much better than average their lifespan was, and the causes of death also matter. If the patients lived for many post-HIV years more than average, but most of them died of Kaposi's sarcoma, I would strongly suspect AIDS.
It doesn't mean the study is somehow wrong, but I see it as a warning sign. It's very unusual for someone to spend 16+ years on a unique, systematic study of untreated HIV patients, and then not publish it anywhere except as a one-page summary in the middle of a review article that I suspect was mostly written by someone else. I have a hunch that Koehnlein's unable to get the study published in full.
I can think of two reasons why it's very relevant. First off, if most of the subjects didn't have AIDS, that might well explain why their death rate's less than that of AIDS patients (and Duesberg & Koehnlein quite explicitly compare the sample's death rate to that of "German AIDS patients") — one dies of AIDS instead of HIV per se, and it normally takes years to go from being HIV+ to having AIDS. Secondly, Duesberg & Koehnlein say the study is of "AIDS patients"; if it turns out that there are people in the study who didn't have AIDS, D&K have made a specious comparison, and a false claim about the nature of the study. That would raise questions about how much I should trust their report of it.
Agreed, with the proviso that one would have to wait a long time to be sure that HIV didn't eventually progress to AIDS.
Disagreed. If you're agreeing with my suspicion that some of the people in Koehnlein's study didn't have AIDS, you're implicitly accepting my guess that the clinic symptoms and CD4 counts in the table are those observed for each subject when they entered the study, because that forms the basis for my suspicion. And if you believe that, it follows that you can only infer whether a subject had AIDS when they entered the study, and not whether they later developed AIDS.
Well, it's possible I am. But see above!
Well the best cure for doubt is to actually read the papers referenced. For example, following the links from your reference to the abstract of the actual paper which generated the numbers brought me to this abstract. I think you should read it.
The issue isn't methodological errors in the studies - the studies clearly describe the methodologies used and their limits. The issue is trying to use the numbers in ways that they are not designed to be used. It is not the orthodox folks that are doing that. It is you that is doing that.
Ah, so do the numbers come with little instruction manuals that say "CAN ONLY BE USED TO SUPPORT ORTHODOX POSITION"? Haha sorry, couldn't resist.
OK, I'm game. I will now look into the CDC studies, but let's be clear on the trace:
It starts with the Wikipedia chart, whose footnote 80 links here, which points to this, which in turn lists refs 76, 77, and 79 for P/V sex. These are (in order):
76: Comparison of female to male and male to female transmission of HIV in 563 stable couples
77: Reducing the risk of sexual HIV transmission: quantifying the per-act risk for HIV on the basis of choice of partner, sex act, and condom use
79: European Study Group on Heterosexual Transmission of HIV. Heterosexual transmission of HIV: variability of infectivity throughout the course of infection
I'll comment more after I have read these.
You will find that #77, the Varghese et al paper, can be found online by Googling the title, and that it gets its 0.1% number for heterosexual transmission from the paper whose abstract I recommended.
I'm pretty sure you will find that all of these papers involve monogamous couples. If you give it some thought, you will realize that there is just about no other way to come up with a solid empirically-based number. And I again urge you to read that abstract - particularly the bottom third.
Haha, ok, this is kinda funny: the abstract you linked to, which is the source of the data in Varghese (77), is just 76! - the couple comparison from the European Study Group which I linked to and have been trying to parse. 79 appears to be another chapter from that same book, but I haven't looked at it yet.
So before we get into 76 - the source of the stat you don't like in 77 - I need to back up and remember your original claim about the data:
Your implied point appears to be that couples in this study used condoms frequently. Surprisingly, this is not the case - only a small number of couples (out of 500+) reported consistent condom use,
Contraceptive behaviour:
No regular contraceptive: 12 (10/86) | 20 (43/212)
Oral contraceptive: 18 (7/40) | 23 (26/114)
Intrauterine device: 10 (1/10) | 28 (7/25)
Condom*: 0 (0/11) | 18 (6/33)
These people were using other methods of birth control more than condoms.
and they further removed consistent condom users from the data:
No, it is not. The reference leading to the abstract is the absolute risk described in the first paragraph of page 40 of Varghese. It is reference 28 of Varghese.
You are apparently following the references (21) appearing in Table 1 of Varghese. But these are relative risks (relative to fellatio). Not at all what I meant.
My point about condom use came from an earlier reference that I had supplied which discusses a study that took place in Uganda in 2005. And I didn't say that they used condoms frequently, I said that they were monogamous couples who got regular medical inspections and had been counseled regarding condoms. And in this study, as I recall, there was no exclusion of condom users.
Ah, no, I haven't even read Varghese (77) yet. You looked at that one and posted a link to an abstract - this abstract - which comes from the same European Study Group and has the same numbers (563 couples) as 76. So the abstract you wanted me to look at is just 76; it's all the same source.
Satt also pointed this out in another reply here.
I'm not really concerned (at the moment) with what may or may not be happening in Uganda. The CDC data comes from this European Study Group, that is the original data in question - (the data you questioned).
I'm confused - the table says "assuming no condom use". So either you're talking about other data, or they were able to filter the data.
What table in what document says that?
I'm talking about the data I said I am talking about: this paper and this piece of primary research which states
It didn't occur to me either, and seemed strange. That word does have strong negative connotations in my mind, but only because I associate it with stupid people denying true things and refusing to update on evidence. I thought the comment it referred to was incorrect, but it seemed more like honest confusion of the sort that clarification would dispel than denialism.
Some history, then, of exactly why the word conjures such strong negative connotations is in order.
Look at the Wikipedia entry for "denialism". It originates with Holocaust denialism, was then applied to skeptics of HIV==AIDS, and later to other areas.
Peter Duesberg, the leading HIV==AIDS skeptic, is a German of non-Jewish descent raised in Nazi-era Germany, so its use against him and his followers adds extra moral angst. It is just about the deepest insulting connotation one can invoke. It is a signal of stooping to the ultimate low: that, having run out of any remaining rational argument, one must invoke deep moral revulsion to stigmatize one's opponent.
In my view, the term is a serious Crime of Irrationality; it is an empty ad hominem and should be seen as a sign of great failure when one stoops to using it as a name-calling tactic against one's opponents.
That being said, I don't think Perplexed has this view, and that wasn't his intention. I am just giving background on why the word should not be used here.
Those who don't subscribe to HIV==AIDS, should just be called skeptics.
Do we call proponents of quantum loop gravity String Theory Denialists? It's ridiculous.
Should we call those who subscribe to HIV==AIDS Inquisitors, McCarthyists, or Witch-hunters?
I do.
Oh please. Stop trying to pretend you have the rationalist high ground here. You don't.