
MetaMed: Evidence-Based Healthcare

81 Post author: Eliezer_Yudkowsky 05 March 2013 01:16PM

In a world where 85% of doctors can't solve simple Bayesian word problems...

In a world where only 20.9% of the reported results that a pharmaceutical company tries to investigate for development purposes fully replicate...

In a world where "p-values" are anything the author wants them to be...

...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...

...there's also MetaMed.  Instead of just having “evidence-based medicine” in journals that doctors don't actually read, MetaMed will provide you with actual evidence-based healthcare.  Their Chairman and CTO is Jaan Tallinn (cofounder of Skype, major funder of xrisk-related endeavors), one of their major VCs is Peter Thiel (major funder of MIRI), their management includes some names LWers will find familiar, and their researchers know math and stats and in many cases have also read LessWrong.  If you have a sufficiently serious problem and can afford their service, MetaMed will (a) put someone on reading the relevant research literature who understands real statistics and can tell whether the paper is trustworthy; and (b) refer you to a cooperative doctor in their network who can carry out the therapies they find.

MetaMed was partially inspired by the case of a woman who had her fingertip chopped off, was told by the hospital that she was screwed, and then read through an awful lot of literature on her own until she found someone working on an advanced regenerative therapy that let her actually grow the fingertip back.  The idea behind MetaMed isn't just that they will scour the literature to find how the best experimentally supported treatment differs from the average wisdom - people who regularly read LW will be aware that this is often a pretty large divergence - but that they will also look for this sort of very recent technology that most hospitals won't have heard about.

This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report.  (Keeping in mind that a basic report involves a lot of work by people who must be good at math.)  If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately.  We can’t all have nice things like this someday unless somebody pays for it while it’s still new and expensive.  And the regular healthcare system really is bad enough at science (especially in the US, but science is difficult everywhere) that there's no point in condemning anyone to it when they can afford better.


I also got my hands on a copy of MetaMed's standard list of citations that they use to support points to reporters.  What follows isn't nearly everything on MetaMed's list, just the items I found most interesting.


90% of preclinical cancer studies could not be replicated:
http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

"It is frequently stated that it takes an average of 17 years for research evidence to reach clinical practice. Balas and Bohen, Grant, and Wratschko all estimated a time lag of 17 years measuring different points of the process." - http://www.jrsm.rsmjournals.com/content/104/12/510.full

"The authors estimated the volume of medical literature potentially relevant to primary care published in a month and the time required for physicians trained in medical epidemiology to evaluate it for updating a clinical knowledgebase.... Average time per article was 2.89 minutes, if this outlier was excluded. Extrapolating this estimate to 7,287 articles per month, this effort would require 627.5 hours per month, or about 29 hours per weekday." 

One-third of hospital patients are harmed by their stay in the hospital, and 7% of patients are either permanently harmed or die: http://www.ama-assn.org/amednews/2011/04/18/prl20418.htm

(I emailed MetaMed to ask for the actual bibliography for the following citations, since that wasn't included in the copy of the list I saw.  I already recognize some of the citations having to do with Bayesian reasoning, which makes me fairly confident of the others.)

Statistical Illiteracy

Doctors often confuse sensitivity and specificity (Gigerenzer 2002); most physicians do not understand how to compute the positive predictive value of a test (Hoffrage and Gigerenzer 1998); a third overestimate benefits if they are expressed as positive risk reductions (Gigerenzer et al 2007).
Physicians think a procedure is more effective if the benefits are described as a relative risk reduction rather than as an absolute risk reduction (Naylor et al 1992).
Only 3 out of 140 reviewers of four breast cancer screening proposals noticed that all four were identical proposals with the risks represented differently (Fahey et al 1995).
60% of gynecologists do not understand what the sensitivity and specificity of a test are (Gigerenzer et al 2007).
95% of physicians overestimated the probability of breast cancer given a positive mammogram by an order of magnitude (Eddy 1982); a worked example of the calculation follows this list.
When physicians receive prostate cancer screening information in terms of five-year survival rates, 78% think screening is effective; when the same information is given in terms of mortality rates, 5% believe it is effective (Wegwarth et al, submitted).
Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test (Bramwell, West, and Salmon 2006).
Sixteen out of twenty HIV counselors said that there was no such thing as a false positive HIV test (Gigerenzer et al 1998).
Only 3% of questions in the certification exam for the American Board of Internal Medicine cover clinical epidemiology or medical statistics, and risk communication is not addressed (Gigerenzer et al 2007).
British GPs rarely change their prescribing patterns and when they do it’s rarely in response to evidence (Armstrong et al 1996).
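
To make the base-rate point concrete, here is a minimal sketch of the calculation behind the Eddy (1982) mammogram item above. The numbers are illustrative stand-ins roughly in the range used in that literature (about 1% prevalence, 80% sensitivity, 10% false positive rate), not Eddy's exact figures:

    # Illustrative Bayes calculation for the mammogram example above.
    # These numbers are approximate stand-ins, not Eddy's exact figures.
    prevalence = 0.01        # P(cancer) among women screened
    sensitivity = 0.80       # P(positive test | cancer)
    false_positive = 0.10    # P(positive test | no cancer)

    p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
    ppv = sensitivity * prevalence / p_positive   # P(cancer | positive test)

    print(f"P(cancer | positive test) = {ppv:.1%}")   # about 7.5%

An answer of around 75% - roughly the test's sensitivity - overestimates the true figure by an order of magnitude, which is the kind of error the Eddy item describes.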

Drug Advertising

Direct-to-consumer advertising by pharmaceutical companies, which is intended to sell drugs rather than to educate, often does not contain information about a drug's success rate (only 9% did), alternative methods of treatment (29%), behavioral changes (24%), or the treatment duration (9%) (Bell et al 2000).
Patients are more likely to request advertised drugs and doctors to prescribe them, regardless of their misgivings (Gilbody et al 2005).

Medical Errors

44,000 to 98,000 patients are killed in US hospitals each year by documented, preventable medical errors (Kohn et al 2000).
Despite proven effectiveness of simple checklists in reducing infections in hospitals (Pronovost et al 2006), most ICU physicians do not use them.
Simple diagnostic tools which may even ignore some data give measurably better outcomes in areas such as deciding whether to put a new admission in a coronary care bed (Green and Mehr 1997).
Tort law often actively penalizes physicians who practice evidence-based medicine instead of the medicine that is customary in their area (Monahan 2007).
Out of 175 law schools, only one requires a basic course in statistics or research methods (Faigman 1999), so many judges, jurors, and lawyers are misled by nontransparent statistics.
93% of surgeons, obstetricians, and other health care professionals at high risk for malpractice suits report practicing defensive medicine (Studdert et al 2005).

Regional Variations in Health Care

Tonsillectomy rates vary twelvefold between the counties in Vermont with the highest and lowest rates of the procedure (Wennberg and Gittelsohn 1973).
Fivefold variations in one-year survival from cancer across different regions have been observed (Quam and Smith 2005).
Fiftyfold variations in the rate at which people receive drug treatment for dementia have been reported (Prescribing Observatory for Mental Health 2007).
Rates of certain surgical procedures vary tenfold to fifteenfold between regions (McPherson et al 1982).
Clinicians are more likely to consult their colleagues than medical journals or the library, partially explaining regional differences (Shaughnessy et al 1994).

Research

Researchers may report only favorable trials, only report favorable data (Angell 2004), or cherry-pick data to only report favorable variables or subgroups (Rennie 1997).
Of 50 systematic reviews and meta-analyses on asthma treatment, 40 had serious or extensive flaws, including all 6 associated with industry (Jadad et al 2000).
Less high-tech knowledge and applications tend to be considered less innovative and are often ignored (Shi and Singh 2008).

Poor Use of Statistics In Research

Only about 7% of major-journal trials report results using transparent statistics (Nuovo, Melnikow and Chang 2002).
Data are often reported in biased ways: for instance, benefits are often reported as relative risks (“reduces the risk by half”) and harms as absolute risks (“an increase of 5 in 1000”); absolute risks seem smaller even when the risk is the same (Gigerenzer et al 2007).
Half of trials inappropriately use significance tests for baseline comparison; 2/3 present subgroup findings, a sign of possible data fishing, often without appropriate tests for interaction (Assmann et al 2000).
One third of studies use mismatched framing, where benefits are reported one way (usually relative risk reduction, which makes them look bigger) and harms another (usually absolute risk reduction, which makes them look smaller) (Sedrakyan and Shih 2007).
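
As a concrete illustration of the framing effect in the last two items, here is a minimal sketch of one hypothetical trial result expressed both ways, using numbers in the same ballpark as the quoted phrases ("reduces the risk by half", "5 in 1000"):

    # Illustrative only: one hypothetical trial result expressed two ways.
    control_event_rate = 0.010   # 10 in 1000 untreated patients have the event
    treated_event_rate = 0.005   # 5 in 1000 treated patients have the event

    arr = control_event_rate - treated_event_rate   # absolute risk reduction
    rrr = arr / control_event_rate                   # relative risk reduction

    print(f"Absolute risk reduction: 5 in 1000 ({arr:.3f})")
    print(f"Relative risk reduction: {rrr:.0%} ('cuts the risk in half')")

Reporting the benefit as "50%" and the harm as "5 in 1000" uses the same arithmetic but leaves very different impressions, which is exactly the mismatched framing described above.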

Positive Publication Bias

Positive publication bias overstates the effects of treatment by up to one-third (Schulz et al 1995).
More than 50% of research is unpublished or unreported (Mathieu et al 2009).
In ten high-impact medical journals, only 45.5% of trials were adequately registered before testing began; of these 31% show discrepancies between outcomes measured and published (Mathieu et al 2009).

Pharmaceutical Company Induced Bias

Studies funded by the pharmaceutical industry are more likely to report results favorable to the sponsoring company (Lexchin et al 2003).
There is a significant association between industry sponsorship and both pro-industry outcomes and poor methodology (Bekelman and Kronmal 2008).
In manufacturer-supported trials of non-steroidal anti-inflammatory drugs, half the time the data presented did not match claims made within the article (Rochon et al 1994).
68% of US health research is funded by industry (Research!America 2008), which means that research that leads to profits for the health care industry tends to be prioritized.
71 out of 78 drugs approved by the FDA in 2002 are “me too” drugs that are more profitable because of the patent but not substantially different from existing medication (Angell 2004).
“Seeding trials” by pharmaceutical companies promote treatments instead of testing hypotheses (Hill et al 2008).
Even accurate research may be misreported by pharmaceutical company advertising, including ads in medical journals (Villanueva et al 2003).
In 92% of cases, pharmaceutical leaflets distributed to doctors have data summaries that either cannot be verified or inaccurately summarize available data (Kaiser et al 2004).



I don't plan on becoming seriously sick, but if I do, I think I'll check in with MetaMed just to make sure nobody is ignoring the research results showing that you shouldn't feed the patient rat poison.

Comments (176)

Comment author: DataPacRat 05 March 2013 06:45:26PM 31 points [-]

Once MetaMed has been paid for and done a literature search on a given item, will that information only be communicated to the individual who hired them, or will it be made more widely available?

Comment author: wedrifid 05 March 2013 08:46:20PM 15 points [-]

Once MetaMed has been paid for and done a literature search on a given item, will that information only be communicated to the individual who hired them, or will it be made more widely available?

A related question: Assuming that the information remains private (as seems to be the most viable business model) will the company attempt to place restrictions on what the clients may do with the information? That is, is the client free to publish it?

Comment author: alyssavance 05 March 2013 08:59:33PM 31 points [-]

Clients are free to publish whatever they like, but we are very strict about patient confidentiality, and do not release any patient information without express written consent.

Comment author: Viliam_Bur 06 March 2013 10:21:05AM 9 points [-]

I like the idea of clients being free to publish anything... but what will you do if they misrepresent what you said, and claim they got the information from you? It could be an honest mistake (omitting part of the information that did not seem important to them, but which in fact changes the results critically), oversimplification for the sake of popularity ("5 things you should do if you have cancer" for a popular blog), or outright fraud or mental illness. For example, someone could use your services and in addition try some homeopathic treatment, and at the end they would publish your advice edited to include a recommendation for homeopathy.

So there should be a rule like: "either publish everything verbatim... or don't mention our name". (I guess you probably already have it, but I say this in case you don't.)

Comment author: pinyaka 06 March 2013 02:27:10PM 5 points [-]

I assume that means that you won't be publishing your findings stripped of the client's identifying information?

Comment author: wedrifid 05 March 2013 09:03:16PM 4 points [-]

Clients are free to publish whatever they like, but we are very strict about patient confidentiality, and do not release any patient information without express written consent.

Oh, Tom is involved too!

Thank you for responding to our questions. I was curious.

Comment author: MichaelVassar 07 March 2013 07:08:06AM 9 points [-]

We won't publish anything, but clients are free to publish whatever they wish to in any manner that they wish.

Comment author: freyley 05 March 2013 06:40:10PM *  24 points [-]

caveats: they're new; it's hard to do what they're doing; they have to look serious; this is valuable the more it's taken seriously.

They have really wonderful site design/marketing...except that it doesn't give me the impression that they will ever be making the world better for anyone other than their clients. Here's what I'd see as ideal:

  • They've either paid the $5k themselves, a drop in the bucket of their funding apparently, and put up one report as both a sample and proof of their intent to publish reports for everyone, or (better) gotten a client who's had a report to agree to allow them to release it.
  • This report, above, is linked to from their news section and there's a prominent search field on the news section (ok), or there's a separate reports section (better)
  • The news section has RSS (or the reports section has RSS, or both, best)

From a more profiteering viewpoint, they could offer a report for either $5k for a private report, or $3k for a public report, with a promise to charge $50 for the public report until they reach $5k (or $6k, or an internal number that isn't unreasonable) and then release it.

Most people who are seriously sick tend to get into a pretty idealistic mode, in my experience, and would actually be further convinced by putting their $5k both to help themselves and to help others, and while sure, they could release the report themselves, metamed has a central, more trustable platform. If they want me to believe that they're interested in doing that kind of thing, it'd be nice if they had something up there to show me that they hope to.

On preview, I realize that the easy objection is that these are personalized reports, and data confidentiality is important. They obviously will only be able to publish pieces of reports that are not personal, and this is obviously a more costly thing than just tossing a pdf up on a website. Hm.

All of that said, they look like a really exciting company, I really hope they do well (and then take my advice =).

Comment author: ChristianKl 06 March 2013 03:34:23AM 7 points [-]

A patient might profit from open publishing of the report. If MetaMed starts getting a reputation for good reports, they will get read by medical experts. If an expert reads something that's wrong in a report, it would be great if there were a way for that expert to write a comment under the report. That comment could be very helpful to the patient.

Comment author: NancyLebovitz 05 March 2013 09:05:31PM 4 points [-]

I'm not sure that's the best scheme, but I'm hoping MetaMed finds some way of taking their findings public.

Comment author: ShannonFriedman 06 March 2013 05:20:59PM 1 point [-]

They're all hardcore x-risk reductionists and good people. I would be very surprised if they didn't do everything they could to help people in any way they could as soon as it made sense, and they weren't sacrificing long term goals for shorter term ones.

Comment author: Grif 06 March 2013 05:54:22PM *  2 points [-]

I suspect that later, when they have more presence in the public and expert view, they will open up new payment options to increase visibility of their reports, but only after they have employed significantly more researchers and run them through rigorous epistemic ethics training. Otherwise, there's little stopping a Big Pharma company from hiring Metamed for a $3,000 report, and then posting a biased summary of the report on their news page, along with an "APPROVED BY METAMED" sticker. Even worse if Metamed considers the "approval sticker" to be useful to spreading awareness of evidence-based medicine. The potential for corruption is just too high.

Comment author: xv15 05 March 2013 06:10:09PM *  14 points [-]

Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test

Say the doctor knows false positive/negative rates of the test, and also the overall probability of Down syndrome, but doesn't know how to combine these into the probability of Down syndrome given a positive test result.

Okay, so to the extent that it's possible, why doesn't someone just tell them the results of the Bayesian updating in advance? I assume a doctor is told the false positive and negative rates of a test. But what matters to the doctor is the probability that the patient has the disorder. So instead of telling a doctor, "Here is the probability that a patient with Down syndrome will have a negative test result," why not just directly say, "When the test is positive, here is the probability of the patient actually having Down syndrome. When the test is negative, here is the probability that the patient has Down syndrome."

Bayes theorem is a general tool that would let doctors manipulate the information they're given into the probabilities that they care about. But am I crazy to think that we could circumvent much of their need for Bayes theorem by simply giving them different (not necessarily much more) information?

There are counterpoints to consider. But it seems to me that many examples of Bayesian failure in medicine are analogously simple to the above, and could be as simply fixed. The statistical illiteracy of doctors can be offset so long as there are statistically literate people upstream.

Comment author: CCC 05 March 2013 06:32:07PM 7 points [-]

This stops working in the case where some of the people upstream can't be trusted. Consider the following statement:

"The previous test, if you have a positive result, means that the baby has a 25% chance of having Down syndrome, according to the manufacturer. But my patented test will return a positive result in 99% of cases in which the baby has Down syndrome."

Comment author: xv15 05 March 2013 07:15:56PM 1 point [-]

"False positive rate" and "False negative rate" have strict definitions and presumably it is standard to report these numbers as an outcome of clinical trials. Could we similarly define a rigid term to describe the probability of having a disorder given a positive test result, and require that to be reported right along with false positive rates?

Seems worth an honest try, though it might be too hard to define it in such a way as to forestall weaseling.

Comment author: Michaelos 05 March 2013 07:29:14PM *  5 points [-]

If I understand the following Wikipedia page correctly:

http://en.wikipedia.org/wiki/Positive_predictive_value

The term you are requesting is positive predictive value; negative predictive value is the corresponding term for the probability of not having a disorder given a negative test result.

It also points out that these are not solely dependent on the test, and also require a prevalence percentage.

But that being said, you could require each test to be reported with multiple different prevalence percentages:

For instance, using the above example of Down syndrome, you could report the results by using the prevalence of Down syndrome at several different given maternal ages. (Since the prevalence of Down syndrome is significantly related to maternal age.)
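
A minimal sketch of the kind of table this suggests - the positive predictive value of one hypothetical screening test at several assumed prevalences (standing in for different maternal ages). The sensitivity and false positive rate below are made up for illustration:

    # Hypothetical test characteristics, for illustration only.
    sensitivity = 0.90           # P(positive | disorder)
    false_positive_rate = 0.05   # P(positive | no disorder)

    for prevalence in (0.001, 0.005, 0.01, 0.02):
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + false_positive_rate * (1 - prevalence))
        print(f"prevalence {prevalence:.1%} -> P(disorder | positive) = {ppv:.1%}")

The same positive result means very different things at different prevalences, which is why the predictive value cannot be reported as a single number for the test.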

Comment author: xv15 06 March 2013 02:54:34AM 0 points [-]

thanks, PPV is exactly what I'm after.

The alternative to giving a doctor positive & negative predictive values for each maternal age is to give false positive & negative rates for the test plus the prevalence rate for each maternal age. Not much difference in terms of the information load.

One concern I didn't consider before is that many doctors would probably resist reporting PPV's to their patients because they are currently recommending tests that, if they actually admitted the PPV's, would look ridiculous! (e.g. breast cancer screening).

Comment author: buybuydandavis 06 March 2013 01:06:09PM 3 points [-]

Okay, so to the extent that it's possible, why doesn't someone just tell them the results of the Bayesian updating in advance?

Because then they would be assuming they had all relevant prior information for that particular patient. They don't.

For example, the age of the mother, the age of the father, their genes, where they've lived, what chemicals they've been exposed to, etc. are all factors the manufacturer has no knowledge of, but the doctor might. Naturally, it would be helpful for the company to make a diagnostic model of all known relevant factors available online, updated as new information comes in, but given the regulatory and legal climate (at least here in the US), something so sensible is likely completely infeasible.

Comment author: xv15 06 March 2013 07:22:51PM 2 points [-]

Another alternative is to provide doctors with a simple, easy-to-use program called Dr. Bayes. The program would take as input:

  • the doctor's initial estimate of the chance the patient has the disorder (taking into account whatever the doctor knows about various risk factors)
  • the false positive and false negative rates of a test.

The program would spit out the probability of having the disorder given positive and negative test results.
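
A minimal sketch of what such a hypothetical "Dr. Bayes" tool could compute (the function name and all numbers here are made up for illustration):

    # Sketch of the hypothetical "Dr. Bayes" calculator described above.
    def dr_bayes(prior, false_positive_rate, false_negative_rate):
        """Return P(disorder | positive test) and P(disorder | negative test)."""
        sensitivity = 1.0 - false_negative_rate
        specificity = 1.0 - false_positive_rate

        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        p_negative = false_negative_rate * prior + specificity * (1 - prior)

        return (sensitivity * prior / p_positive,
                false_negative_rate * prior / p_negative)

    # Example with made-up numbers: 2% prior, 5% false positives, 10% false negatives.
    pos, neg = dr_bayes(prior=0.02, false_positive_rate=0.05, false_negative_rate=0.10)
    print(f"P(disorder | positive) = {pos:.1%}")   # about 27%
    print(f"P(disorder | negative) = {neg:.1%}")   # about 0.2%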

Obviously there are already tools on the internet that will implement Bayes theorem for you. But maybe it could be sold to doctors if the interface were designed specifically for them. I could see a smart person in charge of a hospital telling all the doctors at the hospital to incorporate such a program into their diagnostic procedure.

Failing this, another possibility is to solicit the relevant information from the doctor and then do the math yourself. (Being sure to get the doctor's prior before any test results are in). Not every doctor would be cooperative...but come to think of it, refusal to give you a number is a good sign that maybe you shouldn't trust that particular doctor anyway.

Comment author: prase 09 March 2013 09:12:54PM 1 point [-]

The incidence of the disease may be different for different populations while the test manufacturer may not know where and on which patients the test is going to be used.

Also, serious diseases are often tested for multiple times with different tests. What would a Bayes-ignorant doctor do with positives from tests A and B which are accompanied by the information "when test A is positive, the patient has a 90% chance of having the syndrome" and "when test B is positive, the patient has a 75% chance of having the syndrome"? I'd guess most statistically illiterate doctors would go with the estimate of the test done last.
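
Combining two results correctly requires going back to the tests' error rates and the base rate rather than averaging or taking the last headline figure. A minimal sketch, assuming the two tests are conditionally independent given disease status and using made-up numbers:

    # Sequential updating over two tests, A then B, assuming the tests are
    # conditionally independent given disease status. All numbers are made up.
    def update(prior, sensitivity, false_positive_rate):
        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        return sensitivity * prior / p_positive   # posterior after a positive result

    prior = 0.01                                   # baseline prevalence
    after_a = update(prior, sensitivity=0.90, false_positive_rate=0.05)
    after_b = update(after_a, sensitivity=0.80, false_positive_rate=0.10)

    print(f"after positive A:        {after_a:.1%}")
    print(f"after positive A and B:  {after_b:.1%}")

The combined posterior is neither test's headline figure and not an average of the two, which is exactly the step a Bayes-ignorant reader of the two summaries cannot take.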

Comment author: thomblake 08 March 2013 04:18:58PM 11 points [-]

Am I correct in thinking this is a continuation of the vanished company Personalized Medicine?

What's the story there?

Comment author: Larks 01 April 2013 12:29:29AM 1 point [-]

Companies often go under one name pre-launch, then adopt a new one so they can have a 'clean slate', publicity-wise.

Comment author: AlexSchell 05 March 2013 01:05:49AM 8 points [-]

you shouldn't feed the patient rat poison

Are you referring to warfarin here or am I imagining things?

Comment author: Eliezer_Yudkowsky 05 March 2013 01:16:08PM 0 points [-]

(Blinks.)

Hadn't thought of that. Actually, from what I understand, the status of warfarin is mostly okay now because they test for unusual sensitivity to it before they administer it?

Comment author: rhollerith_dot_com 05 March 2013 02:28:45PM *  5 points [-]

Prescription warfarin -- actually they might use related molecules these days with the same basic mechanism of action -- kills tens of thousands per year: they die from loss of blood because the warfarin-like molecules have inhibited the clotting mechanism more than intended. So for example, someone I was friends with died (in his sleep) this way.

Nevertheless, warfarin-like molecules have positive expected global utility because clots cause so much negative utility. So for example I was on it for about 6 years. You're supposed to get a blood test every 2 weeks for as long as you're on it.

Since Alex was a grad student in pharmacy, he'll probably correct any untruths in the above in the unlikely event there are any.

ADDED. "Unusual sensitivity" is the wrong way to describe it.

Comment author: Kawoomba 05 March 2013 06:01:23PM 2 points [-]

Of note, 23andme tests for genetically determined warfarin tolerance. I can copypasta their references on demand.

Comment author: J_C 06 March 2013 12:43:08PM 4 points [-]

This is actually not relevant, as warfarin dosage is determined by regular testing and dose adjustment. Your inborn metabolic rate is a very small effect compared to, for example, dietary preferences. (For those who are unfamiliar with the agent, warfarin antagonises the effects of vitamin K and so must be adjusted against dietary intake.)

Unfortunately there are many people in the health sector offering tests that, whilst factually correct, are irrelevant to a patient's care.

Comment author: Kawoomba 06 March 2013 02:03:45PM 2 points [-]

Someone tell the NHS, which is sponsoring a large trial to explore just that question.

The influence of the genotype varies from "typical sensitivity" to "may require greatly decreased warfarin dose". A range that is all but irrelevant, regular testing or no (think for example of the initial dosage).

Comment author: AlexSchell 05 March 2013 07:24:56PM *  0 points [-]

There is the option of being tested for polymorphisms of 1-2 of the most relevant metabolic enzymes, which account for some of the bleeding risk. My impression is that genotyping is not routinely done. Also, warfarin is risky in normal metabolizers (many many drug/food/disease interactions). I agree with Richard on the overall cost-benefit. (ETA: Though there are new expensive drugs approved for some of the same indications -- rivaroxaban and dabigatran -- that show some promise of being safer.)

Comment author: Michaelos 05 March 2013 01:51:43PM 7 points [-]

I am under the impression that IBM's Watson is being tested in a few hospitals for something which seems at least somewhat similar to Metamed, but I don't know enough about either to really judge well. Sample link to what I am referring to:

http://www.forbes.com/sites/bruceupbin/2013/02/08/ibms-watson-gets-its-first-piece-of-business-in-healthcare/

Is anyone familiar enough with both Metamed and Watson to help me compare and contrast the support provided by the two of them?

Comment author: Vaniver 06 March 2013 01:06:31AM *  5 points [-]

My view:

MetaMed is designed to extract the maximal amount of information out of the medical research community that exists today. Much of their value-add involves 'meta' evidence that is difficult for others to collect or interpret. (A doctor may be skilled at understanding how a part of the body works, but not how the medical research community actually works.) If you have a condition that is serious, rare, or strange enough that investing $5k in making medical attention more effective seems like a good idea, then you should talk to MetaMed. MetaMed is in no way a substitute for doctors; it's a way to find which doctors you should be talking to, and about what.

Watson can be a substitute for doctors. The key enabler for Watson is massive amounts of data on patients, and the statistical knowledge to make good use of that data. One of the things to remember here is that expert diagnosis systems have been around for a long time, but that if you're expert enough to prepare the relevant information for the computer you're probably expert enough to make an okay guess yourself, at which point using the computer doesn't seem very high priority. Eventually, Watson will enable patients and nurses to input most of the necessary information using natural language. It doesn't look like Watson is a substitute for medical research, but is rather a complement to it- if you have all the patient data together, you can build great models, and great models allow for superior discoveries. (Watson might eventually be able to automate parts of the hypothesis-generating and testing aspects of medical research, but I expect humans to have strong to moderate comparative advantage here for at least two decades.)

The short version: MetaMed makes better use of existing evidence than anyone else; Watson will generate a river of new evidence that will dramatically alter all parts of medicine.

Comment author: Kenny 31 March 2013 11:09:02PM *  1 point [-]

[Emphasis mine]

Eventually, Watson will enable patients and nurses to input most of the necessary information using natural language.

Dear Bayes, I hope not! I'd hope there's much more precise info that could be input instead.

Comment author: Vaniver 01 April 2013 01:20:53AM 1 point [-]

Dear Bayes, I hope not! I'd hope there's much more precise info that could be input instead.

The question is not what's most useful for the system, but what's most useful for the user.

Comment author: atucker 05 March 2013 05:38:32PM 8 points [-]

From what I understand, Watson is more supposed to do machine learning and question answering in order to do something like make medical diagnoses based on the literature.

MetaMed tries to evaluate the evidence itself, in order to come up with models for treatment for a patient that are based on good data and an understanding of their personal health.

They both involve reviewing literature, but MetaMed is actually trying to ignore and discard parts of the literature that aren't statistically/logically valid.

Comment author: tgb 05 March 2013 04:03:11PM 17 points [-]

Shouldn't there be a disclosure of some sorts that MetaMed shares some sponsors with MIRI?

Simple diagnostic tools which may even ignore some data give measurably better outcomes in areas such as deciding whether to put a new admission in a coronary care bed (Green and Mehr 1997).

Better outcomes than what? Typical doctors' diagnostics, I assume?

Comment author: Eliezer_Yudkowsky 05 March 2013 04:20:46PM 16 points [-]

Shouldn't there be a disclosure of some sorts that MetaMed shares some sponsors with MIRI?

I thought that was obvious by listing Jaan Tallinn as an x-risk funder and Peter Thiel, but yes, you're very correct that this should be explicitly stated on general principles. Will edit.

Comment author: tgb 05 March 2013 08:24:11PM 6 points [-]

Obvious to LW readers, perhaps, but this is the kind of article that would be good to share! Thanks for adding it.

Comment author: DanielLC 06 March 2013 02:00:55AM 0 points [-]

Shouldn't there be a disclosure of some sorts that MetaMed shares some sponsors with MIRI?

Why? That doesn't sound like much of a conflict of interest. Am I missing something?

If Metamed sponsored MIRI, that would definitely be an issue.

Comment author: Kaj_Sotala 06 March 2013 12:32:48PM *  15 points [-]

"Hey, MIRI folks, we're giving you a lot of money, how about you said a couple of nice words about this other company of ours?"

"Hey, our sponsors are funding another company, maybe if we helped promote that company and it ended up doing well, our sponsors would have more money to give us."

""Hey, our sponsors are funding another company, maybe if we helped promote that company our sponsors would be grateful and give us some extra money."

Comment author: EHeller 07 March 2013 04:14:28AM *  15 points [-]

I'm overall not impressed - looking at their reports, what do they offer that UpToDate (uptodate.com) doesn't? Sure, they advertise to patients, and UpToDate is aimed at institutions - but most hospitals I'm familiar with (and hence almost any specialist physician) are going to have access to UpToDate already. Also, in general, I'm willing to bet most doctors are in a better position to digest a research report than the average patient.

Sure, it's a good idea, but it's already being done in a very comprehensive fashion by a company that already has something like 90+% market penetration for academic hospitals. What is metamed's comparative advantage?

Comment author: Zian 10 March 2013 05:08:16AM 2 points [-]

I get the impression that Metamed also figures out likely diagnoses, which would be a pre-requisite for using Up to Date.

Comment author: EHeller 11 March 2013 05:54:50PM *  9 points [-]

That seems a very tricky proposition - for $5k you get a team of medical students and PhD students doing a literature search for 24 hours. Without a diagnosis to start with, and without an ability to order and receive test results (even if you suggest a test, will the results be back in 24 hours?), my prior would be that diagnosis would be extremely unlikely. Without a diagnosis, I'm not even sure how informative such a short literature search can be.

In the case of symptoms-just-started/no diagnosis, doesn't an experienced doctor at a hospital (with all the support staff a hospital implies - labs, etc.) have a pretty high competitive advantage? A priori, an experienced physician with diagnostic equipment and several days should outperform some medical students with journal access and 24 hours.

Also, this whole thread I find myself shilling for the status quo, but I should make it clear - hospitals scare the hell out of me. I've done statistical work for internal performance reviews for a large carrier in Southern California and found tons of alarming medical mistakes. I just don't see how Metamed solves any of the actual problems. Most mistakes are of the form: transfer orders go through, but the patient isn't moved (thus being on a floor under no one's care for X hours); the pharmacy doesn't deliver a necessary medication to a patient in a timely fashion; the pharmacy compounded a medication in the wrong fluid; an ICU doctor refuses to admit an ICU-level patient because he wants to go home early (leaving a critical patient in a DOU). Total misdiagnosis/mismanagement DOES happen, but it's not a leading-order type of mistake, and it's usually not because of a lack of access to relevant evidence-based medicine, but rather despite a lot of access to info. Also, at least at one large hospital group in Southern California, this mistreatment tends to be Bayesian in nature - most patients have things like heart attack/stroke, and so if you present with symptoms that fit even loosely into one of those large categories, you get treated for them. Such a system does a lot of good for the typical patient, but if you have a rare disease, it can send you totally down the wrong path. Trying to 'fix' this problem can do more harm than good (save the occasional rare-illness patient at the expense of dozens of more typical patients).

The best solution is to find a competent doctor (or even a competent ICU nurse) and pay them to be the point of contact all the hospital doctors have to go through before they are allowed to treat you, but hardly anyone can afford a concierge doctor.

Comment author: NancyLebovitz 13 March 2013 03:58:59PM 4 points [-]

This matches my feeling that a lot of what's wrong with (American?) medicine is the result of patients being viewed as low status.

What you've been seeing is what can go wrong at the hospital. I've heard a fair amount of anecdotes about sloppy diagnosis-- patients' symptoms being ignored for months or years of doctor visits. My impression is that doctors who listen and think are not terribly common.

Comment author: shminux 13 March 2013 06:40:45PM 3 points [-]

A typical family doctor's appointment is scheduled every 15 min where I am (except for annual checkups). This includes the time between patients for any necessary paperwork. So, not much you can do for people with rare symptoms in that setting. This is where MetaMed can help, since they spend 100 to 1000 times more time than that on each case and are looking specifically for edge cases and individualized treatment.

Comment author: NancyLebovitz 13 March 2013 08:01:09PM 4 points [-]

I agree that fifteen minutes minus paperwork is shockingly short. Still, there are doctors who do reasonably well at paying attention.

Most of my information is from the fat acceptance community, where there are a great many stories about doctors who just tell fat patients to lose weight*, regardless of symptoms. The typical stories seem to be either "I had to go to three or four doctors to find one who would listen" or "I must be lucky, I have a great doctor". I can't derive a strong opinion about the proportion of attentive doctors from this, though I wouldn't be surprised to find that it's under half.

*I've also seen a few stories from unusually thin people who were simply told to gain weight, and one from a man who (as far as I could tell) was lean and muscular, but was told to lose weight by a doctor who literally only looked at his BMI.

Comment author: NancyLebovitz 03 June 2013 11:37:40AM 1 point [-]

International list of fat-friendly medical professionals

A fat friendly professional does not necessarily avoid mentioning a client's weight, but he or she avoids making an issue of it, avoids lectures and humiliation, and respects the client's wishes with regard to weight discussions.

If a client asks not to be weighed, the request is acknowledged without complaint and taken into account automatically on future visits. (Note: there are a few cases where weighing is necessary, for example, when administering certain medications, chemotherapy, or anesthesia.)

If weight sometimes contributes to a problem, the professional may mention this, but also considers other diagnoses and recommends tests to determine the actual diagnosis if appropriate. If weight loss is a recommended treatment for a problem, the fat friendly professional may mention this, but at minimum will also recommend and prescribe other treatments. A fat friendly professional accepts a client's wish not to use weight loss as a treatment.

Ideally, the professional's office has available armless chairs, large blood pressure cuffs, large examination gowns, and other equipment suitable for fat people. If not, the office acknowledges the importance of such items when told.

Some fat friendly professionals believe that fat is not unhealthy. Others may believe that fat is unhealthy, but may acknowledge that weight loss doesn't work or is dangerous and/or that the client has a right to direct his or her own treatment.

Comment author: EHeller 18 March 2013 06:05:43AM *  3 points [-]

Sure- but if you have some rare symptom, any decent family doctor should say "go see a specialist" and refer you. You certainly aren't going to contact metamed every time you get sick, and for chronic conditions, a specialist (with journal and up-to-date access) is going to be the managing physician. Anything other than routine sniffles, vaccinations and check-ups and you probably have exceeded your family doctor's expertise.

The big problem for misdiagnosis at the family-med level is the horde of relatively rare diseases with common symptoms, but this is a very hard problem to solve. Having spent some time dealing with this as a statistical problem, even if you have a rare cluster of common symptoms, it's usually the case that you are more likely to have a rare presentation of a common disease than to have a rare disease.

Comment author: Kenny 01 April 2013 05:25:32PM 0 points [-]

I think MetaMed is intended to supplement the treatment advice you'd otherwise receive from specialists.

Comment author: Error 07 March 2013 04:07:46PM 0 points [-]

Presumably what you're paying for is for someone smarter than you to do the literature research for you; if I'm reading uptodate's product page correctly, they make the information available but it's up to you (or your physician) to sort through it and figure out what applies to you.

Comment author: EHeller 07 March 2013 11:49:12PM *  11 points [-]

UpToDate provides a summary of research organized by disease, i.e. for this disease, these treatments are recommended because of studies A, B, and C and physiological facts D, E, and F; there are contraindications and risks from these treatments because of x, y, and z. Unfortunately, I can't reproduce one of their reports here, but it's not just a huge literature dump; it's summarized, and treatment options are graded.

Looking at the metamed concierge report on Gout (linked to elsewhere in this thread) its formatting appears to be very much like an UpToDate report- the most recent literature is digested and summarized at a decently high level, but it doesn't strike me as better than (or even different from!) the UpToDate recommendations. Given that 90% of academic hospitals already have paid for UpToDate, and honestly in most cases it will be better for the physician to interpret the report, I can't see very much for metamed to bring to the table.

Also worth pointing out - the people who write the summaries for UpToDate are most often researchers in the field of the illness. Near as I can tell from their webpage, metamed's researchers are often medical students or non-medical PhD students (the point being that with metamed you are paying for something general called "expertise" and in many cases not actual field-relevant medical expertise).

Comment author: drethelin 09 March 2013 09:35:26PM 1 point [-]

What's the price difference between metamed and going out of your way to go to an academic hospital? Am I wrong in thinking most hospitals are not academic hospitals?

Comment author: EHeller 09 March 2013 11:24:46PM 1 point [-]

Probably depends on your insurance (i.e. if you are with an HMO, you'll be locked in to the network unless you get a referral). Outside of HMOs, I'm not aware of an insurance that has a copay difference between going to an academic or community hospital. If you go to a community hospital with something complicated, you'll almost certainly end up transferred to an academic referral center anyway.

Comment author: KnaveOfAllTrades 06 March 2013 07:34:30PM *  5 points [-]

Have enough people at MetaMed been influenced sufficiently by (meatspace) LessWrong/think 'similarly enough' to LW rationality that we should precommit to updating by prespecified amounts [edit: on the effectiveness of LW rationality] in response to its successes and failures?

Comment author: gwern 08 March 2013 04:58:47AM 11 points [-]

At a first glance, I'm not sure humans can update by prespecified amounts, much less prespecified amounts of the right quantity in this case: something like >95% of all startups fail for various reasons, so even if LW-think could double the standard odds (let's not dicker around with merely increasing effectiveness by 50% or something, let's go all the way to +100%!), you're trying to see the difference between... a 5% success rate and a 10% success rate. One observation just isn't going to count for much here.
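
To put a rough number on "one observation just isn't going to count for much", here is a minimal sketch using the 5% vs. 10% success rates above (illustrative only):

    # Bayes factors from observing a single startup outcome, under two hypotheses:
    # baseline startups succeed 5% of the time, "LW-boosted" startups 10%.
    p_success_baseline = 0.05
    p_success_boosted = 0.10

    bf_given_success = p_success_boosted / p_success_baseline                # 2.0
    bf_given_failure = (1 - p_success_boosted) / (1 - p_success_baseline)    # ~0.95

    print(f"Bayes factor if MetaMed succeeds: {bf_given_success:.2f}")
    print(f"Bayes factor if MetaMed fails:    {bf_given_failure:.2f}")

A success would only double the odds in favor of the stronger hypothesis, and a failure would barely move them at all.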

Comment author: MichaelVassar 07 March 2013 10:02:05AM 4 points [-]

Definitely, though others must decide the update size.

Comment author: dreeves 07 March 2013 09:09:34PM *  2 points [-]

Interesting question! Since it's an especially interesting question for those not fully in the in-crowd I thought it might be worth rephrasing in less technical language:

Is MetaMed comprised of LessWrong folks or significantly influenced by LessWrong folks, or that style of thinking? If so, this sounds like a great test of the real-world efficacy of LessWrong ideas. In other words, if MetaMed succeeds that's some powerful evidence that this rationality shit works! (And to be intellectually honest we have to also precommit to admitting that -- should MetaMed fail -- it's evidence that it doesn't.)

PS: Since Michael Vassar is involved it's safe to say the answer to the first part is yes!

Comment author: alicey 04 July 2014 06:51:14PM 0 points [-]

But, either way, not much evidence at all.

Comment author: ema 05 March 2013 06:08:05PM *  5 points [-]

According to their site Jaan Tallinn is not the CEO but chairman of the board. Zvi Mowshowitz is the CEO.

Comment author: thescoundrel 06 March 2013 03:57:20PM 4 points [-]

Wow- that is former MTG Pro Zvi, one of the best innovators in the game during his time. Awesome to see him involved in something like this.

Comment author: Mycroft65536 05 March 2013 10:52:08PM 1 point [-]

Jaan is also the CTO, I'm not sure if that's on the website.

Comment author: shminux 05 March 2013 05:13:06PM 9 points [-]

This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.) If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately.

A friend of mine suffers from debilitating effects of fibromyalgia, to the degree that she had to quit her job. She has tried all possible conventional and alternative medicine, with little success. She would certainly be prepared to pay $5000 or more for a near-certain relief, but not for yet another literature search of undetermined benefit. I'm guessing she is not the target audience for MetaMed?

Comment author: Michaelos 06 March 2013 03:03:13PM *  7 points [-]

Well, according to their FAQ, they offer a trial service. So your friend would not have to continue to a larger report if the trial seemed to be indicating low benefits of further research.

http://metamed.com/faqs

Can I try MetaMed before committing to a large purchase? Yes. If your case has a larger budget, we can start with a smaller, trial report to ensure quality of service, and confirm that MetaMed is the right choice for you.

And they also offer Financial aid - There is almost no information about this posted, other than that it exists. I guess you would have to call to determine more about how it worked. If your friend did qualify, that would be a substantial boon:

http://metamed.com/financial-aid

Once you have consulted with our medical team, if you need financial aid to help with the cost of your MetaMed service, we will email you an application right away.

And it looks like overall there are at least three tiers of potential research:

http://metamed.com/personalized-research-for-individuals

And here are examples of reports at each tier:

http://metamed.com/static/Meta_Sleep.pdf (Standard)

http://metamed.com/static/Meta_H_Pylori.pdf (Plus)

http://metamed.com/static/Meta_Gout.pdf (Concierge)

Hopefully that information will help with deciding whether or not to contact them for more information.

Comment author: EHeller 07 March 2013 04:10:43AM 15 points [-]

Those results do not impress me as to the value of their research. There is nothing there that isn't covered by UpToDate (http://www.uptodate.com/home), and every hospital I've done stats work for (several, all over the country, both community and academic) has provided their physicians with UpToDate access.

Your best bet (apparently) would be simply asking your physician for the UpToDate report for your diagnosis. If your physician does not have UpToDate access, get a referral to the nearest academic center.

Comment author: EHeller 07 March 2013 08:07:58AM *  7 points [-]

I'm unsure why I've been voted down here - if shminux's friend wants a version of a report similar in quality to what metamed can provide, she can ask her physician if the physician has UpToDate (or another research aggregator) access. If the physician doesn't, it's potentially a measure of quality (which can otherwise be hard to judge), and she should get a referral to an academic medical center, which will definitely have something like UpToDate available. This seems to me like decently practical advice for those who have insurance but don't have the money for metamed. I'm still relatively new, so I'm requesting explicit feedback to improve the quality of my posts.

Edited to Add: According to the company, > 90% of American academic hospitals already have subscribed to Up-to-Date, so getting a referral to an academic center has a great chance of getting you to someone with already-paid-for-access to this sort of report.

Comment author: Michaelos 07 March 2013 02:17:34PM *  1 point [-]

I'm glad you linked the competition, so we can compare similar services for reference (I asked a very similar thing about Watson). One item that stood out on their site:

Dr. Gordon Guyatt from McMaster University, who coined the term "evidence-based medicine," makes 6-8 visits to UpToDate per year to work with UpToDate editors. Watch him in action with our physician editors as they review and analyze medical evidence that informs UpToDate graded recommendations.

http://www.uptodate.com/home/editorial

And I was also able to find their methods for that grading as well, in case anyone wants to compare Meta Research Methods across Meta Research Organizations:

http://www.uptodate.com/home/grading-guide

Comment author: Scottbert 07 March 2013 04:00:25PM 4 points [-]

Sleep apnea seems like something regular doctors should be able to figure out, and I know gout has at least been known for a long time. Are these meant to be examples of what the reports look like more than examples of how Metamed can find obscure treatments? $5000 seems a bit much for someone to be told 'get checked for sleep apnea and lose weight'.

Comment author: khafra 05 March 2013 08:39:21PM 4 points [-]

If she's tried all possible conventional and alternative medicine, MetaMed will not help. If she missed something (1) obscure but promising, (2) cutting edge and promising, or (3) unique about her particular body that makes an unusual treatment promising; MetaMed might be able to help.

So, if $5000 is what certain relief is worth to her, MetaMed isn't for her. If certain relief is worth $10000 to her, she should estimate how likely it is that paid, reasonably savvy researchers can find something she's missed; and go for it if she feels it's over 50% likely.

Comment author: shminux 05 March 2013 08:52:15PM 3 points [-]

she should estimate how likely it is that paid, reasonably savvy researchers can find something she's missed; and go for it if she feels it's over 50% likely.

No, it's much worse than that: "how likely it is that paid, reasonably savvy researchers can find something she's missed" AND that it has a near certainty of helping. Current prior: nothing has helped so far, so the odds that something she missed would end up being useful are pretty low. If the estimate of helpfulness is, say, 1% (that's pretty optimistic), and the odds that MetaMed will find something new is 50%, then certain relief has to be worth $100k.

Comment author: Elithrion 06 March 2013 12:52:23AM 6 points [-]

You meant $1 mil, right?

Comment author: shminux 06 March 2013 02:01:22AM 3 points [-]

Right, sorry.
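
For reference, a quick sketch of the break-even arithmetic with the numbers proposed above (a 50% chance of finding something new, a 1% chance that it helps, and a $5,000 cost):

    # Break-even value of certain relief under the stated assumptions.
    cost = 5000.0
    p_find_something_new = 0.50
    p_it_helps = 0.01

    break_even_value = cost / (p_find_something_new * p_it_helps)
    print(f"Relief must be worth at least ${break_even_value:,.0f}")   # $1,000,000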

Comment author: MichaelVassar 07 March 2013 09:57:24AM 2 points [-]

So about what do you think it IS worth? FYI, I think, based on experience with people who have tried everything, that a 1% chance of finding something is unrealistically low. 20% with the first $5K and a further 30% with the next $35K would fit my past experience.

Comment author: EHeller 08 March 2013 06:37:19AM *  3 points [-]

Define 'tried everything'? Your prior is that there is a 1/5 chance a handful of researchers can find something helpful in 24 hours that isn't listed in something like an Up-To-Date report on the diagnosis (a decent definition of 'everything')?

Does Metamed do patient tracking to see if their recommendations lead to relief? Or do they deliver a report and move on?

Comment author: shokwave 08 March 2013 07:20:40AM *  2 points [-]

Your prior is that there is a 1/5 chance a handful of researchers can find something helpful in 24 hours that isn't listed in something like an Up-To-Date report on the diagnosis (a decent definition of 'everything')?

From the body of the main post (source):

"It is frequently stated that it takes an average of 17 years for research evidence to reach clinical practice. Balas and Bohen, Grant, and Wratschko all estimated a time lag of 17 years measuring different points of the process."

Granted, I know very little about Up-To-Date, but I would be surprised if they completely eliminated that 17-year lag, especially in the more obscure conditions. They do, after all, have to cover all the conditions, and their return on investment is obviously going to be much higher on common conditions than on obscure ones. In fact, if they put out a fantastically detailed report on Stage III Boneitis (fictional) and nobody suffers a case that year, they've wasted their money. I strongly suspect Up-To-Date is aware of this, though I obviously have no way of knowing whether it affects their decisions.

MetaMed's offer is, as far as I understand it, "pay us 5k and we'll eliminate the 17-year lag for your particular case". This lets them plausibly offer value that Up-To-Date can't, in some cases.

Disclaimer: I am not associated with MetaMed, but I do think they're cool.

Comment author: EHeller 08 March 2013 11:53:23PM 4 points [-]

Have you actually read the metamed sample reports/what do you think metamed actually does? As far as I can tell, their core product is to have a team of medical and PhD students do a literature search for about 1 working day (compare to Up-to-date, where actual researchers in various fields write the reports and clinicians edit the treatment plans). This seems highly unlikely to move that 17-year lag even a little bit.

I have no horse in this race, but I have worked as a statistician for hospital researchers and for health insurance companies. I just happen to think metamed's boosters here are dramatically underestimating the availability of evidence-based-medicine literature surveys in the clinical hospital setting.

Comment author: Wakarimahen 06 March 2013 06:41:26AM *  4 points [-]

Current prior: nothing has helped so far, so the odds of something she missed ended up being useful is pretty low.

This assumes she's good at sifting through the massive expanse of information available, and good at implementing the suggestions therein. These are two extremely questionable assumptions. Knowing nothing about her except that she has severe fibromyalgia and that she's the friend of a frequent poster on LW--two factors that hardly seem very relevant--I'd put the likelihood of those two assumptions holding up as very low. Quite bluntly, most people have no idea what's really out there. The Internet is a vast space.

Comment author: Decius 07 March 2013 08:53:22AM 1 point [-]

No, because -$100k is much more than 20 times worse than -$5k.

Comment author: wedrifid 07 March 2013 10:52:05AM *  0 points [-]

If the estimate of helpfulness is, say, 1% (that's pretty optimistic)

Saying 1% is extremely optimistic... about the quality and competence of the medical profession as encountered by average people with mildly unusual conditions.

Comment author: pinyaka 06 March 2013 02:38:30PM 0 points [-]

That assumes they find only one thing that she hasn't tried. I have a sister with fibro and some cursory googling on my part suggests that there are so many theories out there about what's wrong and what can bring relief that it's difficult to believe that a systematic search by good researchers will only turn up a single thing that she hasn't tried. That said, you're right that 1% helpfulness is probably pretty optimistic.

Comment author: Wakarimahen 06 March 2013 06:28:16AM *  5 points [-]

All possible conventional and alternative medicine? I doubt it. This is a mind-destroying sentence if I ever saw one. I'd suggest re-wording it to "she's tried a ton of different approaches both from conventional and alternative medicine".

First thing to be said: Fibromyalgia is one of those health issues where there are no widely adopted hypotheses for the base mechanism at work. This means, quite simply, that there is little hope for targeting it specifically. It's not a case where e.g. your lips are chapped and your knuckles are splitting, and one of the first places you look is hydration--more water, more trace minerals, etc. Instead it's a health issue where you have nothing to target, and your only real hope is to do whatever you can to improve your general health, and hope that the yet-to-be-discovered underlying cause gets taken out by fortunate accident.

Look to the other symptoms. What other symptoms does she have? It doesn't matter whether they're considered to be related. Constipation, headaches, splitting nails, PMS, dry skin, cold extremities, dandruff, frequent colds, dizziness upon standing too quickly, acne... anything at all. Note it, target it, fix it. Keep doing this for years. Make a checklist. Anything to be considered a symptom. Notice it, treat it, move on. Do this for a long enough time, and either the fibromyalgia will go away or get better, or it won't. But at least you tried, and believe me: Her life will be better either way. Well, unless she doesn't like hard work.

Potential leads I found through a few brief Google searches:

http://www.jonbarron.org/article/fibromyalgia-goes-pharmaceutical

http://evolutionarypsychiatry.blogspot.com/2012/02/magnesium-deficiency-and-fibromyalgia.html

http://www.westonaprice.org/miscellaneous/fibromyalgia

http://paleohacks.com/questions/107562/fibromyalgia-can-this-paleo-diet-help-me-with-fibromyalgia#axzz2Mjjvmzq6

http://paleohacks.com/questions/133865/what-are-the-causes-of-fibromyalgia#axzz2Mjjvmzq6

http://paleohacks.com/questions/1990/paleo-and-fibromyalgia#axzz2Mjjvmzq6

Good luck.

Comment author: shminux 06 March 2013 04:55:36PM 3 points [-]

Not sure why the parent is upvoted so much. Trivial and rather useless advice, some platitudes, a few rather suspect google hits (paleohacks? really?), and a veiled insult "unless she doesn't like hard work".

Comment author: Wakarimahen 06 March 2013 07:37:24PM *  3 points [-]

I'm surprised as well. I expected to be downvoted to -2 or so pretty quickly, and stay around there.

As for your disagreements, I should stress that what I said is perhaps the absolute most important thing for the average person with a health issue like that to hear. All too many people get hung up on trying to target the problem specifically, when they're dealing with an issue where doing so is not practical. Day after day, they ask, "What causes fibromyalgia? What are the new treatments suggested for it?" They remain fixated on these questions, while they sweep all sorts of other symptoms under the rug--random symptoms like headaches or splitting nails, which may be coming from the same source.

As for the Google hits, I'm not sure why you're calling them suspect. Jon Barron is one of the best alternative health writers out there, the Weston A. Price Foundation has a huge following, PaleoHacks is perhaps the best forum on paleo (which is a diet and lifestyle with a massive following), and the other link is a blog whose author I've seen cited a bunch of times in paleo circles as someone who is less likely than average to fall for various forms of silliness.

Is this enough evidence to suggest you should read the links and take them seriously? No idea. They have a lot of links within them though. My goal was to find, as quickly as possible, some articles that would set up a 'tab explosion' in a direction I thought would be beneficial. Generally when conventional medicine doesn't have the answer, the best place to look is where people are talking about paleo. Even stereotypically non-paleo things like raw vegan juicing, such as the Gerson Diet, will come up in paleo circles--quite simply because it seems to work.

Comment author: buybuydandavis 06 March 2013 01:14:56PM *  2 points [-]

I recommend looking into low-dose naltrexone. Cheap, safe, and with reported success for fibromyalgia (I haven't looked into that use in particular). Generally appropriate for pain issues, as it is an opioid receptor antagonist whose overnight use is modeled (and perhaps verified) as upregulating opioid receptors, and thereby improving pain relief, during the day. I believe D-Phenylalanine limits the breakdown of endogenous opioids, and would be another cheap, safe, and effective addition to this treatment.

Also, the stimulation of GnRH release is likely generally helpful in people past 30.

Recent small study out of Stanford on LDN for Fibromyalgia: http://www.ncbi.nlm.nih.gov/pubmed/23359310

If you're interested, message me and I'll send you where I get it cheap through a compounding pharmacy in NYC.

Comment author: shminux 06 March 2013 05:21:58PM 2 points [-]

I have glanced at the abstract, and the study appears to be deeply flawed. They see significant self-reported pain reduction in about 30% of the patients vs 20% for placebo. Whoa, big deal. What are the odds that another study would replicate these results? Moreover, they did not compare it with the mainline painkillers like acetaminophen or ibuprofen, or anything else cheap currently on the market, or with the classic fibro drugs, like pregabalin. To sum up, there is zero reason to try it specifically, except maybe as one of the many random things to try in desperation.

Comment author: buybuydandavis 06 March 2013 06:41:07PM *  4 points [-]

They see significant self-reported pain reduction in about 30% of the patients vs 20% for placebo.

That's not how I read the abstract. It's not % of patients, but % of pain reduction, whatever that means. I assume they're referring to some self reported numeric pain scale.

The percentage of responders for "significant pain reduction" was 32% naltrexone vs 11% placebo, which strikes me as significant. If I had serious pain, and someone offered me a "widely available, inexpensive, safe, and well-tolerated" treatment with a 30% chance of "significant pain reduction", I'd be all over it. Your mileage may vary.
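
For what it's worth, how much weight 32% vs 11% deserves depends heavily on the sample size, which the abstract alone doesn't settle. A crude check, assuming (my assumption, not a figure from the paper) about 30 patients per condition and ignoring the study's crossover structure:

    import math

    # Two-proportion z-test on the responder rates quoted above, ASSUMING ~30
    # patients per condition (a guess for illustration, not a figure from the
    # paper) and ignoring the crossover design. The normal approximation is
    # crude at this sample size and tends to overstate confidence.
    n1 = n2 = 30
    x1 = round(0.32 * n1)   # ~10 responders on naltrexone
    x2 = round(0.11 * n2)   # ~3 responders on placebo

    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    print(f"{x1}/{n1} vs {x2}/{n2}: z = {z:.2f}, two-sided p = {p_value:.3f}")

Whatever the exact number, the broader point is that with samples of this size the same proportions carry far less weight than they would in a trial of several hundred patients, which is why replication and comparison against standard treatments matter.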

To sum up, there is zero reason to try it specifically, except maybe as one of the many random things to try in desperation.

Really? "Zero reason"? So you predict equal efficacy as a random treatment, such as spinning around and squawking like a chicken?

Plenty of reasons. You don't have to like them or know of them. I wasn't attempting or claiming to prove anything, just trying to point you to some information I thought would be helpful. I hope she finds something.

Comment author: John_Maxwell_IV 06 March 2013 07:42:16AM 0 points [-]

I've read that some of the pain in fibromyalgia typically comes from trigger points; has she researched those?

Comment author: AndrewH 05 March 2013 06:33:06PM *  -3 points [-]

If you haven't done the search yet (in the manner MetaMed would do a search), how can you guarantee you'll find something before you do the search? :)

Comment author: Qiaochu_Yuan 05 March 2013 12:49:32AM *  35 points [-]

It is fairly terrifying that the term "evidence-based medicine" exists because that implies that there are other kinds.

Comment author: ChristianKl 05 March 2013 11:29:47PM 27 points [-]

LessWrong is a non-evidence-based method of teaching rationality. We don't have good evidence that someone will get more rational after reading the sequences.

You can make a reasonable theoretical argument that people will get more rational. You don't have the kind of evidence that you need for an EBM treatment. In most domains where we make choices in our lives, we don't follow practices that are supported by evidence from peer-reviewed trials.

You don't get a haircut from a barber who practices evidence-based barbering. Even the people who pay a lot of money for their haircuts don't. Reading scientific papers just isn't the only way to gather useful knowledge.

The term evidence-based medicine comes from a paper published in 1992.

It says:

Evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research.

I wouldn't want someone to practice open-heart surgery on me based on his intuition, but I don't see a problem with getting a massage from someone who has read no scientific papers but who lets themselves be guided by their intuition and who has a positive track record with other patients.

Comment author: Qiaochu_Yuan 06 March 2013 12:59:53AM 6 points [-]

Sure, but the idea that one should explicitly go about trying to teach rationality because there are these things called biases is much younger than the idea of medicine. Doctors have had a much longer time than LessWrong to get their act together.

Comment author: ChristianKl 06 March 2013 03:52:26PM 8 points [-]

Doctors have had a much longer time than LessWrong to get their act together.

The idea of teaching people to think better isn't new. Aristotle also tried to teach a form of rationality. But even if the idea were radically new, why would that matter?

Why should newer ideas be subject to a lower standard of evidence? Fairness? If you want to know the truth, fairness has no place.

Let's look at another example: romantic courtship. Do you practice evidence-based courtship when you seek a fulfilling relationship with a woman? Would you say there is no other form of courtship besides evidence-based courtship?

Most humans don't practice evidence-based courtship. Sometimes courtship doesn't work out. You can blame it on the couple not being familiar with the scientific papers that are published on the subject of human courtship.

Nobody has shown that giving the couple those scientific papers improves their relationship chances. Nobody has shown, with EBM-like evidence standards, that doctors who are in touch with the scientific literature achieve better health outcomes for their patients.

That doesn't mean to me that EBM has no place, but I don't see a reason to reject any approach to increase my health that isn't backed by EBM.

There's some scientific evidence that suggests that vitamin D is good. In the blogosphere there are people who found that taking vitamin D supplements first thing in the morning is better than taking them in the evening. There's no trial for the timing of vitamin D supplements. I still take them first thing in the morning.

Comment author: Strange7 10 March 2013 06:06:19AM 2 points [-]

Most humans don't practice evidence-based courtship.

I think there's some sort of rule against discussing PUA here.

Comment author: Nornagest 10 March 2013 06:28:27AM 3 points [-]

Not so much a rule against it as an understanding that it consistently leads to low-quality discussion.

Comment author: wedrifid 10 March 2013 06:27:23AM 0 points [-]

I think there's some sort of rule against discussing PUA here.

(Which ChristianKl didn't do. His kind of general observation isn't the kind that brings on the notorious failure mode of courtship moralizing.)

Comment author: army1987 09 March 2013 12:11:17PM *  -1 points [-]

You don't get a haircut from a barber who practices evidence-based barbering.

That's not exactly 100.00% true -- I once overheard a barber priding himself on the fact that someone once got laid the night after getting a haircut from him.

Jokes aside, barbering is evidence-based -- given that it works at all, then barbers either have knowledge of how to do that hard-coded in their DNA (unlikely) or have learned to do that -- using evidence (even though not in a systematized way). You can immediately see that if you use this cutting technique then your client's hair will look this way. OTOH, a practitioner of non-evidence-based medicine cannot immediately see that giving a patient this substance diluted in 10^20 times as much water or sticking a needle in this particular spot or whatever will help cure the patient. (Likewise, musicians are normally evidence-based musicians to some extent, but astrologists are not evidence-based astrologists; can you find more examples?)

Comment author: incogn 09 March 2013 01:03:01PM *  4 points [-]

If you interpret evidence-based in the widest sense possible, the phrase sort of loses its meaning. Note that the very post you quote explains the intended contrast between systematic and statistical use of evidence versus intuition and traditional experience based human learning.

Besides, would you not say that astrologers figure out how to be optimally vague--avoiding being wrong while still exciting their readers--much the same way musicians figure out what sounds good?

Comment author: army1987 09 March 2013 01:09:20PM 0 points [-]

If you interpret evidence-based in the widest sense possible, the phrase sort of loses its meaning. Note that the very post you quote explains the intended contrast between systematic and statistical use of evidence versus intuition and traditional experience based human learning.

Yes, but “intuition and traditional experience based human learning” is probably much less reliable in medicine than it is in barbering, so the latter isn't a good example in a discussion about the former.

Besides, would you not say that astrologers figure out how to be optimally vague--avoiding being wrong while still exciting their readers--much the same way musicians figure out what sounds good?

:-)

Something similar could be said about practitioners of alternative medicine, though.

Comment author: ChristianKl 09 March 2013 04:33:02PM 0 points [-]

Yes, but “intuition and traditional experience based human learning” is probably much less reliable in medicine than it is in barbering, so the latter isn't a good example in a discussion about the former.

The goal of barbering is to create haircuts that increase the attractiveness of the client to people besides the barber and the client.
A barber might think: "All my clients look really great", when in reality his haircuts reduce the attractiveness of the clients.

Comment author: army1987 09 March 2013 04:43:55PM 0 points [-]

Surely, judging someone's attractiveness using your System 1 alone is less hard than judging someone's health using your System 1 alone, for most people in most situations?

Comment author: ChristianKl 09 March 2013 05:36:23PM 3 points [-]

A professional barber is likely to notice a lot of things about a haircut that the average person doesn't see. It could be that he creates haircuts that look impressive to other barbers but don't look good to the average person of the opposite sex who isn't a barber.

I do think that you can get a decent assessment of someone's back pain by asking them whether it has gotten better. Actually, that's even how most scientific studies that measure pain do it. They let the person rate their pain subjectively, and when the subjective rating improves on the drug, they count it as a win.

For a lot of serious health issues it's easy to see when a person gets better.

Most homeopathists spend more time interviewing their patients and getting a good understanding of their condition than the average mainstream doctor who takes 5 minutes per patient.

Comment author: incogn 09 March 2013 03:06:18PM 0 points [-]

I think the barbering example is excellent - it illustrates that, while controlled experimentation is more or less physics, and physics is great, it is probably not going to bring a paradigm shift to barbering any time soon. One should not expect all domains to be equally well suited to a cut-and-dried scientific approach.

Where medicine lies on this continuum of suitedness is an open question - it is probably even a misleading question, with medicine being a collection of vastly different problems. However, it is not at all obvious that simply turning up the scientificness dial is going to make things better. It is for instance conceivable that there are already people treating medicine as a hard science, and that the current balance of intuition and evidence in medicine reflects how effective these two approaches are.

I am not trying to argue whether astrology is evidence-based or not. I am saying that the very inclusive definition of evidence-based which encompasses barbering is, (a) nearly useless because it includes every possible way of doing medicine and (b) probably not the one intended by the others using the term.

Comment author: army1987 09 March 2013 03:28:42PM *  0 points [-]

nearly useless because it includes every possible way of doing medicine

Huh? What evidence are homoeopathy and crystal healing and similar (assuming that's what Qiaochu_Yuan meant by “other kinds”) based on?

EDIT: Apparently not.

Comment author: ChristianKl 09 March 2013 05:48:19PM 2 points [-]

There are even dozens of scientific studies that support homeopathy. According to a report titled "Effectiveness, Safety and Cost-Effectiveness of Homeopathy in General Practice – Summarized Health Technology Assessment" commissioned by the Swiss government:

Many high-quality investigations of pre-clinical basic research proved homeopathic high-potencies inducing regulative and specific changes in cells or living organisms. 20 of 22 systematic reviews detected at least a trend in favor of homeopathy. In our estimation 5 studies yielded results indicating clear evidence for homeopathic therapy.

There are plenty of people out there who can explain to you why all those homeopathy studies are flawed, but on the other hand, how many double-blind controlled trials do you know of that show that barbers can create haircuts that increase someone's chances with the opposite sex?

But in general, people buy homeopathic medicine not because they read the report of the Swiss government and believe it. They buy it based on anecdotal evidence. They hear that some friend had success with homeopathy and then they go out and buy it.

The fact that you are ideologically opposed to homeopathy and crystal healing working doesn't mean that they fail to produce anecdotal evidence.

Comment author: army1987 10 March 2013 03:03:21PM 0 points [-]

increase someone's chances with the opposite sex

<nitpick>If that was the only point of barbers, then already-married people, prepubescent children, homosexuals, etc. would never go to the barber's.</nitpick>

Comment author: Qiaochu_Yuan 09 March 2013 06:58:05PM 0 points [-]

"Other kinds" meant "whatever mainstream medicine does that doesn't fall under the evidence-based label," not alternative medicine. I should've been clearer.

Comment author: army1987 10 March 2013 02:55:22PM 0 points [-]

Yes, I realized that later, while reading another branch of the thread (see my edit).

Comment author: ChristianKl 09 March 2013 07:01:27PM 0 points [-]

What do you mean by "mainstream medicine" in that context?

Comment author: Qiaochu_Yuan 09 March 2013 07:37:13PM 0 points [-]

What ambiguity is there in what I mean by "mainstream medicine" here?

Comment author: ChristianKl 09 March 2013 04:21:44PM 1 point [-]

OTOH, a practitioner of non-evidence-based medicine cannot immediately see that giving a patient this substance diluted in 10^20 times as much water or sticking a needle in this particular spot or whatever will help cure the patient.

That's wrong. If an acupuncturist puts needles in 10 people and 5 of them lose their back pain, then he has "unsystematic clinical experience" that provides evidence for his treatment.

The core of evidence-based medicine is the belief that you shouldn't use that kind of evidence for clinical decision making, but that doctors should instead read medical journals that report clinical trials showing whether or not a treatment works.

Likewise, musicians are normally evidence-based musicians to some extent, but astrologists are not evidence-based astrologists; can you find more examples?

Actually, musicians and astrologists are very similar. Both make money by providing entertaining performances for their clients. Members of those professions who ignore evidence about what entertains their clients go out of business.

Comment author: army1987 09 March 2013 04:35:19PM 1 point [-]

If an acupuncturist puts needles in 10 people and 5 of them lose their back pain, then he has "unsystematic clinical experience" that provides evidence for his treatment.

Maybe some of those 5 would have lost their pain even without needles. Whereas the barber knows what his client would have looked like without the hair cut.

Comment author: Yosarian2 12 May 2013 06:05:57PM 0 points [-]

Maybe some of those 5 would have lost their pain even without needles

Right, that's why it's unsystematic.

In the Bayesian sense of the word, "I stuck a needle in this person and the amount of pain he reported went down" would have to be considered evidence that increases the Bayesian probability that your hypothesis that acupuncture helps back pain is correct. However, it's not systematic, scientific evidence. To get that kind of evidence, you would have to do systematic studies of a large number of people, give some of them acupuncture and some of them aspirin, and see what the statistical result is.

I think what's bogging this discussion down here is that the word "evidence" is being used in two different ways. If we were perfectly rational beings, we would be able to use either kind of evidence, but the problem is that the first kind (individual, unsystematic personal experiences) tends to be warped by all kinds of biases (selection bias, especially), making it hard to use in any reliable way. You use it if it's all you have, but systematic evidence is very much preferable.
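
To put a rough number on how weak the first kind of evidence is, here is a sketch; the improvement rates attached to the two hypotheses are invented purely for illustration:

    from math import comb

    # Likelihood ratio (Bayes factor) for the observation "5 of 10 patients lost
    # their back pain", comparing two made-up hypotheses:
    #   H1: acupuncture works, each patient improves with probability 0.6
    #   H0: no effect, back pain often resolves on its own with probability 0.35
    # Both rates are assumptions for illustration, not figures from any study.
    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    k, n = 5, 10
    bayes_factor = binom_pmf(k, n, 0.6) / binom_pmf(k, n, 0.35)
    print(f"Bayes factor in favour of 'acupuncture works': {bayes_factor:.2f}")
    # Comes out around 1.3 -- technically evidence, but it barely moves a prior,
    # and selection bias (non-improvers stop showing up) would shrink it further.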

Comment author: army1987 10 March 2013 03:09:10PM *  0 points [-]

Actually, musicians and astrologists are very similar. Both make money by providing entertaining performances for their clients. Members of those professions who ignore evidence about what entertains their clients go out of business.

OK, if you consider the point of astrology to be “making money”, as opposed to “predicting people's personalities and future events”, then it is evidence-based -- but then again, if you consider the point of alternative medicine to be “making money”, as opposed to “improving people's health”, then it is evidence-based as well. (But now that Qiaochu_Yuan has made clear that it's not alternative medicine that he was talking about, this is kind of moot, so I'll tap out now.)

Comment author: ChristianKl 10 March 2013 03:56:09PM 0 points [-]

OK, if you consider the point of astrology to be “making money”, as opposed to “predicting people's personalities and future events”

I didn't. I advocated another goal: entertainment. I don't know that much about astrology, but I think a fair percentage of the people who do pay astrologists do it for entertainment purposes.

Letting someone stick needles into you when you go to an acupuncturist is less about entertainment.

The kind of people who like astrology often also like other personality tests that they find in magazines. People enjoy going through those tests.

If an astrologer told people something about their personality that's accurate but that those people aren't willing to accept, I doubt he would stay in business for long.

A bit like the musician who only plays music that he himself considers good, but that's "too advanced" for his audience. If the musician only looks at his own opinion of his work, he's no different from an astrologer who only looks at whether he considers his own horoscopes good. If you call that musician "evidence-based", then the astrologer who goes by his own judgement of his work is also "evidence-based".

But now that Qiaochu_Yuan has made clear that it's not alternative medicine that he was talking about, this is kind of moot, so I'll tap out now.

Why does that matter to the question of whether barbers can meaningfully be said to practice evidence-based barbering?

Comment author: army1987 10 March 2013 04:10:14PM 0 points [-]

Why does that matter to the question of whether barbers can meaningfully be said to practice evidence-based barbering?

I was claiming that barbering is more evidence-based than alternative medicine, but if alternative medicine is not what's being discussed, then even if I turned out to be right it still wouldn't be relevant.

Comment author: incogn 05 March 2013 01:53:40AM 12 points [-]

Only in the sense that the term "pro-life" implies that there exist people opposed to life.

Comment author: MugaSofer 06 March 2013 09:36:43AM *  5 points [-]

Opposed to all life? No. Opposed to specific, nonsentient life when weighed against the mother's choice? Yes.

Comment author: RomeoStevens 05 March 2013 02:32:18AM 6 points [-]

pro-life is an intentional misuse of ontology.

Comment author: CCC 05 March 2013 12:06:14PM 1 point [-]

A perusal of murder and suicide statistics - even the fact that such statistics exist - suggests the conclusion that there may, in fact, exist some people opposed to life; sometimes their own, sometimes that of others.

Comment author: Qiaochu_Yuan 05 March 2013 05:53:39PM *  5 points [-]

That's irrelevant to the point that incogn is making, though, which is that you can't make that inference from the fact that a label called "pro-life" exists because it's rhetoric. I'm willing to believe that the label "evidence-based medicine" is also rhetoric, but I don't actually know that yet; I would first have to know what doctors were doing before EBM became a thing.

Comment author: Eugine_Nier 06 March 2013 05:29:47AM 2 points [-]

I would first have to know what doctors were doing before EBM became a thing.

And how good the followers of EBM are at actually being evidence-based, as opposed to being Straw Vulcans.

Comment author: buybuydandavis 06 March 2013 01:50:06PM *  1 point [-]

On the positive side, evidence-based medicine promotes greater measurement of patient outcomes and sharing of that information to weigh treatment options.

On the negative side, it is largely about denying patients coverage and access to treatments based on officially approved disease models and preference trade-offs that ignore evidence not used in the model and overrule patient preferences. It's evidence-ignoring and patient-controlling medicine.

Comment author: KnaveOfAllTrades 06 March 2013 06:31:24PM 1 point [-]

I think you unknowingly {submitted this comment prematurely}? :)

Comment author: buybuydandavis 06 March 2013 06:48:16PM *  1 point [-]

Thanks. I edited around and left that last line when I should have deleted it.

All tidy now.

Comment author: TraderJoe 05 March 2013 10:49:04AM 1 point [-]

You mean like acupuncture?

Comment author: Qiaochu_Yuan 05 March 2013 05:39:20PM *  7 points [-]

Wikipedia informs me that evidence-based medicine is a movement in the health care community that really only got underway in the 90s. I am not sure I want to know what the health care community was doing before the 90s. I'm not talking about alternative medicine, I'm talking about whatever mainstream medicine was and is doing that doesn't fall under this label.

Comment author: pinyaka 06 March 2013 03:04:16PM *  6 points [-]

Well, until the mid-80's doctors believed that infants either a) didn't feel pain or b) wouldn't remember it anyway (mostly because of this study from the 40's), so they didn't use anesthesia for infants when performing heart surgery until someone collected evidence that babies were more likely to live through the surgery if given something to knock them out.

EDIT: Removed extraneous word

Comment author: satt 19 May 2013 10:09:08PM *  1 point [-]

they didn't use anesthesia for infants when performing heart surgery until someone collected evidence that babies were more likely to live through the surgery if given something to knock them out

Really? I notice (with some relief) that the control babies in the linked study still got anaesthesia; it's just that they got nitrous oxide instead of nitrous oxide and fentanyl.

Comment author: satt 09 March 2013 06:51:40PM 4 points [-]

I am not sure I want to know what the health care community was doing before the 90s.

On the bright side, some of it was just evidence-based medicine without the branding.

For example, the UK Medical Research Council put randomized trials on the map in 1948 with its randomized trial of streptomycin, which had been discovered only a few years before. The massive 1954 trials of the famous Salk polio vaccine also included a randomized trial comprising over 700,000 children. (That said, the non-randomized trial was even larger; the origin of this odd, hybrid study design is an interesting bit of history.)

Comment author: handoflixue 05 March 2013 08:30:58PM 5 points [-]

Quoth said Wikipedia article, in the "criticisms":

"EBM applies to groups of people but this does not preclude clinicians from using their personal experience in deciding how to treat each patient. One author advises that "the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand" and suggests that evidence-based medicine should not discount the value of clinical experience.[26] Another author stated that "the practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research".[1]"

Which suggests that the precursor to EBM is a combination of Education and Intuition. Sorry if I'm not framing it terribly well - there's an intuitive category in my head for this method, but I've never really had to refer to it explicitly. It's the same technique I use to troubleshoot computer problems - I get a hunch as to what is causing it, and then proceed through a mixture of "safe, generalized advice" (try rebooting!) and "advice specific to the problem I think it is" (aha, you must not have your DNS configured correctly). If both of those fail, THEN I'll resort to actually collecting data, analyzing it, and seeing where that leads me - "have you had other problems?", "hmm, let me look up this error code..."

I've generally observed this path as the default human behavior, with "call someone else" occurring when they hit the limit of their abilities.

Comment author: pinyaka 06 March 2013 03:06:39PM 4 points [-]

I've generally observed this path as the default human behavior, with "call someone else" occurring when they hit the limit of their abilities.

Not a bad plan if you know the limits of your abilities and aren't trained to act confident even when you're not.

Comment author: Yosarian2 12 May 2013 06:11:20PM 1 point [-]

For a lot of the medical advances we had earlier in the 20th century, you didn't really need to do large-scale clinical studies to see if it was working. You gave someone an antibiotic, and they suddenly got much better. You gave people a polio vaccine, and they didn't get polio. You took someone's appendix out, and they didn't die.

It was really later in the 20th century, when medicine got more and more focused on treating and preventing long-term degenerative illnesses like cancer or heart attacks or high blood pressure, that it became more vital to measure the difference in a large-scale statistical way between how effective different types of treatment were over a long period of time.

Comment author: satt 19 May 2013 10:05:06PM *  0 points [-]

For a lot of the medical advances we had earlier in the 20th century, you didn't really need to do large-scale clinical studies to see if it was working. You gave someone an antibiotic, and they suddenly got much better. You gave people a polio vaccine, and they didn't get polio. You took someone's appendix out, and they didn't die.

Not the best examples, although you're right about appendectomies! I nonetheless agree with the broader point that decades ago there was less need for fine-grained, systematic medical studies (you were just unlucky in your choice of examples).

Comment author: MugaSofer 06 March 2013 09:55:31AM 0 points [-]

IIRC, acupuncture has some limited use, probably as a combination of placebo and endorphin release. Unless you knew about those, the evidence would suggest you were on to something.

Comment author: Rukifellth 02 June 2013 04:09:43PM 3 points [-]

Seeing these statistics has got me thinking.

I've checked the undergraduate course requirements at my local university's medical faculty, and there's nothing listed for probability and statistics. I'm considering setting up an appointment with somebody about this, assuming doctors not being able to interpret test results properly is a serious problem.

Would this be worth it, or am I wasting time?

Comment author: Eliezer_Yudkowsky 02 June 2013 07:32:25PM 2 points [-]

I hate to say it, but my guess is that you're wasting time unless your universe^H^H^H university has unusually good undergraduate statistics courses.

Comment author: gwern 03 June 2013 01:12:59AM 5 points [-]

I don't think it's a waste of time. If you pay attention in your introductory courses, you'll learn a good chunk of how to abuse NHST and what the criticisms of it mean. I have learned very little Bayesian statistics, but for trying to understand the very large existing medical/psychological research corpus, I have never regretted focusing my reading on frequentist material.
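
For anyone who wants a concrete picture of the most common abuse, here is a toy simulation (mine, not anything from a particular course) of what running many tests at p < 0.05 does when there is no real effect anywhere:

    import random

    # A researcher runs 20 independent comparisons on pure noise and reports any
    # that come out "significant" at p < 0.05. Under the null hypothesis each
    # p-value is uniform on [0, 1], so at least one spurious hit appears in
    # roughly 1 - 0.95**20, i.e. about 64% of all-noise studies.
    random.seed(0)
    n_studies, n_tests, alpha = 10_000, 20, 0.05
    studies_with_a_hit = sum(
        1 for _ in range(n_studies)
        if min(random.random() for _ in range(n_tests)) < alpha
    )
    print(f"Fraction of all-noise studies with a 'significant' finding: "
          f"{studies_with_a_hit / n_studies:.2f}")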

Comment author: Eliezer_Yudkowsky 03 June 2013 01:20:09AM 3 points [-]

I defer to your superior domain knowledge of universities.

Comment author: gwern 03 June 2013 01:27:48AM 3 points [-]

You don't have to; you can see CMU's "Probability & Statistics" for yourself, for example.

Comment author: Rukifellth 04 June 2013 02:52:11AM *  0 points [-]

Oh, not for me - I'm doing CS - but it seems like we could get very large returns in hospital performance for the effort expended in teaching med students proper stats.

I'm not sure what to expect here, except that at best they'll flat out say that the program is difficult enough as it is, and at worst shrug with some kind of vague "corporate-representative-being-questioned" answer. In my wildest dreams they could come up with some new-fangled "Life Stats" course, streamlined so only the parts related to diagnostics and prognostics are taught.

Comment author: Vaniver 02 June 2013 10:43:07PM 2 points [-]

your universe

I can't decide if this is a typo or not.

Comment author: NancyLebovitz 06 March 2013 05:30:40AM 2 points [-]

This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.) If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately.

What might it be worth to people to find out that some or all of the usual procedures are so dangerous and/or ineffective as to be not worth doing?

Comment author: Mycroft65536 06 March 2013 05:57:21AM 3 points [-]

Likely more than the list price of those procedures. People who have expensive, potentially harmful procedures done on them would get great benefit from having MetaMed review those procedures.

Comment author: CCC 05 March 2013 12:15:01PM 2 points [-]

This seems like a really good idea. Especially given the impossibility of a single doctor keeping up with all the literature...

Moreover, I rather expect MetaMed to be able, after a while, to suggest profitable research opportunities to people looking to do medical research.

Comment author: MichaelHoward 05 March 2013 02:30:25AM 12 points [-]

For the sake of humanity, cute kittens, whatever it takes to get past your qualms about this being advertising...

Please promote this immediately to the front page so it can get as much attention as possible.

Comment author: shminux 21 March 2013 05:44:20PM 3 points [-]

Hanson recently commented on MetaMed on OB, but not here, so might as well quote some of it:

I wrote this post because I know several of the folks involved, and they asked me to write a post endorsing MetaMed. And I can certainly endorse the general idea of second opinions; the high rate and cost of errors justifies a lot more checking and caution. But on what basis could I recommend MetaMed in particular? Many in the rationalist community think you should trust MetaMed more because they are inside the community, and therefore should be presumed to be more rational.

But any effect of this sort is likely to be pretty weak, I think. Whatever are the social pressures that tend to corrupt the usual medical authorities, I expect them to eventually corrupt successful new medical firms as well. I can’t see that being self-avowed rationalists offers much protection there. Even so, I would very much like to see a much stronger habit of getting second opinions, and a much larger industry to support that habit. I thus hope that MetaMed succeeds.

This is a rather lukewarm and highly qualified endorsement, if you can call it that. The rest of the post is also a worthwhile read, as a critical assessment of LW cultishness:

As with religion, the main problem comes when a self-described rationalist community starts to believe that they are in fact much more rational than outsiders, and thus should greatly prefer the beliefs of insiders.

[...]

I’ve noticed a substantial tendency of folks in this rationalist community to prefer beliefs by insiders, even when those claims are quite contrarian to most outsiders. Some say that since most outsiders are quite irrational, one should mostly ignore their beliefs. They also sometimes refer to the fact that high status insiders tend to have high IQ and math skills. Now I happen to share some of their contrarian beliefs, but disagree with many others, so overall I think they are too willing to believe their insiders, at least for the goal of belief accuracy.

Comment author: Kawoomba 21 March 2013 06:26:53PM 2 points [-]

It is indeed a kind of poisoned 'endorsement' that MetaMed could have done without, a tunic of Nessus. It's telling that he apparently didn't run it by MetaMed before publishing, and that he didn't simply decline to give a recommendation rather than wrapping it in "I expect it will be corrupted like all the others". It's surprising after reading his recent praise of Yvain, who's involved with MetaMed.

Comment author: Dr_Manhattan 05 March 2013 12:44:07AM *  3 points [-]

it's spam, but it's our spam. upvoted. (I don't mean I work for Meta; I just support the Cause and things/community supporting it)

Comment author: atucker 05 March 2013 05:31:03PM 8 points [-]

Upvoted, but I'm a bit confused as to what we're trying to refer to with "spam".

If by spam we mean advertising, yes. Definitely.

If by spam we mean undesirable messaging that lowers the quality of the site, then I would think that this is very much not spam.

Comment author: Qiaochu_Yuan 05 March 2013 05:50:02PM *  10 points [-]

Some people (myself included) use "spam" to refer to any kind of advertising in a public setting, e.g. you might preface an email sent out to multiple mailing lists as "sorry for the spam, guys, but..." even if it's a valuable and high-quality email. The connotation, to me, is mildly self-deprecating rather than strictly negative.

Comment author: Dr_Manhattan 05 March 2013 06:38:02PM 0 points [-]

If this startup was not associated with MIRI I would downvote it; there are lots of great startups but this is not the place to advertise them.

Comment author: MugaSofer 06 March 2013 09:23:10AM *  1 point [-]

It's medicine, done rationally. This is a site about rationality. The relevance seems clear regardless of its origin.

Comment author: Dr_Manhattan 06 March 2013 07:36:03PM 6 points [-]

A lot of businesses could have "done rationally" appended to them. MetaMed is "medicine, done rationally" (using statistics). Google is "search done rationally" (with statistics). The only reason medicine stands out is due to the rather poor baseline.

Comment author: wedrifid 07 March 2013 12:10:07PM *  4 points [-]

The only reason medicine stands out is due to the rather poor baseline.

Alternative words to "only" include "valid" and "sufficient".

Google is "search done rationally"(with statistics).

Your example doesn't support your intended conclusion. In a world with irrational and often unhelpful search engines and an unknown, newly formed "Google", it would be entirely appropriate to make people aware of it, in a similar post to this one.

Comment author: Tyrrell_McAllister 05 March 2013 05:48:52AM 1 point [-]

Statistical and Health Illiteracy (Patients)

Is this a placeholder for more citations?

Comment author: Eliezer_Yudkowsky 05 March 2013 04:22:37PM 1 point [-]

I accidentally left that in after deleting the section underneath it. Like I said, this was only a fraction of their total citations list.

Comment author: prase 09 March 2013 09:17:40PM 0 points [-]

I'd be interested in the linked Begg paper but it's behind a paywall. Can someone please tell me what exactly they did and how they obtained all those various p-values?

Comment author: gwern 09 March 2013 10:30:15PM *  1 point [-]
Comment author: prase 10 March 2013 10:24:54PM 0 points [-]

Thank you.

Comment author: Decius 07 March 2013 09:02:38AM 0 points [-]

What are the capital investments that need to be recouped in the early-adopter period? Is the price tag based on "it is worth more than this to our target market", or "it costs this much to do this research, with reasonable amortization of capital costs"?

Comment author: Strange7 10 March 2013 05:29:47AM 1 point [-]

At a guess: computer hardware, office space, recruiting competent people, training them to work together effectively, and a well-organized library of the company's previous reports so that not every request requires them to reinvent the wheel.

Comment author: Decius 10 March 2013 05:40:50AM 0 points [-]

Computer hardware and support adequate for the large-scale implementation is roughly $500k capital and $240k/yr; office space incl. utilities should be in the realm of $500k/yr (again, for a large-scale operation); and one hundred competent people should be about $4m/year, with another $1m for management expenses. Total capital + first year's operating expenses is ~$6m; if they expect to sell a thousand reports per year (over one man-month each) at $5k each, they repay the investors almost in the first year.

I haven't tried to price the custom software involved, but for such a (in the large business sense) small investment I don't see why they didn't start full-scale.
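
Spelling the arithmetic out (every figure below is my own estimate from above, not anything published by MetaMed):

    # Restating the back-of-the-envelope figures above; all of them are my own
    # estimates, not MetaMed's actual numbers.
    capital = 500_000              # computer hardware and support, one-off
    operating_per_year = (
        240_000                    # hardware support per year
        + 500_000                  # office space incl. utilities
        + 4_000_000                # one hundred competent people (~$40k each)
        + 1_000_000                # management expenses
    )
    first_year_outlay = capital + operating_per_year     # ~$6.2m

    revenue = 1_000 * 5_000        # a thousand reports per year at $5k each
    print(f"First-year outlay: ${first_year_outlay:,}")
    print(f"First-year revenue: ${revenue:,}")
    print(f"Revenue covers {revenue / first_year_outlay:.0%} of the outlay")

On these numbers the first year's revenue covers roughly 80% of the first year's total outlay, which is what "almost" is doing in the sentence above.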

Comment author: Larks 14 March 2013 07:53:21PM 1 point [-]

You're only budgeting $40k per person? That seems low, especially considering overhead, health insurance, etc.

Comment author: Decius 14 March 2013 09:36:55PM 1 point [-]

I think it's a reasonable rate for part-time independent contractors putting out one tenth of a report in a month.

Comment author: Kawoomba 07 March 2013 09:27:21AM 1 point [-]

Is price ever based on anything other than "this will maximize our revenue over this period of time"?

Comment author: Decius 07 March 2013 05:30:20PM 0 points [-]

The terminology you wanted was 'maximize our profit'. And yes, some pricing is based on a goal other than maximizing profit.

Comment author: Jayson_Virissimo 14 March 2013 06:22:18AM *  2 points [-]

...some pricing is based on a goal other than maximizing profit.

I've worked at three jobs where the firm was not even in the ballpark of approximately maximizing profit. The first is now out of business. The second and third were in government.

Comment author: Decius 15 March 2013 12:14:33AM 0 points [-]

Government jobs are now going out of business.

You still have to understand the bottom line to be in business, but you don't have to worship it above e.g. employee health and welfare.

Comment author: wedrifid 11 March 2013 06:14:54PM 0 points [-]

And yes, some pricing is based on a goal other than maximizing profit.

For example, it is sometimes based on the goal "Maximize the bonus given to the CEO".

Comment author: Decius 11 March 2013 08:09:28PM 1 point [-]

I was referring to corporations with core values that don't involve Laffer peaks of money.

Comment author: wedrifid 12 March 2013 02:45:27AM *  0 points [-]

I was referring to corporations with core values that don't involve Laffer peaks of money.

This position wasn't challenged, missed, nor even weakened by my reply. Rather, it was strengthened by the agreement with the actual claim you made.

Comment author: Decius 14 March 2013 02:13:56AM 1 point [-]

Sorry; it seemed to me that you were agreeing with the claim I made by subverting the intent. As someone who intends to create and invest heavily into a corporation with goals other than maximizing the taxable investment income I receive, it is a sensitive subject to me.

Comment author: wedrifid 14 March 2013 02:24:49AM 1 point [-]

Sorry; it seemed to me that you were agreeing with the claim I made by subverting the intent. As someone who intends to create and invest heavily into a corporation with goals other than maximizing the taxable investment income I receive, it is a sensitive subject to me.

Exciting (and brave, considering the failure rate). I'm curious... what industry/goal if you don't mind sharing?

Comment author: Decius 14 March 2013 02:48:18AM 1 point [-]

Coffee shop/social justice. It's a planned attempt to weaken class division by making the middle-class investor(s) a little richer while making the working-class employees each richer in absolute terms.

I currently have someone with the philosophical background but not quite enough business training to serve as general manager. I've figured on a low six-digit upfront investment, along with a couple of years to get a fully qualified general manager, a couple hundred $k more in capital costs, and another couple hundred $k in operating losses each year for four years, leading to a recoup starting eight years after beginning. I've got about half that now, and just need to get my day job adjusted and settled in a better location to oversee the operation; I expect to cover the remaining investment out of income. (Before I actually start, I'm going to develop enough of a plan to know how much reality deviates from the plan at any given point.)

Comment author: mfb 23 March 2013 01:48:25PM 0 points [-]

So much room for improvement in healthcare, even without new stuff :).