Summary: medical progress has been much slower than even recently predicted.

In the February and March 1988 issues of Cryonics, Mike Darwin (Wikipedia/LessWrong) and Steve Harris published a two-part article “The Future of Medicine” attempting to forecast the medical state of the art for 2008. Darwin has republished it on the New_Cryonet email list.

Darwin is a pretty savvy forecaster (you will remember him correctly predicting in 1981, in “The High Cost of Cryonics”/part 2, ALCOR’s recent troubles with grandfathering), so given my standing interests in tracking predictions, I read it with great interest; but he and Harris still blew most of their predictions, and not the ones we would prefer them to’ve.

The full essay is ~10k words, so I will excerpt roughly half of it below; feel free to skip to the reactions section and other links.

1 The Future of Medicine

1.1 Part 1

What we hope we are especially good at as cryonicists is predicting the future — particularly the future of medicine. After all, our lives depend upon it. Because that’s what cryonics is about — tomorrow’s medicine today. In order for cryonics to seem reasonable, in order for it to be reasonable, it is necessary to have some idea, at least in broad outline, of where medicine is going and of where it ultimately can go. I think that the cryonicists’ record on this point in a broad sense has been very good.

…One thing which is rarely seen in cryonics publications is an attempt to see the shape of things to come in the near or intermediate future. Oddly enough, that’s a far more difficult and dangerous undertaking than predicting ultimates. Nor is this a problem confined to cryonics or the future of medicine. Sadi Carnot (the founder of thermodynamics) could tell you all about the “perfect heat engine,” but would have no doubt had trouble giving you hard numbers on how well heat engines would be made to perform over the 20 years or so following publication of his work….When I look over predictions made in the 1950’s or the 1960’s about the future of medicine and/or technology, I always chuckle about just how far afield these guys were. A good example is a list of predictions made by Herman Kahn which was summarized in CRYONICS REPORTS in August of 1967 (volume 2, #8). They are reproduced as Table 1 below. Read ’em and weep — or laugh if you will!

Table 1. Less Likely But Important Possibilities, from: The Next 33 Years: A Framework For Speculation, by Herman Kahn and Anthony J. Weiner (1967) [predictions for 2000 AD]

  • “True” artificial intelligence
  • Practical use of sustained fusion to produce neutrons
  • Artificial growth of new limbs and organs
  • Room temperature superconductors
  • Major use of rockets for transportation (either terrestrial or extraterrestrial)
  • Effective chemical or biological treatment for most mental illnesses
  • Almost complete control of marginal changes of heredity
  • Suspended animation (for years or centuries)
  • Practical materials with nearly “theoretical limit” strengths
  • Conversion of mammals (humans?) to fluid breathers
  • Direct input into human memory banks

…My personal perspective is one of being a hard-core cryonicist who was involved in clinical medicine for the better part of a decade. My biases about predicting the future could probably be summarized as follows: I have a lot of sympathy for the incrementalist view of progress - particularly in the highly regulated area of medicine. It’s regulated because it directly and powerfully touches people’s well-being and because it is not a very fault-tolerant area — mistakes are costly and since people like being alive (at least in the short run) they get edgy if an error separates them from their actuarial expectations.

I thus believe that any predictions about the future of medicine have to include what I call the “space program factor” (SPF). By this I mean simply that progress in the space program would have proceeded far, far faster (and thus approximated more closely what was theoretically possible) if it were not a high-visibility project with lots of political and social overtones which make it fault-intolerant — if you could burn up as many astronauts as you do test pilots every month, it would cost a lot less to get where you’re going. First-shot fail-safe engineering is costly. Medicine suffers from the same kinds of problems — witness the FDA as both the solution and the problem.

1.1.1 Diagnostics

I foresee a veritable explosion of diagnostic techniques and procedures. A large number of illnesses which are poorly understood today will be well-characterized the next decade and will be easy to diagnose very early in their development or even before they develop because they will be found to have direct or indirect genetic causes. Fairly predictive tests for Alzheimer’s disease, schizophrenia, depression, some malignancies, heart disease, and most of the rest of the major killers and disablers will probably be in place by 2000 to 2010. Many if not most of these ailments will be assessable in terms of a very sophisticated genetic risk profile which it will be possible to generate in infancy or childhood (or in utero). A wide range of genetic probes for illness-generating genes should be available by the end of the century.

A side-note: genetic associations have been a very fertile field for John Ioannidis, and a big study just blew away a bunch of SNP-IQ correlations.
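To make the aside concrete, here is a toy multiple-comparisons sketch (the SNP count and threshold are assumptions for illustration, not taken from any particular study) of why candidate-SNP literatures generate so many associations that later get blown away:

```python
# Why SNP-association literatures are fertile ground for Ioannidis-style debunking:
# test enough variants at a lenient threshold and false positives are guaranteed.
# The counts below are illustrative assumptions, not from any particular study.
import random

random.seed(1)
n_snps = 10_000  # candidate variants with NO true effect (assumed)
alpha = 0.05     # nominal significance threshold

# Under the null hypothesis a p-value is uniform on [0, 1], so each null SNP
# "reaches significance" with probability alpha.
false_hits = sum(random.random() < alpha for _ in range(n_snps))

print(f"{false_hits} of {n_snps} null SNPs come out 'significant' at p < {alpha}")
# Expected ~500 spurious hits, which is why genome-wide work demands p < 5e-8
# and out-of-sample replication before a SNP-trait association is believed.
```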

Real-time diagnosis will also be revolutionized by the turn of the century. The next 10 to 15 years will see increasing miniaturization of sensors and chemistry packages. Tissue probes or biosensors which can measure a wide array of biological and biochemical factors will be packaged in very small, very stable devices which hold calibration over prolonged periods of time (weeks to months to years) and which can easily be inserted into the patient’s body or tissues. For example, I foresee multi-sensor units mounted on very small needle or catheter tips which can be inserted intravenously, intracranially, intra-cerebrally, subcutaneously, and so on.

These sensors will be able to give real-time measurements of blood gases, pH, electrolytes, enzyme levels, and a host of other biochemical parameters that now involve costly, time-consuming, and/or impossible “laboratory studies” requiring withdrawal of a sample and processing. Real-time biosensors will revolutionize acute care of critically ill patients.

…The first generation of these devices should be in the marketplace somewhere between 1990 and 1995. More sophisticated instruments capable of a wider array of measurements will quickly follow. These sensors will also have a profound impact in acute stabilization of patients in a field setting. It will be possible for paramedical personnel to quickly and effectively insert such instruments in an acutely ill patient — a victim of cardiac arrest or trauma, and immediately and globally assess that patient’s condition, relaying that information to an expert (more on who that expert will be later).

…Diagnostic imaging should rapidly come down to a battle between ultrasound and MRI (NMR; (nuclear) magnetic resonance imaging). Because ultrasound units owe their size and weight almost entirely to the computer that processes the information, the size and effectiveness of these units will change on the same rapid exponential curve as the size and power of computers. MRI is a technology which has some other physical limitations, but by the year 2000, even MRI units will be far smaller, less costly, and capable of far, far better results. Bedside units or “on floor” units (i.e., units in the ICU or CCU) may be available for repeated assessment of the patient’s condition. MRI and its grandchildren and cousins should in particular be expected to undergo considerable refinement. Metabolic MRI will also be in wider use, allowing for real-time evaluation of the metabolic and working state of patients’ hearts, brains, and other organs. By 2000 to 2010 the cost and size of these units may be drastically reduced and they may be in field use for acute metabolic and structural evaluation of patients with trauma or in cardiac arrest.
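As a rough sense of the scale implied by “the same rapid exponential curve as the size and power of computers”, here is a toy doubling-time calculation (the doubling times are assumptions for illustration, not measurements of 1988-2008 hardware):

```python
# Toy back-of-envelope: growth implied by "the same exponential curve as computers".
# The doubling times below are illustrative assumptions, not measured values.

def fold_improvement(years: float, doubling_time_years: float) -> float:
    """Return the multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_time_years)

horizon = 2008 - 1988  # the article's 20-year forecast window
for doubling in (1.5, 2.0, 3.0):  # assumed doubling times in years
    print(f"doubling every {doubling:>3} yr -> ~{fold_improvement(horizon, doubling):,.0f}x in {horizon} yr")

# doubling every 1.5 yr -> ~10,321x in 20 yr
# doubling every 2.0 yr -> ~1,024x in 20 yr
# doubling every 3.0 yr -> ~102x in 20 yr
```

The point is only that anything riding on computing improves by orders of magnitude over a 20-year horizon, whatever the exact doubling period.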

I recently learned that, besides the usual blame for increasing medical costs, some categories of doctors have been strenuously urged to reduce MRI use as actively harmful.

By the late 1990’s there should be an answer to this problem in the development of the Portable Doctor or Expert Medical Device (EMD). The EMD will be both a diagnostician and therapist integrated into one unit. In an emergency medical setting (either in an ambulance or in an ICU or CCU) this powerful computer will be directly coupled to a wide array of both simple and complex medical assessment devices….EMDs will be a very hot item. Initially (i.e., the 1990’s) they will be confined to ambulances and the ICU, CCU, and specialty areas of the hospital, such as radiology and cardiology labs. But there will be powerful incentives for wider application of these devices. As computing capacity drops in cost and increases radically in sophistication (i.e., parallel processors, neural networks, truly massive memories, and so on) expert medical (and other) systems will see increasing application. There will be devices on the market such as a “Home Doctor” diagnostic program, which will basically be an internal medicine physician in a can.

…After 2000, many people will probably have a small sensor array permanently implanted and coupled to telemetry equipment which can be activated to call for help or alert the person that trouble is brewing. People with a known risk of sudden health problems will be the first to use these kinds of devices. With the development of smaller and cheaper telemetry equipment (directly linked to large-antenna satellites), separate telemetry arrangements will disappear. Implantable, computer-controlled defibrillators are already a reality; analogous devices to deliver drugs in case of cardiac or brain infarct (stroke) will eventually become reality.

1.1.2 Resuscitation

Expect a shift back to open-chest heart massage and away from closed-chest massage in medical and perhaps even paramedical settings. Closed-chest CPR will be realized to be ineffective at maintaining cerebral viability and will be replaced by far more effective open chest methods. In paramedical (i.e., field) settings the emphasis will be on very rapid defibrillation — or actually “leaving the patient alone” until circulation can be effectively restored and medications given to inhibit reperfusion injury. Closed chest CPR and restarting circulation by laymen “in the field” will be realized to be doing more harm than good and there may well be a move away from field CPR, with laymen being instructed to leave the patient without circulation until it can be restarted adequately and under controlled conditions.

By the late 1990’s, extended use of CPR will be a thing of the past and major metropolitan areas will have “death reversal units” (DRUs) in emergency rooms and perhaps even in larger paramedical units. The DRUs will employ rapid femoral cut-downs and blood-pump/oxygenator supported resuscitation to recover people who have suffered extended periods of ischemia (in the 30 minute to 1 hour range). CPR will be realized very often to be ineffective at recovering patients who are profoundly ischemic and the advent of pharmacologic intervention allowing for cerebral resuscitation will provide tremendous pressure for emergency rooms to develop the capability to very rapidly put an ischemic patient on bypass and completely and adequately support his circulatory and respiratory needs until his brain can recover and/or his heart can be repaired and restarted. An intermediate scenario would be the development of small, flexible impeller pumps that can be collapsed and passed through a large bore percutaneous catheter through the femoral artery and into the abdominal aorta. Such a pump (acting much like the propeller on an outboard boat motor) could then be used to supplement CPR, perhaps providing 2–3 liters per minute of cardiac output.

…Another effect of drugs like the lazaroids and calcium channel blockers will be the more effective treatment of acute injuries to a wide range of tissues such as the spinal cord and brain. Much of the damage that occurs to these tissues is free radical related and can be inhibited by use of these drugs…Intervention into secondary inflammation will be most important in the brain and spinal cord. Deployment of these techniques will result in the salvage of many spinal cords that would be considered irreversibly injured by today’s medicine. There will be far, far fewer paraplegics. However, expect an increase in the number of permanently brain-injured patients and in the number of patients with “subtle” forms of cerebral injury resembling mild stroke or the cognitive or mood disorders seen in diseases like multiple sclerosis or acute head injury. These disease states will result because people with brain trauma who would have died acutely from secondary free radical mediated injury (cerebral edema and so on) will be saved with lazaroids and other cerebral rescue techniques.

1.1.3 Antibiotics

The next twenty years should see many powerful new antibiotics engineered directly from knowledge of the structure of the relevant microbial enzyme which it is desired to inhibit. Not only will these antibiotics be more powerful, but because they do not exist in nature, strain resistance will not so easily develop toward them as it has for the antibiotics of today.

In addition, the next generation of antibiotics will include many which have been designed for effect against viruses, an area where medicine is presently largely powerless.

The pharmaceutical industry and antibiotics have been a case-study in stagnation, failure, and diminishing marginal returns. There is only one such antiviral that I have heard of, the highly experimental DRACO; in a followup email, Darwin responded to someone else pointing out DRACO:

Finally, while Geoff cites this putative advance in antiviral drug therapy, the fact is that my prediction about a plethora of new and highly effective targeted molecular antimicrobials by 2008 was WRONG. In fact, antibiotic research is all but dead, and there are virtually no fundamentally new antibiotics in the drug pipeline. This should scare the crap out of all of us, because we are rapidly approaching complete antibiotic resistance with a number of common and highly lethal bugs, including staph (MRSA), streptococcus, E. coli, pseudomonas and candida. It is only a matter of months to a few years, at most, before completely antibiotic-resistant staph and streptococcus emerge. Pharmaceutical companies have a large negative incentive for developing new antimicrobials. At a cost of over a billion dollars per new drug (regulatory) and the high risk of withdrawal of the drug within 5 years (2 out of 3), as well as the near certainty of punishing litigation for adverse effects, antibiotics are not merely uneconomical to develop, they are fiscal suicide. Only drugs that will be chronically used by very large numbers of patients are now worth developing.


(This agrees with my own general impressions, which I didn't feel competent to baldly state.)

1.1.4 Immunology and cancer

…Monoclonal and synthetic antibodies carrying toxins or regulatory molecules will be used to turn off or destroy the fraction of immune cells which initially respond and proliferate when a transplant is carried out. More widespread transplantation of tissues will be undertaken, including transplantation of limbs and scalp. Xenografts will be used increasingly in the mid to late 1990’s and it will not be uncommon for people to have pancreatic tissue from bovine or porcine sources and perhaps hearts, lungs, and livers from other animals. Expect the first workable transplants to be from great apes (chimps, gorillas, orangutans), with porcine and bovine grafts coming later.

Immunology and immunotherapy will also be revolutionized by a far more complete understanding of the immune system resulting from the AIDS epidemic and basic research in the immunology of diseases such as multiple sclerosis and aging. The ability to rapidly and cheaply synthesize bioregulatory molecules will open up a wide array of therapeutic possibilities. Expect effective treatments for most autoimmune diseases (lupus, multiple sclerosis, myasthenia gravis, and so on) by the mid to late 1990’s. The mid to late 1990’s should also see the wider application of immunorestoratives for use with the aged and ill. Cancer therapy will improve considerably as a result of these advances as well as a result of selective targeting techniques. By the early to mid–1990s the first generations of monoclonal antibodies linked to chemotherapeutic agents or powerful natural toxins will be used against a few cancers.

1.1.5 Atherosclerosis

Atherosclerosis will undergo a very marked but nevertheless gradual reduction in frequency and severity of occurrence as physicians slowly become educated about what is already known and begin to use existing therapeutic modalities more aggressively. By the mid to late 1990’s it will be more widely understood that atherosclerosis can be reversed, and there will be wider use of drugs such as lovastatin to reduce serum cholesterol, coupled with sound dietary advice. However, even well into the late 1990’s and perhaps beyond, atherosclerotic disease (heart attack, stroke, ischemic limb disease, and so on) will continue to be a serious source of morbidity and mortality. By the late 1990’s, 2nd and 3rd generation therapies will be coming on-line which will be able to reverse atherosclerotic disease and more directly inhibit it

1.2 Part 2

1.2.1 Anesthesia

Expect “modular” anesthesia by the 1990’s to the early 2000’s. The development of potent anxiolytics (anxiety removers) which do not depress consciousness and the development of total pain inhibitors will allow for complicated surgical procedures on conscious patients. Expect to see major thoracic and limb surgery on high risk patients (i.e., patients unable to tolerate anesthesia) using such agents. Major abdominal surgery requiring deep muscle relaxation will continue to require skeletal muscle paralysis and general anesthesia. However, expect new drugs in the marketplace in the late 1990’s which induce unconsciousness without respiratory or cardiac depression.

Surgical and post surgical mortality will decrease sharply due to such anesthetics and the use of real-time physiological and biochemical monitoring during and after surgery using biosensors.

1.2.2 Surgery

…Catheters, laparoscopes, and thoracoscopes with sensors, operating tools, and an impressive array of capabilities will be increasingly used. Abdominal surgery will shift more and more towards the use of the fiberoptic laparoscope, endoscope, and laser as miniaturization of tools occurs and disease is diagnosed earlier. Early diagnosis will create the need for less drastic procedures.

Fine-tuned repair of heart valves and blood vessels, and examination and biopsy of suspected abdominal and retroperitoneal lesions will be early candidates for application of this technology.

…In contrast to therapeutic surgery, the frequency of cosmetic surgery will probably increase dramatically as techniques are refined and prosthetics improve in quality and drop in cost. As people live longer, and stay productive longer as well, they will increasingly turn to medicine to maintain not only their health but their appearance. Cosmetic surgery will experience a boom until such time as the fundamental mechanisms underlying the aging process can be brought under control.

1.2.3 Geriatrics

Advances will be slow here, but significant. Expect increasing understanding and application of trophic factors and bioregulatory compounds. Early candidates for rejuvenation will be the immune system and other stem cell systems or systems with higher cell turnover. By the early decades of 2000, significant rejuvenation and geroprophylaxis of skin, bone, immune, and other “high turnover” tissues will be possible as the natural regulatory molecules which control these systems are understood and applied.

…By the early years of the 21st century the first generation of compounds effective at “rejuvenating” (i.e., restoring some degree of normal maintenance and repair to existing brain cells) the central nervous system will be available. These drugs will work by turning on protein synthesis and stimulating natural repair mechanisms.

However, pathologies of the brain and other non-dividing tissues (renal, cardiac, and musculoskeletal system) will continue to be major sources of morbidity and mortality over the next two decades. As atherosclerosis and immune-related disorders are dealt with more effectively, expect an increasing shift of morbidity and mortality to central nervous system-related causes. Beyond 2000 this may be treated to a limited extent with fetal transplant

We all know how well this has worked out. More troubling is that in some respects, we appear further from any solutions or treatments than before; while resveratrol did well in a recent human trial, the sirtuin research that seemed so promising has been battered by null results and failures to replicate. And anti-aging drugs have their own methodological difficulties; from the followup email:

Antiaging drugs are unlikely to be free of adverse effects. In fact, it seems very likely that they will be burdened with many adverse effects and that they will even kill a minority of people who use them. The common perception is that antiaging drugs will make people super fit, healthier and more resistant to disease. And yet, in calorie restriction and effective antiaging drug studies there is emerging evidence that slowing aging comes at the cost of interfering with fundamental processes that make organisms fitter for both reproduction and for surviving in a hostile environment.

Consider the putative antiaging drug rapamycin. It seems likely that rapamycin interferes with senescence by affecting the PI3-kinase and TOR: PIKTORing cell growth pathways. This almost certainly means that in some individuals there will be serious and even lethal side effects - cancer being one of them. [Persons with a history of promiscuity, and thus a heavy burden of chronic viral infection, and those with certain "unfavorable" genotypes will likely be at very high risk.] But, beyond cancer, interfering with these fundamental and deeply evolutionarily conserved pathways is likely to cause a range of adverse effects that negatively (and possibly irreversibly) impact normal body functions, such as energy level, cognition, sexual performance, and so on. While some people are now using rapamycin as an antiaging drug...it is virtually inconceivable that any major pharmaceutical company anywhere in the world would (or will) market such a drug for "normal" aging. This is important to understand because it gives us basic insight into what will almost certainly be a major barrier to the development and marketing of antiaging drugs: they will necessarily be used by large numbers of people over the course of many decades (and thus millions of drug/person years) and they are incredibly unlikely to be free of adverse, and sometimes even lethal, side effects.

1.2.4 Psychiatry & Behavior

Diagnosis by brain scanning (metabolic MRI) and chemical analysis of cerebrospinal fluids will be commonplace in 20 years. As neuroregulatory compounds are better understood and as the biochemistry underlying mental disorders is elucidated there will be more effective treatments. Expect 2nd and 3rd generation drugs and combinations thereof for treatment of depression and psychosis by the late 1990’s. There will probably be several very effective therapeutic agents for compulsive disorders in the marketplace by the early to mid 1990’s.

From the previously quoted followup email:

Similarly, psychiatric drugs (which are typically chronically used) are no longer economical to develop and market because of the litigation costs associated with them. Widespread chronic use of any drug means that adverse effects that were impossible to detect in the testing phase of the drug development process are almost certain to emerge. Statistics rule in drug development, and a Phase III study that lasts a year and enrolls 5,000 patients is simply not adequately powered to predict what will happen when 5 million patients take a drug for 20 years! The only way to get that data is to do that study. And therein lies a powerful caution about antiaging drugs. These drugs will likely need to be taken starting in young adulthood, or in middle age, at latest, and they will need to be taken for a lifetime. Indeed, if they are effective, for a longer lifetime than any but a few super-centenarians have previously lived.
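The power argument in that email can be made concrete with a minimal sketch, assuming a rare adverse event occurring at a fixed rate per patient-year (the rate and exposure figures are illustrative, not data on any particular drug):

```python
# Probability of observing at least one case of a rare adverse event,
# assuming independent exposure (Poisson approximation). All rates are
# illustrative assumptions, not data about any particular drug.
from math import exp

def p_at_least_one(rate_per_patient_year: float, patients: int, years: float) -> float:
    expected_cases = rate_per_patient_year * patients * years
    return 1 - exp(-expected_cases)

rate = 1e-5  # assumed: 1 serious adverse event per 100,000 patient-years

trial = p_at_least_one(rate, patients=5_000, years=1)        # Phase III-sized study
market = p_at_least_one(rate, patients=5_000_000, years=20)  # post-marketing exposure

print(f"P(seen in trial)  = {trial:.1%}")  # ~4.9%: the trial usually misses it entirely
print(f"P(seen in market) = {market:.4f}") # ~1.0000: essentially certain once widely used
print(f"Expected cases on the market: {rate * 5_000_000 * 20:,.0f}")  # ~1,000 cases
```

A trial of the quoted size usually never sees the event at all, while the marketed drug produces it on the order of a thousand times; only post-marketing exposure can supply that data, which is the email's point.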


1.2.5 Implants & Prosthetics

Early spectacular applications will be small vessel prostheses (wide use by the early to mid 1990’s) for use in traumatized and atherosclerotic limbs and organs and venous prostheses (mid to late 1990’s) for use in treating traumatic injuries and deep vein incompetence (which results in varicosities, chronic pain, and edema-related skin changes in the leg, often leading to non-healing ulcers or limb loss). Another application of non-thrombogenic surfaces will be a practical artificial heart and more widespread use of extracorporeal support for infants, trauma and cardiac arrest victims, and others where anticoagulation provides a major barrier to the use of artificial circulation.

…Good synthetic bone and skin should be available by the late 1990’s to early 2000’s. Good red cell and plasma substitutes (synthetic blood) should be seen increasing in clinical use throughout the early 1990’s and in frequent use by the late 1990’s to early 2000’s.

There will be steady improvement in other synthetic materials such as hip, knee, and other joints, as well as in other less dramatic materials such as connective tissue replacements. Expect a slow replacement of prosthetic approaches to therapy as natural repair and regeneration processes are better understood and utilized. Expect to see synthetic connective tissue products for tendon repair which contain bioregulatory molecules (BRMs) that stimulate tendon regeneration. Artificial tendons made of both synthetic and/or natural materials will come into use in the late 1980’s to early 1990’s. In short, expect stunning advances in tissue replacement technology for all tissues that have primarily structural function and which are not complicated chemical processing plants, such as the liver or kidneys, or mechanically active such as the heart. In addition to connective tissue and bone, a candidate for early (late 1980’s to early 1990’s) replacement is the cornea. Expect evolution in biocompatible materials to allow for replacement of the cornea with an appropriate plastic, much like the lens of the eye is already replaced with polymer inserts.

1.2.6 Hemodialysis

Advances in hemodialysis will also be very incremental. There may be a gradual shift to peritoneal dialysis (PD) if good drugs to block glucosylation of proteins and inhibit cholesterol deposition are available. The major problem with PD today is that it raises blood sugars to astronomical levels, causing diabetic-like side effects. Inhibition of these side effects may lead to renewed application of this modality.

Direct changes in dialysis are likely to be along the lines of better membrane materials which allow for transport of wastes not currently removable by conventional dialysis and nonthrombogenic surfaces which will reduce the need for anticoagulation. The use of BRMs such as erythropoietin to treat anemia and bone growth factors to treat dialysis bone disease will help to improve the quality and quantity of patients’ lives on dialysis.

Perhaps the biggest advance in this area will be advances in immunology and infectious disease treatment. The ability to administer BRMs to stimulate immune function and improve general health should act to extend dialysis patients’ lives considerably.

…Of course, the biggest improvement in the life expectancy and health of dialysis patients will probably come in the form of the increasing use of transplantation and its application to a wider age range of patients with better long term results.

The most striking revolution in prosthetics will probably occur in dentistry. Expect a whole family of new materials to enter the dental operatory. A workable vaccine against streptococcus mutans should be available by the mid to late 1990’s, greatly reducing the incidence of tooth decay by eliminating the major class of mouth organisms that cause it. Similar advances in prevention and in treatment of gum disease can be expected as well, although probably not as soon. Repairing dental defects will also be revolutionized by the introduction of good, tough, and reliable polymers which will replace metallic amalgams. By the late 1990’s to early 2000’s biocompatible ceramics and coated polymers will be available that will allow for workable single tooth and multitooth gum-implanted prostheses.

1.2.7 Organ Preservation

Ever since the work of people like Mazur, Fahy, and Pegg was published, it has become pretty clear what the constraints are on long term viable cryopreservation of organs: don’t form any significant amount of ice; it injures mechanically and it injures chemically. The problem is that water loves to turn into ice when it’s cooled below 0°C. To circumvent this, a lot of very drastic changes have to be made in the system. Whenever you attempt to make a drastic change in a complicated, interdependent living system — like replacing half the water in it with industrial chemicals — you are in for trouble. The trouble will come in the form of a very tight or narrow window for success: everything will have to be “just right.”

This is where current vitrification technology is now. The existence of such a tight window means that vitrification of large masses will be a technological tour-de-force requiring very sophisticated computer controlled perfusion equipment and exotic and very costly high pressure chambers. Quality control and reliable storage and rewarming of organs will be very costly and difficult.

The future holds the possibility of developing better solute systems which vitrify more easily and which are less toxic (have a wider window for success). It is difficult to predict the pace of advance in this area since it will be arrived at by a mixture of empirical methods and theoretical insights. A big determining factor will be luck. Will the NIH and the Red Cross continue to fund such efforts? And, more to the point, will technological advances in other areas of organ preservation obviate the need for them? If we were betting men, we’d put our dollars on the latter rather than on the former. Major advances in organ preservation (as opposed to cell and tissue preservation) over the next decade will probably be in three areas: 1) Extended hypothermic storage of organs in the 2 to 3 weeks range; 2) Extended normothermic or room temperature storage of organs in the weeks to months range and; 3) mixtures of the above two modalities which yield similar available time courses of storage.

…The next 5 to 10 years should also see major advances in our understanding of the effects of deep hypothermia on the tissues and organs of non-hibernating mammals. These advances should be readily translatable into better flush and perfusion storage techniques for organs. A good understanding of lipid metabolism and mechanisms of cell swelling in deep hypothermia may allow for preservation of organs in the 2°C to 10°C temperature range for periods of several months — thus definitively ending the need for long term solid state preservation of transplantable organs.

1.2.8 Other Approaches to Organ Preservation

One possibility for a major advance over the next two decades is room temperature or hypothermic preservation of organs or organisms using metabolic inhibitors. There have been tantalizing clues in the examination of a wide variety of estivators (animals which go into states of profoundly reduced metabolism at normal temperatures, such as the African lungfish, which can shut off metabolism at temperatures in the range of 30°C to 40°C) that anti-metabolite compounds exist which may be able to induce states of profoundly reduced metabolism at ambient (i.e., 70°F) temperatures.

1.2.9 Genetic therapy

Expect very gradual application of this technology. Early candidates for gene replacement will be in storage diseases such as Lesch-Nyhan, Tay-Sachs, and other “single enzyme missing” disorders. Later applications will include treatments for hypercholesterolemia, some forms of hypertension, and other congenital missing enzyme syndromes. Very late applications (2000 or later) may be in the treatment of a wide range of mental illnesses and cancers.

1.2.10 Prevention

The principal lesson is the lesson of the impact of calorie restriction on overall health, well-being, and lifespan. The basic message here is “you are what you eat.” In terms of treating atherosclerotic disease, the role of prevention is already clear. By reducing fat intake and decreasing serum cholesterol to below 150 mg/dl, most atherosclerotic disease can be avoided. Similarly, basic changes in nutrition such as trace element and vitamin supplementation can greatly reduce the number of late onset malignancies. Eliminating smoking will also be a major factor in achieving this end….Calorie restriction achieved by means of education and therapeutic agents seems the next big area of preventics to be explored by medicine. Expect the development of truly effective anorectics for treatment of gross obesity and eating disorders by the late 1980’s and then secondary use of these for treatment of mild obesity and weight control in the normal middle aged. Products with reduced calories employing fat substitutes such as sucrose polyester should also be entering the marketplace in the early 1990’s and these will help to reduce the calorie load further.

1.2.11 The Downside

A little information is a dangerous thing, and sometimes a lot of information can be an even more dangerous thing. The reason is that progress in therapeutics, which is relatively difficult, always lags far behind progress in diagnosis, which is relatively easy. This imbalance results in a tension which forces premature treatment which often does more harm than good. It is well to note that each new diagnostic modality brings with it a flood of new information which will at first be grossly misused before anyone understands what it means (Harris’s Law of Diagnostics Advance).

A recent example of this sort of thing is the EKG machine, which for the first time showed that many seemingly normal people had strange cardiac rhythms, some of which were seen also around the time people died suddenly of heart problems. Because of this association, for the last 15 years, a number of very powerful drugs have been used to treat people with such rhythms. Many drug-induced fatalities resulted. Unfortunately, only now is it beginning to be understood that most people with good heart function are less in danger from such rhythms than they are from the drugs used to treat them — a finding of little consolation to the people already killed by the drugs.

And on to the economics:

There is a second downside to advanced medicine, of course, besides the danger, and that is the cost of “middlingly advanced technology” (such as what we’ll see in the next fifty years) in a society that takes a socialistic view of health care. Such as ours.

Non-molecular technology is expensive. It should be obvious to the reader, with a bit of thought, that in a world of non-molecular technology, the potential demand for medical care, as technology advances, is (for all intents and purposes) infinite. In America, we have adopted the unfortunate policy of letting everyone pay for everyone else’s medical care, which has had exactly the same result as if we had let everyone pool their money and pay for each other’s lunch: everyone orders lobster. We have paid for the lobster only by spreading the costs around to places where they are not obvious. For instance, when you buy an American car, you pay more money for the health care costs of the people who built it than you do for the steel that goes into it. This kind of thing can continue very subtly and very insidiously until a very large fraction of the gross national product is eaten up by health care costs. (In our country, it is already 11% and rising).
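A rough sketch of the arithmetic behind “already 11% and rising”: if health spending outgrows the economy by a few percentage points a year, the share compounds (the excess-growth rates below are assumptions for illustration, not historical estimates):

```python
# If a spending category grows faster than GDP, its share of GDP compounds.
# Starting share and excess growth rates are illustrative assumptions.

def share_after(start_share: float, excess_growth: float, years: int) -> float:
    """Approximate share of GDP after `years`, if the category outgrows GDP by `excess_growth` per year."""
    return start_share * (1 + excess_growth) ** years

start = 0.11  # the article's 1988 figure: ~11% of GNP
for excess in (0.01, 0.02, 0.03):
    share_2008 = share_after(start, excess, years=20)
    print(f"+{excess:.0%}/yr over GDP -> {share_2008:.1%} of GDP by 2008")

# +1%/yr over GDP -> 13.4% of GDP by 2008
# +2%/yr over GDP -> 16.3% of GDP by 2008
# +3%/yr over GDP -> 19.9% of GDP by 2008
```

For comparison, US health spending had in fact reached roughly 16% of GDP by 2008, close to the middle scenario.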

One day, you may find that you have had to forego your family vacation in order to buy Granny that new AUTODOC which measures 245 different chemicals in her blood every minute and transmits all of the results to Medical Multivac in Bethesda. Of course you may not realize this: all you will know is that the vacation went because money is so tight, taxes are so high, and inflation is so bad. But your money went to Granny nevertheless. The only answer to this problem, short of nanotechnology, is rationing.

But rationing itself becomes the last great social cost of advanced medical technology under socialism, because history shows that it is never done on an individual (person by person) basis. When people do not pay for their own medical care, no one (not doctors, families, or the government) has ever been willing to make the decision of who should benefit from a given technology, and who should not. Therefore, all systems of rationing to control medical costs ultimately have come down in the past to rationing technology across the board…So all of the rosy predictions made in this article must be tempered with the “social” realities that medicine will have to deal with in the next 20 years. Many of the advances we have discussed may simply not materialize because we are not wealthy enough to afford them collectively. That will be a great tragedy.

2 Reactions

On reading all the foregoing, I commented: that was a depressing read. As far as I can tell, they were dead on about the dismal economics, somewhat right about the diagnostics, and fairly wrong about everything else. Which is better than the old predictions listed, only one of which struck me as obviously right (but in a useless way: who actually uses perfluorocarbons for liquid breathing?).

To which Darwin said:

At the time I wrote it I kept saying to myself, almost none of this stuff is going to happen in 20 years - not here anyway. However, I have to come up with something.

Ironically, in the area of cerebral resuscitation, where I am a supposed “expert,” I tried very hard to be realistic and to be both accurate and precise. That was arguably the area where I did the worst - exactly the way other experts fare when they try to predict the future of their fields…So, here I am, 24 years out from making those predictions and I read the crap posted on Less Wrong and on Cryonet and I don’t know whether to scream in rage and anger, or weep. How is it possible to reach and convince this new generation of cryonics “passivists” that Yudkowsky and Alcor are breeding and make them understand that progress will continue to be unacceptably slow unless the system itself is changed?

See also Fight Aging!’s post, “Overestimating the Near Future”:

…Many of the specific predictions in the article were in fact demonstrated in the laboratory to some degree, and were technically feasible to develop as commercial products by the year 2000, and in some cases earlier but at much greater expense. Certainly there are partial hits for many of the predictions by 2010, in the sense of it being possible, somewhat demonstrated, or in the early stages of being shown to be a practical goal. Yet the regulatory environment in much of the developed world essentially rules out any form of adventurous, rapid, highly competitive development in clinical medicine - such as exists in the electrical engineering, computing, and other worlds. We are cursed therefore with the passage of many years between a new medical technology being demonstrated possible and then attempted in the marketplace … if it ever makes it to the marketplace at all. This must change if we are to see significant progress.

Darwin comments there:

I’ve been going over my original manuscript and surfing the web for specific applications (approved or in process) which meet the criteria of my predictions of 24 years ago. While many of my “lesser” predictions are in fact being realized (often in ways totally unforeseen by us when we wrote the article) overall it is a profoundly depressing experience.

Perhaps nowhere has that been more true than in the areas of aging and cerebral resuscitation - two fields of endeavor I’ve spent a lifetime working on, or intimately involved with those who are. In 1999, we announced that we had achieved repeatable recovery of dogs following 16+ minutes of whole body normothermic cardiac arrest with no neurological deficit. The enabling molecules and techniques (principally a combination of melatonin, alpha-phenyl-n-tert-butyl-nitrone (PBN), and mild post-cardiac arrest therapeutic hypothermia) all seemed eminently applicable in the (then) immediate future. Indeed, an analog of PBN, 2,3-dihydroxy-6-nitro-7-sulfamoyl-benzo(F)quinoxaline (NBQX), had passed Phase I and II clinical trials for the treatment of stroke with flying colors, and seemed destined for approval.

That was 13 years ago, and there is still not a single drug available (approved or otherwise) anywhere in the world to treat cerebral ischemia-reperfusion injury - the real killer in cardiac arrest and stroke! Do a literature search on Pubmed for melatonin + cerebral ischemia and you will get ~130 hits - almost all of them dramatically positive. Melatonin is a naturally occurring bioregulatory molecule which is inexpensive and freely available as an over the counter “nutrient.” Even as a stand alone molecule, melatonin is powerfully protective in both global and regional cerebral ischemia, and yet no human application has been forthcoming. It’s been 15 years since our patent on melatonin and other cerebroprotective molecules was issued, 17 years since the patent was applied for, and over 20 years since I made the discovery! Indeed, mild therapeutic hypothermia, made the supposed standard of care for post cardiac arrest neuroinjury nearly a decade ago, is still largely ignored and is used well in only a handful of hospitals worldwide.

What kind of black irony is it that I live in terror of stroke and cardiac arrest (for both myself and my loved ones) and yet the very molecules I discovered to combat them are as unavailable as if they had never been found? Change? Yes, change is certainly needed, and soon.

3 Further reading

Previous Darwin-related posts:

See also Tyler Cowen's The Great Stagnation and “Peter Thiel warns of upcoming (and current) stagnation”.

4 Comments

Ow Ow Ow Ow

Weirdly enough, there is one prediction that looks like it panned out:

Repairing dental defects will also be revolutionized by the introduction of good, tough, and reliable polymers which will replace metallic amalgams. By the late 1990’s to early 2000’s biocompatible ceramics and coated polymers will be available that will allow for workable single tooth and multitooth gum-implanted prostheses.

It would have to be in the single least life-critical area.

A lot of those areas turned out to be intrinsically harder than anyone expected. Oncology, Alzheimer's...

One thing that I just cannot understand: We had semi-workable artificial hearts 30 years ago. Now, yes, it is hard to make surfaces biocompatible. Still, that has been accomplished in many cases. As a society, we are reasonably good at mechanical engineering. How come a quarter of us still lose our lives to the failure of a pump? We hear all the time about global warming, and sustainable this and recyclable that, and sometimes about what NASA might do. Prioritizing any of those things ahead of a decent permanent artificial heart is crazy.

It would have to be in the single least life-critical area.

I thought the increase in cosmetic surgery prediction was also very accurate, even if the US is not yet competitive with (say) South Korea.

Oops, missed that area, even less life-critical, Many Thanks! (Can I construe the replacement of solid metal crowns with polymers and ceramics as a cosmetic change, and therefore being in an area of overlap? :) )

Depends. Weren't solid metal crowns often involved with mercury amalgams? Whenever mercury is involved with anything, I have an unshakeable suspicion that someone is being harmed somewhere. It's a little like lead or a bloody body: maybe it's perfectly innocent and there's a reasonable explanation why you should not be worried by its presence... but don't bet on it and call the police.

Hmm - as far as I know, the metal crowns that I have didn't require any mercury amalgams.

I have three such ceramic implants. I remember having them put in over a simple half-hour operation, being awed by the amazing advances that medicine had made to allow me to carry on my life as if I hadn't knocked my teeth out at all. Little did I know that this was one of the only success stories of the last decade of medicine!

Really interesting. Much more accurate outside of his specialty.

This puts me in a tricky situation. My general position has been that it would be better for the US public if the FDA even more strongly regulated against treatments which do not go through the full clinical trial gamut with high marks. I don't like it when people who are in crisis are vulnerable to getting ripped off by snake oil salesmen.

But it seems like, as so often happens, there's a net marginal utility issue going on here. The FDA, in attempting to slow down potentially harmful and/or worthless medicine, also slows down legitimately awesome medicine from reaching the public as quickly as it could.

And then again, there is the problem the article mentions of diagnosis outpacing treatment, and the resulting over-confidence in the net safety of new medicines.

So what from a policy perspective is needed here? Let's say I now have exclusive, uncontested, and (at least until I take action) unresented control of the FDA. What should I do with it?

But it seems like, as so often happens, there's a net marginal utility issue going on here. The FDA, in attempting to slow down potentially harmful and/or worthless medicine, also slows down legitimately awesome medicine from reaching the public as quickly as it could.

The harm from delayed treatment is estimated to be around an order of magnitude larger than the damage prevented by FDA standards. The FDA's current position is far too cautious.

So what from a policy perspective is needed here? Let's say I now have exclusive, uncontested, and (at least until I take action) unresented control of the FDA. What should I do with it?

This would require more control than just the FDA, but entirely abandon restrictions on what drugs / treatments doctors can prescribe. Require all patient data to be public (once anonymized). Rather than treating things as binary (either the drug is good enough or not good enough) you let people set their own thresholds and decide on the best data available.

The harm from delayed treatment is estimated to be around an order of magnitude larger than the damage prevented by FDA standards. The FDA's current position is far too cautious.

Do you have any source for this? I'd be interested in reading it.

Here is the first google result.

Good discussion, and a good reference.

Re DSimon's:

My general position has been that it would be better for the US public if the FDA even more strongly regulated against treatments which do not go through the full clinical trial gamut with high marks. I don't like it when people who are in crisis are vulnerable to getting ripped off by snake oil salesmen.

FDAReview says:

If the U.S. system resulted in appreciably safer drugs, we would expect to see far fewer postmarket safety withdrawals in the United States than in other countries. Bakke et al. (1995) compared safety withdrawals in the United States with those in Great Britain and Spain, each of which approved more drugs than the United States during the same time period. Yet, approximately 3 percent of all drug approvals were withdrawn for safety reasons in the United States, approximately 3 percent in Spain, and approximately 4 percent in Great Britain. There is no evidence that the U.S. drug lag brings greater safety.
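A quick back-of-envelope check on why a 3% vs 4% withdrawal rate counts as “no evidence” of a safety difference - a sketch assuming each country approved on the order of 200 drugs in the study period (an assumed round number; Bakke et al. report the actual counts):

```python
# Back-of-envelope two-proportion z-test for the US vs UK safety-withdrawal rates.
# The approval counts below are assumed round numbers for illustration;
# Bakke et al. (1995) give the actual figures.
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Normal-approximation z statistic and two-sided p-value for H0: p1 == p2."""
    x1, x2 = p1 * n1, p2 * n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# ~3% of ~200 US approvals withdrawn vs ~4% of ~200 UK approvals (assumed counts)
z, p = two_proportion_z(0.03, 200, 0.04, 200)
print(f"z = {z:.2f}, two-sided p = {p:.2f}")  # ~0.59: far above 0.05, no detectable difference
```

Even with several times as many approvals, a one-percentage-point gap stays inside sampling noise, which is the sense in which the withdrawal rates are "approximately the same."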

(This was actually something of a surprise to me. My wife has been on a couple of medications which have since been withdrawn, so I've been getting increasingly uncomfortable with the general level of safety of pharmaceuticals, and I'd been leaning in DSimon's direction on this - but this evidence says that increased scrutiny from the FDA hasn't been helping.)

The section on off-label uses of drugs is also very persuasive:

Yet any textbook or medical guide discussing stomach ulcers will mention amoxicillin as a potential treatment, and a doctor who did not consider prescribing amoxicillin or other antibiotic for the treatment of stomach ulcers would today be considered highly negligent. Off-label uses are in effect regulated according to the FDA’s pre-1962 rules (which required only safety, not efficacy), whereas on-label uses are regulated according to the post-1962 rules.

I'm not sure I'd quite agree with

entirely abandon restrictions on what drugs / treatments doctors can prescribe.

The phase I trials, looking for human toxicity, still sound reasonable. To my mind, it does look like the efficacy trials should be moved out of the FDA - basically crowd-sourced as post-market data gathering.

I agree with

Require all patient data to be public (once anonymized).

It would help if as many groups as possible have the opportunity to dig through the data. One caveat/suggestion: Epidemiological studies tend to be terrible at giving solid conclusions. Double blind randomized studies are able to cancel out far more of the confounding variables. I suggest that, for any medical decisions that are anywhere close to a 50:50 decision, an incentive be offered to explicitly randomize the decision and record that fact, along with the other outcome data on the case. Where there is uncertainty anyway, this won't hurt the participating patients on average, and it would embed a continuous stream of randomized trials in the available data.
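A minimal simulation of why that embedded-randomization suggestion pays off, with made-up numbers: when doctors' choices between two near-equivalent treatments track patient severity, the naive observational comparison is confounded; when a recorded coin flip settles the decision, the same naive comparison recovers the true effect.

```python
# Toy simulation: confounded observational comparison vs. recorded randomization.
# All severities and effect sizes are made-up illustrative numbers; for simplicity
# every decision in the "randomized" arm is treated as a ~50:50 close call.
import random

random.seed(0)
TRUE_BENEFIT = 0.05  # assumed: treatment B raises recovery probability by 5 points

def simulate(n=100_000, randomized=False):
    outcomes = {"A": [], "B": []}
    for _ in range(n):
        severity = random.random()            # 0 = mild, 1 = severe
        if randomized:
            choice = random.choice("AB")      # recorded coin flip
        else:
            # Doctors tend to give B to sicker patients: confounding by indication.
            choice = "B" if random.random() < severity else "A"
        p_recover = 0.9 - 0.5 * severity + (TRUE_BENEFIT if choice == "B" else 0.0)
        outcomes[choice].append(random.random() < p_recover)
    return {k: sum(v) / len(v) for k, v in outcomes.items()}

obs = simulate(randomized=False)
rnd = simulate(randomized=True)
print(f"observational: B - A = {obs['B'] - obs['A']:+.3f}  (biased; B looks harmful)")
print(f"randomized:    B - A = {rnd['B'] - rnd['A']:+.3f}  (close to the true +0.050)")
```

The recorded flip is what would let later analysts separate the randomized subset from the confounded bulk of the data.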

If ever a post needed a summary break, it's this one. Please edit it in.

[edit] Thanks!


A what?

[This comment is no longer endorsed by its author]

The big story of our times is the Great Convergence - the formerly dirt-poor 90% of the human population rapidly increasing their wealth, health, political freedoms, etc. This is accompanied by stagnation in the formerly super-wealthy 10% of the human population (and there are some models claiming to explain why these two processes are linked).

Technology progresses extremely rapidly; rich-world stagnation simply means rich countries are further behind the technological frontier than they used to be. That's all.

The world on average is progressing extremely rapidly. The average lag behind the technological frontier is diminishing rapidly. What's the point in throwing a ridiculous amount of money at saving the life of one old person in a wealthy country, if you can make hundreds of undercapitalized people in poor countries productive for the same price? There's no logic in doing so, so it is not done.

It will take only 100-150 years for the Great Convergence to complete. By simple extrapolation, adoption of new technologies should accelerate sometime before it happens.


What's the point in throwing a ridiculous amount of money at saving the life of one old person in a wealthy country, if you can make hundreds of undercapitalized people in poor countries productive for the same price?

Well, if that person is you or your grandfather, selfishness sounds like a pretty good and human reason.

Most people don't spend their own money on saving their grandparents; they spend other people's money. Don't act surprised that other people's willingness to throw tens of millions at your grandfather's last year is not unlimited.

Also, if people really cared about how long other people in their country lived, a total cigarette ban would be a super simple and super cheap way to start (especially since e-cigarettes are an existing and viable low-cancer substitute - people want the psychoactive bits, not the tar). And a trans fat ban - or at least a strict labeling requirement (which would amount to the same, since nobody wants that, and trans fats don't have any special taste or anything, they're just industrial poison in food). Or throwing some money at making roads safer (most accidents happen on a small fraction of bad spots). And in countless other ways. Throwing a ridiculous amount of money at people when they're oldest is a stupid way to achieve an already stupid goal.

And a trans fat ban - or at least a strict labeling requirement (which would amount to the same, since nobody wants that, and trans fats don't have any special taste or anything, they're just industrial poison in food).

I think you overestimate the degree to which people are intentional about their food intake.

IAWYC, but

a total cigarette ban would be a super simple and super cheap way to start

I don't think that would work. Marijuana is already illegal but people smoke it anyway, and the war against it costs lots of money.

e-cigarettes are an existing and viable low-cancer substitute

I've heard that lots of people start smoking for signalling purposes (some people even claim it's the only reason why anyone starts smoking at all), and I'm not sure e-cigarettes would send exactly the same signals.

The legal situation here in Australia of e-cigarettes being more restricted than cigarettes pisses me off when I think about it.

I was collecting papers on the topic of dependency on nicotine-replacement therapy (patches, gums, inhalers) the other day, and I was fascinated to read in explanations of why so little non-smoker data was available that, prior to 1996, you needed a prescription to buy them in the USA.

'So', I thought, 'before 1996, if you were over 18-21, you could buy any tobacco product you wanted in unlimited amounts, guaranteeing that you were cutting several years on average off your life expectancy; yet you could not buy any amount of nicotine patches, which come with essentially no side-effects and absolutely zero effect on life expectancy. Oh America!'

I'm now reminded of the brother of a friend of mine who has never smoked, but nevertheless has an annoying nicotine craving that stems from having tried out a nicotine patch in his teens.

"Total cigarette ban" doesn't mean "Very few people smoke", it means "Almost all smokers redirect their money to the black market; hope you like financing terrorist groups".

France has thrown lots of money at making people drive safely; it's working (and fines are a big part of it, so there's money recovered that way), but it doesn't seem miraculously impressive. Making roads themselves safer tends to encourage reckless driving, so it's not obvious there are big gains here.

France has thrown lots of money at making people drive safely; it's working (and fines are a big part of it, so there's money recovered that way), but it doesn't seem miraculously impressive.

Here's a list of countries by traffic fatalities per capita, per vehicle, and per distance travelled. The disparity is just ridiculously huge compared to death rates from just about any other cause, even between seemingly similar countries.

Making roads themselves safer tends to encourage reckless driving, so it's not obvious there are big gains here.

Except there's no serious evidence for that, and massive counter-evidence (see table above).

Also, if people really cared about how long other people in their country lived, a total cigarette ban would be a super simple and super cheap way to start [...] And a trans fat ban [...]

The lack of these bans doesn't mean people don't really care about lifespan; it could just mean they value something else more, such as the autonomy represented by being able to smoke cigarettes or eat trans fats. Or (as implied by some of this comment's siblings) they don't think those measures work well enough to justify whatever the bans' anticipated downsides.

[-][anonymous]12y-10

Most people don't spend their own money on saving their grandparents, they spend other people's money.

Upper middle class and the wealthy do this quite a bit, even in countries with universal healthcare.

Don't act surprised that other people's willingness to throw tens of millions at your grandfather's last year is not unlimited.

Why in the world would I? I don't care very much about other people's grandfathers; why should they care about mine? I'm indirectly willing to kill quite a few people to save my own life or that of my family. Even my friends are each worth more than one life to me, going off revealed preferences.

Also you are forgetting that the purpose of the state is basically to serve the desires of its citizens; it is not a global utility maximizer. That citizens of a country would cooperate for selfish gain is hardly unheard of. Also, we care more about people in our in-group than about people in our out-group. Many different people identify these groups by culture, subculture, company, religion, citizenship, ideology, profession, nationality, or language.

Poor people in Africa are far. We feel more idealistic and more moral thinking about helping them. We get more brownie points from signalling that we wish to or will help them than from helping local poor people. But we are ironically less likely to do anything for their benefit, since that is mostly a near action. We more accurately perceive that local poor people are sometimes nasty, but we end up helping them more anyway.

Also, when thinking about helping people in far places we are even less likely to be pragmatic about the best way to achieve this. Considering how much we fail even at helping those around us, this can be dispiriting.

Throwing ridiculous amounts of money at people when they're oldest is a stupid way to achieve an already stupid goal.

You should read "The Great Charity Storm". We systematically overspend stupidly on education, healthcare and helping poor people.

We systematically overspend stupidly on education, healthcare and helping poor people.

Wait, nothing in "The Great Charity Storm" indicates that we overspend on those things. It just says our spending in those areas has increased since 1800, and gives some theories as to why that might be. I would certainly agree that we don't spend money well in those areas, but it's not the quantity that's the problem.

Your statements about what we want to do (care for our in-group, donate to "far" causes to gain status) don't mean anything about what we should do. I recognize that I have little emotional reason to care about people I'll never meet who live very different lives from mine, but I believe it's wrong for them to suffer and die when I could easily prevent that. So I make an effort to think about things that make them feel nearer - their bodies hurt like mine, they protect their children, they make music, they fall in love.

"We're born to think this way" doesn't mean you can't try to change it.

[-][anonymous]12y20

I think it was implied rather strongly by the explanation he offered. I obviously think it plausible if not probable, else I wouldn't invoke it.

Around 1800 in England and Russia, the three main do-gooder activities were medicine, school, and alms (= food/shelter for the weak, such as the old or crippled). Today the three spending categories of medicine, school, and alms make up ~40% of US GDP, a far larger fraction than in 1800. Why the vast increase?

My explanation: we long ago evolved strong feelings of respect for these activities, but modern context changes have allowed out-of-equilibrium exploitation of such feelings.

We have evolved strong feelings regarding these activities that are no longer reliable in our modern context. Can you see why this implies we will be irrational not only in our decisions about how much to spend (even in our original context the intuitions were geared towards evolution's utility function, not our own) but also in how we spend on those things?

but I believe it's wrong for them to suffer and die when I could easily prevent that.

How familiar are you with the Far vs. Near material on Overcoming Bias? The reason I invoked it was to point out that when thinking in far mode we are more likely to consider such principles very important, yet in near mode much less so. And remember, both far and near are shards of desire.

How familiar are you with the Far vs. Near material on Overcoming Bias?

Medium-familiar? As a dichotomy, it seems useful if it lets you do things differently because of it. So if you recognize that your far-mode diet isn't working because your food cravings are near, it may be helpful to make more concrete, near-mode steps. Likewise, if your far-mode ideals say that it's wrong for people to die of TB when there's a cheap cure, but you never get around to acting on it or you instead donate to nearer but less efficient causes, doing something to put it in nearer mode might be helpful. As in, "I will look at pictures of people in countries where people die of stupid diseases and remember that they are regular people like me", or "I will donate to an efficient health charity and then have an ice cream sundae." (Though this may interfere with the far-mode diet...)

Also you are forgetting that the purpose of the state is basically to serve the desires of its citizens,

(mind-killed)

That's an interesting notion. I would have thought that the purpose of the state is to oppress its people, and that modern governments are so much nicer because checks and balances / political infighting cause them to be ground to a near-halt.

[-][anonymous]12y30

I should have said the supposed or stated purpose of a state is to serve the desires of its citizens. Maybe I should have been even more fancy and disguised "desires" as rights. Most people vote and behave like the government is the default engine for doing good as they define it, so it didn't seem too controversial to describe it that way in this context.

You are obviously correct that government's role (purpose is the wrong word to use) is to oppress its people. Government is nothing but a territorial monopolist of violence, though few people explicitly think about it that way. However it can sometimes be useful to be oppressed.

Also generally I'm of the opinion that in the long run formalized checks and balances don't really work. It seems pretty unlikely that anything like a stable equilibrium of actual power relations can be enforced by something as weak and easily worked around and gamed as laws or constitutions. Many Western democracies don't have a strong separation of powers formally and don't seem any more or less nice. Now while this may seem like a trivial difference, it really isn't. It basically means that formal definitions of the balance of power are, for example, unable to contain changes in actual power ratios, be they caused by technology, culture or economics.

Keeping the polite fiction, however, works together with other aspects of "democracy" to convince its citizens it is legitimate. Much like divine right was a polite fiction with the same function in a different time. It seems to me very likely that the reason democracy seems nicer is because it is much more capable of convincing and indoctrinating citizens that it is legitimate and good. A government capable of perfect brainwashing would never need to be mean at all to maintain power.

While I can agree there is a lot of political infighting isn't this more a result of the iron law of oligarchy than anything designed on purpose?

While I can agree there is a lot of political infighting isn't this more a result of the iron law of oligarchy than anything designed on purpose?

Meh. I'm agnostic about whether it was "on purpose". Humans revolt and so select for governments that aren't revolting.

I'm not sure how the iron law of oligarchy results in political infighting, and I'm skeptical of the iron law of oligarchy, but I don't think that's particularly relevant if we agree about the facts on the ground.

Also generally I'm of the opinion that in the long run formalized checks and balances don't really work.

Well, nothing works in the indefinite long run, unless your goal is entropy. It does seem to stop a lot of legislation from being passed / sticking in the US, which I suppose is only a benefit from a particular perspective.

[-][anonymous]12y00

Meh. I'm agnostic about whether it was "on purpose". Humans revolt and so select for governments that aren't revolting.

I think that selection filter is much weaker than most imagine. The poor don't revolt.

but I don't think that's particularly relevant if we agree about the facts on the ground.

Agreed.

What's the point in throwing a ridiculous amount of money at saving the life of one old person in a wealthy country, if you can make hundreds of undercapitalized people in poor countries productive for the same price?

This story would be more plausible as an explanation of slow medical progress if there hadn't been big increases in medical R&D spending and employment (on first-world diseases) over the last 40 years, and massive growth in overall medical spending relative to GDP. It doesn't explain the declining rate of drugs developed per dollar invested in pharma R&D, or the broader failure to translate research spending to health gains.

Technology progresses extremely rapidly; rich-world stagnation simply means rich countries are further behind the technological frontier than they used to be. That's all.

Or maybe there's some kind of ceiling on economic progress, and the First World has already hit it but the rest of the world hasn't.

[-][anonymous]12y70

Stagnation in our time.

The lesson here may be: once a society starts to take progress for granted, it grinds to a halt.

Counterexample: the computer industry.

That's pretty much the only counterexample, though.

Melatonin is a naturally occurring bioregulatory molecule which is inexpensive and freely available as an over the counter “nutrient.”

...

What kind of black irony is it that I live in terror of stroke and cardiac arrest (for both myself and my loved ones) and yet the very molecules I discovered to combat them are as unavailable as if they had never been found?

Huh? What am I missing here? I take melatonin all the time, it's far from "unavailable".

Melatonin has a very short half life and is secreted as needed by the pineal gland. Its apparent primary biological function is as a signal transduction/regulatory molecule. It's unclear if this function is what is responsible for its protective effect in ischemia-reperfusion injury (IRI), because melatonin is also a powerful radical scavenger - and in fact, a particularly effective scavenger of the radical species associated with neuronal injury in IRI, such as peroxynitrite. Other factors to consider are the timing, route of administration, and dose used in our studies. The drug was given intravenously in a micellized form to speed delivery across the blood brain barrier. This was done at the start of reperfusion. Finally, the effective dose given was very large (and was based on the stoichiometry of the radical species we wanted to scavenge). The drug was also given in conjunction with many others and, perhaps critically, in combination with the rapid induction of mild therapeutic hypothermia (3 deg C below normothermia). Next up on my agenda to test was whether the drug combination was effective without hypothermia, since it is very problematic to achieve a 3 deg C reduction in body temperature in ~15 min or less! Unfortunately, that study was canned.
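As an aside, here is a toy sketch (my own illustration, not Darwin's actual protocol; every number is a hypothetical placeholder) of what sizing a dose "based on the stoichiometry of the radical species" can look like, and why such a dose ends up far larger than an over-the-counter melatonin pill.

```python
# Toy stoichiometric dose estimate: pick an assumed radical burden and a
# desired molar excess of scavenger, then convert moles of melatonin to mg.
# The radical burden and molar excess below are invented for illustration.

MELATONIN_MW = 232.28  # g/mol, molecular weight of melatonin (C13H16N2O2)

def stoichiometric_dose_mg(radical_burden_mol: float,
                           molar_excess: float) -> float:
    """Mass of melatonin (mg) supplying `molar_excess` moles of scavenger
    per mole of radical species."""
    moles_needed = radical_burden_mol * molar_excess
    return moles_needed * MELATONIN_MW * 1_000  # grams -> milligrams

# A hypothetical 1 mmol radical burden with a 10x molar excess already calls
# for roughly 2.3 grams of melatonin - orders of magnitude above the
# milligram-scale doses sold over the counter as a sleep aid.
print(f"{stoichiometric_dose_mg(radical_burden_mol=1e-3, molar_excess=10):.0f} mg")
```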

The point here is that the application of any such treatment in the setting of a critical illness would require that it be both an integrated and ACCEPTED part of the medical infrastructure. For instance, it was over 30 years ago that Peter Safar, et al., demonstrated that mild hypothermia AFTER cardiac arrest was profoundly effective in reducing ischemic brain injury, and it has been 9 years since ILCOR made post-cardiac arrest hypothermia the standard of care: http://circ.ahajournals.org/content/108/1/118.full. And yet, post-arrest hypothermia is used almost nowhere. So, even if a treatment is approved and demonstrated to be scientifically valid, it still may not see widespread clinical application for a host of reasons.

I recently watched a BBC documentary called "Back From The Dead", mainly about using extreme hypothermia to prevent IRI in some rather extreme cases, though drug development was also mentioned (that portion mostly focused on the study of cell death).

One case was a Norwegian woman who fell in a crevasse while hiking on a glacier - the extreme cold plus 3+ hours of constant CPR was enough to keep her brain alive long enough to be revived. She made a full recovery and now works at the hospital that revived her.

Another was a man whose blood was intentionally cooled to extreme hypothermic temperatures in order to repair an aortic aneurysm. Doctors were able to operate for 45 minutes with the patient in full cardiac arrest with no ill effects.

It's amazing to me that the basis for these techniques has been around for so long, and yet still they seem like science fiction when anyone discusses them. Since the benefits of mild hypothermia had been at least hinted at 30+ years ago, you would think researchers would have been playing with extreme hypothermia soon after and we'd be a lot further along with this stuff in general.

I don't have any idea how often hypothermia is actually used to save lives, but the documentary made it seem rare, with extreme hypothermia being only used in one or two hospitals in the world. Your experience seems to back that up as well.

You're not taking your nightly melatonin pill when you are unconscious or suffering a stroke or cardiac arrest, nor are you popping it in a loved one's mouth in similar circumstances; he's referring to use, routine or exceptional, by medical personnel.

I'd like to see cheap and easy chemical testing-- for example, being able to track what's in your tap water. I expect to see that before I see the sort of cheap and easy medical monitoring you describe. For that matter, tracking the current nutrient value in food would be interesting. Neither one seems to be on the immediate horizon, which makes me wonder if such medical monitoring as you describe is within 10-15 years.

What are the odds of more knowledge of amazingly simple methods for dealing with potential medical problems? I recently cleared up a case of gastric reflux-- I was waking up with a mouthful of acid fairly often-- just by sleeping on my left side. I don't think that sort of solution exists for every ailment, but I bet there are more of them to be found.

Did a physician inform you about the sleeping-on-side hack?

I personally know of an instance where a person had gastric reflux, went to the doctor, was prescribed a proton pump inhibitor (Nexium) as the sole treatment, didn't get any better, went poking around the internet with Google, and got non-medicine lifestyle adjustments (elevated bed head, fasting for five hours before bedtime) to fix their reflux problem. Then, later, they told the doctor, who said nothing but "oh yeah".

We had no Google in 1988, although surely some futurist genius somewhere foresaw it.

I got the sleeping-on-the-left-side hack from the Wikipedia article.

Seth Roberts posts articles now and then about physicians not being interested in patients who find (cheap/simple) cures for themselves.

[-][anonymous]8y00

Wow. Thank you.

So much of interest here. Metabolic MRI, wow! Brain MRIs can detect cysts, tumors, vasculitis, abscesses, and hormonal disorders like pituitary problems or Cushing's syndrome, according to LiveStrong. Whenever someone on the internet brings up brain MRIs, people spew the same shit they feel they're expected to spew - 'you don't know how MRI works, they can't diagnose anything yada yada'. It's just like how everyone says 'go see a doctor' when people ask for medical advice, and they get easy upvotes. I find it really funny. I doubt the people cynical about brain MRIs are neuroradiologists or anything, or psychiatric taxonomists critical of neuroimaging as a diagnostic tool. I reckon they've just asked that question before, or something analogous, and internalised that 'it's not a done thing, now I should enforce that rule too'.