Comment author: pscheyer 25 June 2015 05:29:01AM *  6 points [-]

Subject: Warfare, History Of and Major Topics In

Recommendation: Makers of Modern Strategy from Machiavelli to the Nuclear Age, by Peter Paret, Gordon Craig, and Felix Gilbert.

I recommend this book specifically over 'The Art of War' by Sun Tzu or 'On War' by Clausewitz, which seem to come up as the 'war' books that people have read prior to (poorly) using war as a metaphor. The Art of War is unfortunately vague- most of its recommendations could apply to any course of action, a common problem with translations from Chinese due to the language's heavy reliance on context. Clausewitz is covered in one of the essays in Makers of Modern Strategy- the critical portions of On War are in the book, in historical context.

The important part of Makers of Modern Strategy is that each piece is placed in context and paraphrased for critical details. (The book is a collection of the most important essays in the development of military thought through the ages, from the medieval period through nuclear warfare. I have other recommendations for the post-nuclear age of cyberwarfare and insurgency, and I'll post them separately.) Military strategy is an ongoing composition, but the inexperienced read a single strategic author and think they have everything figured out.

This book is great because it walks you through each major strategic innovation, one at a time, showing how each is a response to the last, and how each generation's certainty that it has everything figured out is precisely how its successors defeat it. My overall takeaway was humility- even the last section, on nuclear war, has been supplanted by cyber and insurgent warfare, and it is a sure bet that someone will always find a new way to deploy force to defeat an opponent. This book walks you through how to defeat naive and inexperienced combatants in a strategic sense. Tactics, as always, are contingent on circumstances.

Comment author: [deleted] 29 January 2014 02:20:35PM -1 points [-]

Well, here are the thoughts that you provoked from me about this.

Here is the topic for discussion: should we trust psychiatric analysis using frequentist statistics and ignore the outliers, or should we individually analyze psychiatric studies to see if they contain outliers who show symptoms which we personally desire? Should we act differently when seeking nootropics to improve performance than we do when seeking medication for crippling OCD? Should we trust our psychiatrists, who are probably not very statistically savvy and probably don't read the cases of the outliers?

I think we may want to split this up into two questions: What you should do personally, if you feel you have condition X, and what you should do as a government if you want to help treatment for condition X.

For instance, as a government, I would go for repeatability and size. Just forget the idea of trusting or not trusting suggestive outliers and go for more verification, with a larger sample: if for no other reason than to determine the frequency of the outlying effect, which would be important for making large-scale medical recommendations.
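To put a rough number on that sample-size point, here is a minimal sketch. The outlier frequency and the target count of observed outliers are assumed figures for illustration, not anything from the discussion:

```python
# If an outlying effect occurs with frequency f in the population, a sample
# needs to be large enough to expect several such cases before the effect's
# frequency can be estimated at all.
f = 1e-4    # assumed frequency of the outlying effect (1 in 10,000)
target = 3  # assumed minimum expected number of outliers in the sample

n = target / f  # sample size at which we expect `target` outliers
print(f"sample size needed to expect {target} outliers: {n:.0f}")
```

With these assumed numbers, verifying even a 1-in-10,000 effect calls for a study of tens of thousands of patients, which is exactly why only a government or large institution can commission it.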

However, a single person can't generally commission large medical studies, so they might want to just read through the literature, covering multiple papers on the effects of such things, perhaps also cross-referencing their own medical history. I think Metamed, which was mentioned on Less Wrong a while back, does something like this, if you don't have the time to analyze your own health that carefully.

If you don't want to do that, (either independently or through commissioning experts), then chances are you will be relying on:

I think that if even the right placebo could cause changes which improve my effectiveness, it would be worth a shot.

Now, psychiatric placebos can be shown to have a variety of effects in papers, and this can get hammed up a bit in headline re-reporting.

http://nymag.com/thecut/2014/01/study-placebo-sleep-just-as-good-as-real-sleep.html

But it isn't all hammed up either: it gets rather complicated.

http://en.wikipedia.org/wiki/Placebo

(As a side note, I upvoted you, since that was a good way of provoking thoughts, and it seemed like a placebo for increasing your mental wellbeing.)

In response to comment by [deleted] on Personal Psychiatric Analysis
Comment author: pscheyer 30 January 2014 11:25:29AM 0 points [-]

Interesting addition of the government perspective. I think that my contributions to that perspective have very little potential for value-added, as that perspective seems to be prevalent in academia and the private and public sectors. I am taking the individual perspective for this discussion.

I would also be interested in a Metamed opinion on this topic, as you are correct, it seems like the magnified version of what I'm suggesting. I'm basically asking 'should you hire Metamed to prescribe you off-label nootropics based on existing studies?'

Comment author: hyporational 30 January 2014 03:02:02AM *  0 points [-]

How small would the sample size have to be before you would consider trying the drug yourself, just to see if you, too, lived forever as long as you took it?

Reducing sample size also blinds you to any ill effects the drug might have. If you're looking to generalize your idea about outliers, immortality seems to be an especially poor example since it's more unusual than any real outlier you might come across.

As far as I know, psychiatrists cannot reliably predict that a given drug will improve a patient's long-term diagnosis, and psychiatrists/psychologists cannot even reliably agree on what condition a patient is manifesting.

Taboo reliable. How would scrutinizing outliers make them more reliable? Acknowledging people are unique snowflakes doesn't help if you have no tools to know when and how they're unique. Other specialties have the same problem, psychiatrists' tools are just especially crude in comparison.

Mental disorders appear to resist diagnosis and solution, unlike, say, a broken leg or a sucking chest wound

Substitute those surgical conditions with some endocrine condition for example and the contrast may not be so stark.

If you're treating yourself, you're especially prone to bias. Doctors acknowledge this and many think they shouldn't treat themselves. If you want to utilize outliers, at least have someone else do or confirm the research. Doesn't have to be a doctor if you don't trust them.

Comment author: pscheyer 30 January 2014 11:21:10AM 0 points [-]

Taboo reliable. Sure. I hold the opinion that psychiatrists cannot predict that a given drug will improve a patient's long-term diagnosis, and that psychiatrists/psychologists cannot agree on what condition a patient is manifesting. I agree that we have no tools to know when or how they're unique. I'm taking the perspective that the (admittedly very biased) individual should consider trying available options with low entry costs and demonstrably unimportant side effects, to see if they are unique snowflakes like those few in the study. The costs seem low and the potential upside high when considering psychological augmentation via off-prescription nootropics.

Good point on the endocrine condition. Very similar situation to what I'm trying to express. Probably a better example than mine.

I'm trying to figure out whether bias, in the case of a consumer who doesn't have access to prescription medication, is really a problem if your policy is 'try the OTC thing to see if you get the same positive outlier result; if not, discontinue.'

Comment author: Yvain 30 January 2014 03:40:11AM *  12 points [-]

A small investigational drug trial won't be powered to detect outliers, and you won't be able to reliably solve that by invoking Bayesian statistics.

In large drug trials I think this is to some degree already done, but it's limited by the extreme sketchiness of suddenly inventing new endpoints for your study after you have the data. It would probably take the form of increasing the threshold for an endpoint (for example, "No significant difference between drug and placebo was found with the planned endpoint of decreasing HAM-D ratings by 3 or more, but there were significantly more patients in the drug group who had their HAM-D ratings decrease by 10 or more"). Everyone is rightly suspicious of people who do this, because, again, changing endpoints. But if it happened enough someone would take notice. Trust me, "not coming up with clever ways to make their drug look effective for at least some people" is not one of pharmaceutical companies' failure modes.

But keep in mind that you sort of loaded the original example by choosing something that almost never happens (someone living to 110 without any signs of aging). In a psychiatry study, what's the most extreme example you're going to get? Someone's depression remits completely? Big deal. Most people's depressive episodes remit completely after a couple of months anyway, and in 25% of people they never return (in even more people, they take many years to return, and almost no studies continue for the many years it would take to notice). In a drug trial of 10000 people (the number you gave above) hundreds or thousands of people in each group are going to have their depression remit completely; if the drug has a superpowerful effect on one person and cures her depression forever, that will get lost in noise in the way that someone living to 110 with the body of a 30 year old might not.
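The "lost in noise" point above can be made concrete with a minimal sketch. The per-arm size and the background remission rate are assumed figures for illustration only:

```python
import math

n = 5000       # assumed patients per arm (an even split of the 10,000)
p_remit = 0.3  # assumed background remission rate over the trial period

# Remissions expected in each arm by chance alone, and the binomial
# standard deviation of that count.
expected = n * p_remit
sd = math.sqrt(n * p_remit * (1 - p_remit))

# A single "miracle" responder shifts the drug-arm count by exactly 1,
# a tiny fraction of the chance-driven spread between arms.
print(f"expected remissions per arm: {expected:.0f} +/- {sd:.1f}")
print(f"one extra responder = {1 / sd:.3f} standard deviations")
```

With these assumed numbers, each arm produces about 1,500 remissions with a spread of roughly 32, so one drug-caused cure moves the count by about 0.03 standard deviations: statistically invisible, exactly as the comment argues.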

(it's instructive to compare this to the way studies investigate side effects. If one person in a 10000 person study has their arms fall off, the investigators will notice, because that's sufficiently rare as to raise suspicion it was caused by the drug. The drug will then end up with a black box warning saying "may make arms fall off.")

Another way these sorts of outlier effects might be detected is by subgroup analyses (which are also extremely sketchy). If there is no effect in general, researchers may check whether there is an effect among men, among women, among blacks, among whites, among Latinos, among postmenopausal Burmese women who wear hats and own at least two pets and have a history of disease in their left kidney, anything that turns up a positive result. But again, this is hardly something we want to encourage.
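Why subgroup-hunting is "extremely sketchy" follows directly from the multiple-comparisons arithmetic. A minimal sketch, where the number of subgroups examined is an assumed figure:

```python
# Under the null hypothesis (the drug does nothing), each subgroup test
# still has probability alpha of coming up "significant" by chance.
k = 20        # assumed number of subgroups examined
alpha = 0.05  # conventional significance threshold

# Probability that at least one subgroup looks significant by luck alone.
p_any = 1 - (1 - alpha) ** k
print(f"chance of at least one spurious 'significant' subgroup: {p_any:.2f}")
```

Even with only 20 subgroups, a completely inert drug has roughly a 64% chance of showing a "significant" effect somewhere, which is why uncorrected subgroup findings are treated with suspicion.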

But all these things are for investigational drugs. If we're talking about a drug that's already been approved and has a strong prescription history, then your worries about individualized response would get subsumed into the good responder / bad responder distinction, which is a very very big area of research which we know a lot about, and when we don't know it it's not for lack of trying.

For example, among bipolar patients, response to lithium can be (very inconsistently) predicted by selecting for patients who have stronger family history of disease, have fewer depressive symptoms, have slower cycles, have more euthymic periods, have less of a history of drug use, start with a manic episode, demonstrate psychomotor retardation, demonstrate premorbid mood lability, lack premorbid personality disturbance, possibly have deranged serotonin metabolism in platelets, possibly have increased calcium binding to red blood cells, possibly lack the HLA-A3 antigen, possibly have a particular variant of the gene GADL1, etc etc etc.

(in practice we don't expend much effort to check most of these things, because their predictive power is so weak that it's almost always a worse idea than just making a best guess based on the data you have, putting someone on lithium or on an alternative, then switching if it doesn't work)

As far as I know, psychiatrists cannot reliably predict that a given drug will improve a patient's long-term diagnosis, and psychiatrists/psychologists cannot even reliably agree on what condition a patient is manifesting. Mental disorders appear to resist diagnosis and solution, unlike, say, a broken leg or a sucking chest wound.

The whole "medical doctors can always consistently treat medical diseases, but psychiatrists are throwing darts blindfolded" story is something of a myth - see for example Putting the efficacy of psychiatric and general medicine medication into perspective: review of meta-analyses

Comment author: pscheyer 30 January 2014 11:15:32AM 2 points [-]

Thanks for that last link, it was an interesting update on the effectiveness of psychiatry. I was weighting my knowledge of the prevalence of rotten corpses in psychology into my estimate of the effectiveness of psychiatric methods, which now seems to be conflating two very different things. Although it does still seem that the set of psychiatrists who are capable of ignoring the prevalent rotten corpses in psychology when prescribing drugs is small enough to tip the field toward doing your own analyses. I guess I don't have a good set of heuristics for comparing the effects of personal bias vs. the effects of a psychiatrist trained in psychology and prone to that field's biases.

Yes, my example was loaded. The thought experiment was 'weird outlier, unrecognized by the system, of personal interest to the reader,' and whether/in what circumstances it should influence the reader to try the drug. If one of those circumstances is 'pharma doesn't try to make their drug look effective as a nootropic,' I feel it sums up my perspective a bit better than 'pharma doesn't try to make their drug look effective for at least some people, within the set of markets they've established as worth aiming marketing toward during a given time period.'

Comment author: RichardKennaway 29 January 2014 07:01:24AM 8 points [-]

The basic question to ask is: did he live to 110 because of the drug, or was he going to be unusually long-lived anyway and happened to be enrolled in the trial?

If the drug was developed to combat some mechanism of senescence, one might reasonably entertain the possibility of a causal effect, but if it was for an unrelated matter, I don't see a reason to expect it. Either way, one would want the scientists to do a lot more tests on that individual to discover the mechanisms of his longevity.

("Personally, young man, I attribute my years to a diet of whisky, cigars, and strictly fried food.")

Comment author: pscheyer 29 January 2014 07:33:55AM *  -1 points [-]

That is one basic question to ask. The fact that it was not developed to combat a mechanism of senescence does not mean that it fails to inadvertently combat a mechanism of senescence. I agree that more study of the individual is in order. However, personally I'd probably still try the stuff in the interim- I wouldn't want to lose years waiting on papers to be published, and I feel that the chance is worth it.

The previous sentence is really the point of the prompt- what level of evidence do you need to strike out on your own, against the frequentist stats saying it doesn't happen for most people? What amount of upside?

Personal Psychiatric Analysis

1 pscheyer 29 January 2014 06:02AM

Imagine reading about the following result buried in a prestigious journal:


We administered [Drug X] to 10,000 patients 80+ years of age selected to be a statistical representation of the populace. None had exhibited any prior medical history to suggest unusual conditions, outside of the normal range of issues collected over a lifetime. 1/3 of the patients were selected as a control group, and the others were entered into a longitudinal study of [Drug X] in which they were given varying doses over a 30 year timespan. [Please read charitably and flesh this out to be a good, well run longitudinal study by your personal standards. The important thing is the number of patients involved.] 

Of the patients administered [Drug X] 1x/month for 10 years, we found that there was an increase of average lifespan by 1 year compared to normal actuarial tables. We are unsure of the cause of this. We also had one patient who has yet to die after 30 years and shows no signs of aging. Our drug has effectively demonstrated its properties as a medication designed to reduce cholesterol and will proceed to be approved for normal prescription.

Now, personally, reading this I would be completely uninterested in the normal result and fascinated by the one crazy outlier. Living to the age of 110 is abnormal enough that within 6,666 people selected as a statistical representation of the population, it is extremely unlikely that anyone would live that long, much less continue performing with the apparent health of an 80-year-old.
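A minimal back-of-envelope sketch of how unlikely. The per-person chance of an 80-year-old reaching 110 is an assumed order-of-magnitude figure, not a value from any actuarial table:

```python
n = 6666   # treated patients in the hypothetical study
p = 1e-5   # assumed chance an 80-year-old reaches 110 (rough order of magnitude)

# Probability that at least one of the n patients reaches 110 by luck alone.
p_at_least_one = 1 - (1 - p) ** n
print(f"chance of at least one 110-year-old by luck: {p_at_least_one:.3f}")
```

Under this assumption the chance is around 6%, and that is before asking for the far rarer feature of the example: reaching 110 with no signs of aging at all.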

How small would the sample size have to be before you would consider trying the drug yourself, just to see if you, too, lived forever as long as you took it? What adverse effects and hassles would you go through to try it? Would these factors interact to influence your decision? (Mild headaches and a pill 4x/day in exchange for maybe apparent eternal life? Sign me up!)


This example is an oversimplification to make a point- often in clinical trials there are odd outliers in the results. Patients who went into full remission, or had a full recovery, or were cured of schizophrenia completely.

In the example above, if the sample size had been 10 people, 9 of whom had no adverse effects and one who lived forever, I would take it. I have been known to try nootropics with little or no proven effect, because there are outliers in their samples who have claimed tremendously helpful effects and few people with adverse effects, and I want to see if I get lucky. I think that if even the right placebo could cause changes which improve my effectiveness, it would be worth a shot.

As far as I know, psychiatrists cannot reliably predict that a given drug will improve a patient's long-term diagnosis, and psychiatrists/psychologists cannot even reliably agree on what condition a patient is manifesting. Mental disorders appear to resist diagnosis and solution, unlike, say, a broken leg or a sucking chest wound. I have learned that Cognitive Behavioral Therapy (CBT) has consistent results against a number of disorders, so I have endeavored to learn and apply CBT to my own life without a psychologist or psychiatrist. It has proven extremely effective and worthwhile.

Here is the topic for discussion:  should we trust psychiatric analysis using frequentist statistics and ignore the outliers, or should we individually analyze psychiatric studies to see if they contain outliers who show symptoms which we personally desire? Should we act differently when seeking nootropics to improve performance than we do when seeking medication for crippling OCD? Should we trust our psychiatrists, who are probably not very statistically savvy and probably don't read the cases of the outliers?

Where are the holes in my logic, which suggests that psychiatrists who think like medical doctors/general practitioners have a completely incorrect perspective (the law of averages) for finding and testing potential solutions in the extremely personalized medicinal field of psychotherapy/psychiatry (in which everyone is, actually, an extremely unique snowflake)?


This is more of a thought-provoking prompt than a well-researched post, so please excuse any apparent assertions in the above, all of which is provided for the sake of argument and arises from anecdata.

Comment author: hyporational 10 September 2013 05:18:26PM *  5 points [-]

Cultivating a sense of perfectionism in the most mundane aspects of life is probably a technique most militaries employ. This includes overlearning everything from folding bedsheets and shining your boots to complicated drills, and executing all kinds of personal maintenance in a minimal amount of time. Apply it long enough, and winging it won't even cross your mind anymore. I think this is a very useful idea if applied correctly and in moderation. Strict hierarchy helps the practice, obviously.

The well-meaning idea didn't get that well applied during my conscription. I learned to fold my bedsheets like a pro, but hardly learned how to shoot a weapon.

Comment author: pscheyer 12 September 2013 11:32:32PM 2 points [-]

Hahaha, exact same thing here. The US Air Force makes a big thing out of attention to detail- a single errant fold in a bedsheet or T-shirt results in the entire 50-person unit's crap being thrown everywhere, and all of you have to do it again.

In contrast, we went to the shooting range once and had to hit the target a single time out of 40 shots to pass. In fairness, if the AF is using rifles everything is pear-shaped anyway.

Comment author: Vaniver 11 September 2013 01:17:33AM 3 points [-]

And there are earlier echoes:

There is a phrase in Latin: Promoveatur ut amoveatur - "Let him be promoted to get him out of the way." It was apparently a pretty common one, not unbelievable considering the nepotist bureaucratic nightmare that was the Roman Empire.

Comment author: pscheyer 12 September 2013 11:22:58PM 3 points [-]

the nepotist bureaucratic nightmare that was the Roman Empire

One of my goals with this thread is to figure out how to avoid such nepotist bureaucratic nightmares, which have historically dominated the long-term outlook of empires from China to Rome to, increasingly, the US.

Comment author: katydee 10 September 2013 05:21:42AM 1 point [-]

One interesting implication of this is that if you're really good it's possible to have quite a wide impact.

Comment author: pscheyer 12 September 2013 11:19:31PM 3 points [-]

Mmm. There are qualifications. First, your orders are enforced by other people- and limited by their ability to understand and adapt your orders. As time goes on and your orders are outdated, they will not be updated until someone of equal or greater rank devotes both attention and personnel to updating them, and it is rare for this to happen until something definitively proves they are outdated (an incident of some sort).

So, yes, a wide impact. But not a wide impact at your top quality level, a wide impact at the level that manages to percolate through your chain of subordinates and a persistent impact (for better or worse) until an incident causes a policy update.

Comment author: hyporational 11 September 2013 05:40:29PM *  2 points [-]

Finland. Please don't scheme to invade us or we'll mop you to submission.

I was exaggerating a bit. What I meant was that there was no hope of becoming any good with the minimal training. I have fired and know how to handle a pistol, an assault rifle, a machine gun, a shotgun, a sniper rifle, a bazooka, an antiaircraft gun, and have thrown a live grenade once. Can't really hit anything with them...

I'm not sure if the lack of training was because of pinching pennies on ammo, but I wouldn't be surprised because of all the other kinds of nonsense. We had an abundance of mops, though.

Comment author: pscheyer 12 September 2013 11:03:13PM 0 points [-]

American Air Force is the same.
