I'll also point out that the vast majority of fees paid by crypto users are paid to Ethereum, which seems fairly close to allowing us to perform a fundamental value analysis:
Post-merge, these fees would go to holders of the currency.
Not a drug I've looked into! I ended up confining my research to FDA-approved weight loss medications, so I probably missed a number of non-FDA-approved medications that also work for weight loss.
I suppose. But it's also true that you should minimize the number of debilitating medical conditions you're suffering from long-term.
Which brings us back to the thing where we end up having to choose between a chronic condition which is heavily correlated with a whole bunch of secondary health problems and reduced life expectancy, and being on a drug for which we have not (yet) observed long-term ill effects.
The back-of-the-envelope life expectancy calculations were mostly just there to point out that under most plausible assumptions, the risk/benefit calculations seem lopsided to the point where it shouldn't be a terribly difficult decision.
Whoops, sorry, I don't actually know anything about ECA. Possibly that's how it works, at least partially! I'm pretty sure it's true that stimulants are appetite suppressants, but it's also possible it has another mechanism of action having to do with non-exercise activity thermogenesis or similar.
Anyway: the way I was thinking about this is, obesity is caused by excess calories. That being the case, there's no particular reason to expect that obese people aren't getting appropriate amounts of fiber/micronutrients/etc.; or at the very least, I have not heard anyone make such a case.
So while it's definitely true that drugs wouldn't help with nutritional deficiencies, it's also not clear to me that this is necessarily relevant to the health impacts of obesity.
I feel my disclaimer in the post:
>[Note: as pointed out by comments below, extrapolation to life-years saved is very speculative, since all the studies on this in humans are going to be confounded all to hell by healthy user bias and socioeconomic correlations and the like. That said, it feels like a fairly reasonable extrapolation given the comorbidity of obesity to various extremely problematic medical conditions. Be warned!]
should be sufficient to exempt me from charges of "pretending to know things."
The confidence intervals thing is probably a good idea, but I have no idea where to start on that, really, since the confidence intervals would be driven less by any objective factor than by "how confident am I feeling about using correlational studies on health outcomes to make causal claims about the effects of a treatment."
I'm not actually sure a study looking at the effects of successful weight loss on mortality would be all that helpful for this conversation, since it would still be a totally correlational study with enormous error bars and confounders, and successful long-lasting weight loss isn't very common (which itself introduces yet more confounders). Also, I don't think such a study exists.
It seems like, given the enormous amounts of blood, sweat, and treasure that have been expended investigating the long- and short-term effects of particular diets, the most consistent result is that the null hypothesis prevails for almost all dietary interventions that don't modify caloric intake.
This is most dramatically illustrated by the Women's Health Initiative, a very large-scale RCT of low-fat diets. Representative results are at https://pubmed.ncbi.nlm.nih.gov/16467234/, https://pubmed.ncbi.nlm.nih.gov/16467232/, and https://pubmed.ncbi.nlm.nih.gov/16391215/; in general, they did not find any meaningful differences between the low-fat and control groups on their primary endpoints of cardiovascular disease risk and breast cancer risk. (The small and mostly non-statistically-significant effects they did observe are difficult to untangle from the average 2 kg of weight lost by the experimental group.)
All told, it seems like the only two clearly-demonstrated-important aspects of nutrition are total caloric intake and getting adequate amounts of fiber/micronutrients/etc.
Everything else is clouded in layers of controversy fueled by observational studies with varyingly-dodgy attempts at controlling for confounders.
As such, I tend to view pretty much anything I'm eating in a given day as normal and healthy based on whether it allows me to stay within my desired caloric intake. Most of the time it's quesadillas with low-carb tortillas, Catalina Crunch cereal, eggs, milk, and frequently McDonald's breakfasts (I'm actually very fond of their biscuits). And carrots. Lots of carrots.
No. I compulsively use the refactor/rename operation (Ctrl+Shift+R in my own Visual Studio setup) probably 4 or 5 times in a given coding session on my personal Unity project, and trust that all the call sites get fixed automatically. I think this has the downstream effect of keeping things a lot more intelligible as my code grows and I start forgetting how particular methods that I wrote work under the hood.
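To illustrate why that trust is warranted (a made-up toy example; the class and method names are mine, not from any real project): the IDE resolves the rename through the type system rather than through text matching.

```csharp
// Hypothetical toy example: two unrelated classes that happen to
// share a method name.
class Timer   { public void Reset() { /* restart the clock */ } }
class Counter { public void Reset() { /* zero the count */ } }

class Game
{
    void Restart(Timer timer, Counter counter)
    {
        timer.Reset();   // Renaming Timer.Reset to Timer.Restart rewrites this line...
        counter.Reset(); // ...but leaves this one alone, because the IDE knows the
                         // static type of each receiver. A plain text search-and-replace
                         // couldn't tell the two calls apart.
    }
}
```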
Find-all-usages is also extremely important when I'm at work; just a couple weeks ago I was changing some authentication logic for a database we use, and needed to see which systems were using it so I could verify they all still worked after the change. So I just right-click, find-usages, and can immediately see everywhere I need to check.
As an aside, I suspect a lot of the critiques of statically-typed languages come from people whose static typing experience comes from C++ and Java, where the compiler isn't quite smart enough to infer most of the things you care about, so you have to repeat a whole bunch of information over and over. These issues are greatly mitigated in more modern languages like C# (for .NET) and Kotlin (for the JVM), both of which I'm very fond of. Also: I haven't programmed in Java for like three years, so it's possible it has improved since I last touched it.
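A toy illustration of the difference in C# (the dictionary is just a stand-in; any sufficiently nested generic type makes the same point):

```csharp
using System.Collections.Generic;

class InferenceDemo
{
    void Demo()
    {
        // Old-school C++/Java-style declaration: the type is spelled
        // out twice on the same line.
        Dictionary<string, List<int>> scoresByPlayer = new Dictionary<string, List<int>>();

        // Modern C#: 'var' (since C# 3) lets the compiler fill in what
        // it already knows from the right-hand side...
        var scores = new Dictionary<string, List<int>>();

        // ...and target-typed 'new' (since C# 9) works in the other direction.
        Dictionary<string, List<int>> moreScores = new();
    }
}
```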
Full disclosure: pretty much all my experiences the last few years have been of statically-typed languages, and my knowledge of the current dynamic language landscape is pretty sparse. All I can say is that if the dynamic language camp has found solutions to the refactor/rename and find-all-usages problems I mentioned, I am not aware of them.
You can get some of these benefits from optional/gradual typing systems, like TypeScript; the catch is that if the typing isn't used everywhere, such refactorings go from 100% safe to 90% safe, and that drop is still a pretty huge discouragement to refactoring, in a beware-trivial-inconveniences sense.
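C# has an analogous escape hatch in `dynamic`, which makes the failure mode easy to show (toy example, all names invented); if I understand the tooling right, rename refactorings simply can't see through the dynamically-typed call:

```csharp
class Player
{
    public void TakeDamage(int amount) { /* ... */ }
}

class Combat
{
    void Hit(Player typed, dynamic untyped)
    {
        typed.TakeDamage(10);   // A rename of TakeDamage fixes this call site automatically.
        untyped.TakeDamage(10); // This call is bound at runtime, so the refactoring can't
                                // prove it's the same method -- it gets skipped, and the
                                // code breaks at runtime if 'untyped' is really a Player.
    }
}
```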
I like type checkers less because they help me avoid errors, and more for ergonomics. In particular, autocomplete: I feel I code much, much faster when I don't have to look up APIs for any libraries I'm using; instead, I just type something that seems like it should work, and autocomplete gives me a list of sensible options, one of which I generally pick. (Also true for APIs I've written myself.) I'm working on a Unity project right now where this comes in handy: I can ask "what operations does this specific field of TMPro.TextMeshProUGUI support?" and get an answer in half a second without leaving the editor.
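A typical instance of what this looks like in a Unity script (the component and field names here are made up for illustration):

```csharp
using TMPro;
using UnityEngine;

public class ScoreDisplay : MonoBehaviour
{
    [SerializeField] private TextMeshProUGUI scoreLabel;

    public void ShowScore(int score)
    {
        // Because scoreLabel's type is statically known, typing "scoreLabel."
        // pops up the full member list -- text, fontSize, color, and so on --
        // with no need to go open the TextMesh Pro docs.
        scoreLabel.text = $"Score: {score}";
        scoreLabel.fontSize = 36f;
    }
}
```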
More concretely than that, static typing enables extremely useful refactoring patterns, like:

- refactor/rename, which automatically fixes every call site of a renamed method or field; and
- find-all-usages, which immediately shows every place a given symbol is referenced.
I kind of agree with the article posted that, in general, the kinds of things you want to demonstrate about your program mostly cannot be demonstrated with static typing. (Not always true-- see Parse, don’t validate (lexi-lambda.github.io) -- but true most of the time.)
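For the curious, the linked post's examples are in Haskell; a minimal C# rendition of the core idea (all names here are mine) looks something like this: the only way to construct the type is a parse that can fail up front, so holding a value of the type is itself the proof.

```csharp
using System.Collections.Generic;
using System.Linq;

// A list that is non-empty *by construction*: downstream code taking a
// NonEmptyList<T> never has to re-check for emptiness, because the type
// system has already ruled it out.
public sealed class NonEmptyList<T>
{
    public T Head { get; }
    public IReadOnlyList<T> Tail { get; }

    private NonEmptyList(T head, IReadOnlyList<T> tail)
    {
        Head = head;
        Tail = tail;
    }

    // The only way in: parsing either succeeds with proof (the type itself)
    // or fails immediately, rather than validating and hoping.
    public static NonEmptyList<T>? Parse(IReadOnlyList<T> items) =>
        items.Count == 0
            ? null
            : new NonEmptyList<T>(items[0], items.Skip(1).ToList());
}
```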
I've found it quite useful for debugging issues when writing Azure ARM templates (declarative JSON documents describing architectural components). "This ARM template is failing with this error message" is something it's able to debug and correct easily, though it also gets a lot of API details wrong. It can correct itself afterwards, though, if told the new error message.
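For anyone who hasn't seen one, an ARM template looks roughly like this (a minimal, hypothetical example deploying a storage account; the resource name and API version are placeholders). Small mistakes in fields like `apiVersion` or `sku` are exactly the kind of thing that produce the error messages described above.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "examplestorageacct",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```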
It's a very trial-and-error process, but one which proceeds rapidly.