All of 25Hour's Comments + Replies

Answer by 25Hour

I've found it quite useful for debugging issues when writing Azure ARM templates (declarative JSON documents describing architectural components).  "This ARM template is failing with this error message" is something it's able to debug and correct easily, though it's also gotten a lot of API details wrong.  It can correct itself afterwards, though, if told the new error message.

It's a very trial-and-error process, but one which proceeds rapidly.

25Hour

I'll also point out that the vast majority of fees paid by crypto users are paid to Ethereum, which seems fairly close to allowing us to perform a fundamental value analysis:

CryptoFees.info

Post-merge, these fees would go to holders of the currency.
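A minimal sketch of the kind of fee-based fundamental value analysis I mean (every number below is a placeholder for illustration, not actual data from CryptoFees.info): treat annualized fees captured by holders as "earnings" and compare them to market cap, roughly the way you'd compute a price-to-earnings ratio.

```typescript
// Back-of-the-envelope "price to fees" calculation.
// Every number here is a placeholder for illustration, not real market data.

const dailyFeesUsd = 10_000_000;        // placeholder: average daily fees paid on the network
const annualizedFeesUsd = dailyFeesUsd * 365;

const marketCapUsd = 200_000_000_000;   // placeholder: market capitalization of the currency

// If fees accrue to holders post-merge, this is roughly analogous to a P/E ratio.
const priceToFeesRatio = marketCapUsd / annualizedFeesUsd;

console.log(`Annualized fees: $${annualizedFeesUsd.toLocaleString()}`);
console.log(`Price-to-fees ratio: ${priceToFeesRatio.toFixed(1)}`);
```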

25Hour

Not a drug I've looked into!  I ended up confining my research to FDA-approved weight-loss medications, so I probably missed a number of non-FDA-approved medications that also work for weight loss.

25Hour

I suppose.  But it's also true that you should minimize the number of debilitating medical conditions you're suffering from long-term.

Which brings us back to the thing where we end up having to choose between a chronic condition which is heavily correlated with a whole bunch of secondary health problems and reduced life expectancy, and being on a drug from which we have not (yet) observed long-term ill effects.

The back-of-the-envelope life expectancy calculations were mostly just there to point out that under most plausible assumptions, the risk/benefit calculations seem lopsided to the point where it shouldn't be a terribly difficult decision.
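For concreteness, the shape of that back-of-the-envelope comparison looks something like the sketch below (every number is a placeholder invented for illustration, not a figure from the post or from any study):

```typescript
// Illustrative structure of the risk/benefit comparison only; the inputs are
// placeholders, not estimates drawn from the original post or any study.

const expectedLifeYearsLostToChronicObesity = 4;     // placeholder
const expectedLifeYearsLostToUnknownDrugRisks = 0.5; // placeholder: long-term unknowns of the drug

const netExpectedLifeYearsFromTreating =
  expectedLifeYearsLostToChronicObesity - expectedLifeYearsLostToUnknownDrugRisks;

console.log(`Net expected life-years gained by treating: ${netExpectedLifeYearsFromTreating}`);
// For most plausible placeholder values the difference stays lopsided in favor
// of treating, which is all the rough calculation was meant to show.
```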

JenniferRM
This conversation has basically recapitulated most of the ideas that led to the antagonistic pleiotropy hypothesis, except over "conscious lifelong health interventions" instead of over "blind mutations retained by natural selection". Short-term wins often have long-term costs.  As clever hack piles on top of clever hack, the combinatorial explosion of possible interactions goes up pretty fast.

Once wins of this sort pile up, your functional planning horizon shortens. At some point you just say "X will almost certainly get me before the bad parts of Y get me" and you do Y anyway? But eventually the house of cards topples over.

If your lifelong health meta-strategy is aimed at still being able to ski when you're 85, you probably want to minimize pills in general? Find tiny repeatable actions with numerous positive effects that fit into a weekly routine in a way that adapts to a variety of contexts, and stick to the adherence? Green veggies? Walk a mile every day? And so on?

If you're overweight and 51 and smoke and are pre-diabetic and walking gives you back pain and you'd still prefer to die in ~12 years of a new thing rather than in ~4 years from the thing your doctor recently mentioned... sure... try some pills maybe? Or maybe bariatric surgery? Lots of stuff works locally in the short run, but the long term, in general, often can't be touched without running into a morass of interlinked complexity.
25Hour

Whoops, sorry, I don't actually know anything about ECA.  Possibly that's how it works, at least partially!  I'm pretty sure it's true that stimulants are appetite suppressants, but it's also possible it has another mechanism of action having to do with non-exercise activity thermogenesis or similar.

Anyway: the way I was thinking about this is, obesity is caused by excess calories.  That being the case, there's no particular reason to anticipate obese people wouldn't be getting appropriate amounts of fiber/micronutrients/etc; or at very leas... (read more)

25Hour

I feel that my disclaimer in the post:

>[Note: as pointed out by comments below, extrapolation to life-years saved is very speculative, since all the studies on this in humans are going to be confounded all to hell by healthy user bias and socioeconomic correlations and the like.  That said, it feels like a fairly reasonable extrapolation given the comorbidity of obesity to various extremely problematic medical conditions.  Be warned!]

should be sufficient to exempt me from charges of "pretending to know things."

The confidence intervals thing is prob... (read more)

ChristianKl
Answer by 25Hour

It seems like, given the enormous amounts of blood, sweat, and treasure that have been expended investigating the long- and short-term effects of particular diets, probably the most consistent result is that the null hypothesis prevails for almost all dietary interventions that don't modify caloric intake.

This is most dramatically illustrated by the Women's Health Initiative study, a very large-scale RCT of low-fat diets. A couple of representative results are at https://pubmed.ncbi.nlm.nih.gov/16467234/ and https://pubmed.ncbi.nlm.nih.gov/16467232/ an... (read more)

Gunnar_Zarncke
How frequently do you eat sweets? About the carrots: I have seen warnings that excessive carrot consumption can show on the skin, but I have a friend who also snacks on a lot of carrots and it doesn't show. I think the risk is very low and easy to fix.
25Hour

No. I compulsively use the refactor/rename operation (Ctrl+Shift+R in my own Visual Studio setup) probably 4 or 5 times in a given coding session on my personal Unity project, and trust that all the call sites get fixed automatically. I think this has the downstream effect of keeping things a lot more intelligible as my code grows and I start forgetting how particular methods that I wrote work under the hood.

Find-all-usages is also extremely important when I'm at work; just a couple weeks ago I was changing some authentication logic for a datab... (read more)

Viliam
Similar here. I use renaming less frequently, but I consider it extremely important to have a tool that allows me to automatically rename X to Y without also renaming unrelated things that also happen to be called X.

With static typing, a good IDE can understand that method "doSomething" of class "Foo" is different from method "doSomething" of class "Bar". So I can have one renamed automatically (at the place it is declared, and at all places it is called) without accidentally renaming the other. I can automatically change the order of parameters in one without changing the other. It is like having two people called John, but you point your cursor at one of them, and the IDE understands you mean that one... instead of simply doing a textual "Search/Replace" on your source code.

In other words, every static type declaration is a unit test you didn't have to write... that is, assuming we are talking about automated testing here; are we? I agree that when you are writing new code, you rarely make type mistakes, and you likely catch them immediately when you run the program. But if you use a library written by someone else, and it does something wrong, it can be hard to find out what exactly went wrong.

Suppose the first method calls the second method with a string argument, the second method does something and then passes the argument to the third method, et cetera, and the tenth method throws an error because the argument is not an integer. Uhm, what now? Was it supposed to be a string or an integer when it was passed to the third or to the fifth method? No one knows. In theory, the methods should be documented and have unit tests, but in practice... I suspect that complaining about having to declare types correlates positively with complaining about having to write documentation. "My code is self-documenting" is how the illusion of transparency feels from inside for a software developer.

I mostly use Java, and yes, it is verbose as hell, and much of that verbosity is needless.
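A small TypeScript sketch of both points (the class and method names here are made up for illustration): the compiler can tell same-named methods on different types apart, and a declared parameter type settles the "was it supposed to be a string or an integer?" question at the first bad call rather than ten methods later.

```typescript
// Two unrelated classes that happen to share a method name.
class Foo {
  doSomething(): void { console.log("Foo's version"); }
}

class Bar {
  doSomething(): void { console.log("Bar's version"); }
}

// Renaming Foo.doSomething via the IDE's rename refactoring only touches call
// sites typed as Foo; Bar.doSomething is left alone, because the compiler
// knows which declaration each call resolves to.
const foo = new Foo();
const bar = new Bar();
foo.doSomething();
bar.doSomething();

// With declared parameter types, a wrong argument type is reported at the
// first call in the chain, not after it has passed through ten methods.
function thirdMethod(id: number): void {
  console.log(id + 1);
}

function secondMethod(id: number): void {
  thirdMethod(id);
}

function firstMethod(): void {
  // secondMethod("42"); // compile error: string is not assignable to number
  secondMethod(42);
}

firstMethod();
```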
Adam Zerner
Good to know. Thanks!
25Hour

I like type checkers less because they help me avoid errors, and more for the ergonomics.  In particular, autocomplete-- I feel I code much, much faster when I don't have to look up APIs for any libraries I'm using; instead, I just type something that seems like it should work and autocomplete gives me a list of sensible options, one of which I generally pick.  (Also true when it comes to APIs I've written myself.)  I'm working on a Unity project right now where this comes in handy-- I can ask "what operations does this specific field of TMPro.Text... (read more)
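A rough TypeScript illustration of the same ergonomic point (the interface below is a made-up stand-in, not the actual TMPro API): once a field has a declared type, the editor can enumerate sensible completions for it.

```typescript
// Hypothetical stand-in for a UI text component; not the real TMPro.Text API.
interface TextLabel {
  text: string;
  fontSize: number;
  setColor(hex: string): void;
}

function emphasize(label: TextLabel): void {
  // Typing "label." here lets the editor list text, fontSize, and setColor,
  // so there's no need to go look the API up.
  label.text = label.text.toUpperCase();
  label.fontSize = 32;
  label.setColor("#ff0000");
}
```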

Adam Zerner
Thank you for those thoughts, they're helpful.

* I actually was aware of the autocomplete benefit before. I've only spent about three months using a statically typed language (TypeScript). In that time I found myself not using autocomplete too much for whatever reason, but I suspect that this is more the exception than the rule, and that autocomplete is usually something that people find useful.
* I wasn't aware of those benefits for refactoring! That's so awesome! If it's actually as straightforward as you're saying it is, then I see that as a huge benefit of static typing, enough where my current position is now that you'd be leaving a lot on the table if you don't use a statically typed language for all but the smallest of projects.

At work we actually decided, in part due to my pushing for it, to use JavaScript instead of TypeScript for our AWS Lambda functions. Those actually seem to be a case where you can depend on the codebase being small enough that static typing probably isn't worth it.

Anyway, important question: were you exaggerating at all about the refactoring points?