I blog at https://dynomight.net where I like to strain my credibility by claiming that incense and ultrasonic humidifiers might be bad for you.
Well done, yes, I did exactly what you suggested! I figured that an average human lifespan was "around 80 years" and then multiplied and divided by 1.125 to get 80×1.125=90 and 80/1.125=71.111.
(And of course, you're also right that this isn't quite right since (1.125 - 1/1.125) / (1/1.125) = (1.125)²-1 = .2656 ≠ .25. This approximation works better for smaller percentages...)
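The asymmetry above is easy to check numerically. A quick sketch (plain Python, using only the arithmetic already in the comment):

```python
# For a multiplier m = 1 + p, compare the upward move (x m) with the
# downward move (/ m). The spread between the two results, relative to
# the lower one, simplifies to m**2 - 1, which is slightly more than 2p.
def relative_spread(p):
    m = 1 + p
    return (m - 1 / m) / (1 / m)  # algebraically equal to m**2 - 1

# p = 0.125 gives the 0.2656 figure above, not the naive 0.25:
assert abs(relative_spread(0.125) - 0.265625) < 1e-12
# For smaller percentages the mismatch shrinks (0.0201 vs. 0.02):
assert abs(relative_spread(0.01) - 0.02) < 1e-3
```

The second assertion is the "works better for smaller percentages" point: the error term is p², so it vanishes quadratically as p gets small.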
Interesting. Looks like they are starting with a deep tunnel (530 m) and may eventually move to the deepest tunnel in Europe (1444 m). I wish I could find numbers on how much weight will be moved or the total energy storage of the system. (They quote 2 MW, but that's power, not energy—how many MWh?)
According to this article, a Swiss company is building giant gravity storage buildings in China; across 9 buildings in total, there should be 3700 MWh of storage, which seems quite good! Would love to know more about the technology.
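For a sense of scale on these numbers: the energy stored by a lifted mass is just E = mgh. A rough sketch (the specific mass here is an illustrative assumption, not a figure from either article; the 1444 m drop is the European tunnel depth mentioned above):

```python
G = 9.81  # gravitational acceleration, m/s^2

def stored_mwh(mass_kg, drop_m):
    """Gravitational potential energy of a raised mass, in MWh."""
    joules = mass_kg * G * drop_m
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

# A hypothetical 1000-tonne weight over a 1444 m drop stores only
# about 3.9 MWh -- which hints at how much total mass a 3700 MWh
# system would need to move.
energy = stored_mwh(1_000_000, 1444)
```

This is why the MW-vs-MWh distinction matters: 2 MW tells you how fast the weight can be raised or lowered, but says nothing about how much total mass and height the system has to work with.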
You're 100% right. (I actually already fixed this due to someone emailing me, but not sure about the exact timing.) Definitely agree that there's something amusing about the fact that I screwed up my manual manipulation of units while in the process of trying to give an example of how easy it is to screw up manual manipulations of units...
You mentioned a density of steel of 7.85 g/cm^3 but used a value of 2.7 g/cm^3 in the calculations.
Yes! You're right! I've corrected this, though I still need to update the drawing of the house. Thank you!
Word is (at least according to the guy who automated me) that if you want an LLM to really imitate style, you really really want to use a base model and not an instruction-tuned model like ChatGPT. All of ChatGPT's "edge" has been worn away into bland non-offensiveness by the RLHF. Base models reflect the frightening mess of humanity rather than the instructions a corporation gave to human raters. When he tried to imitate me using instruction-tuned models it was very cringe no matter what he tried. When he switched to a base model it instantly got my voice almost exactly with no tricks needed.
I think many people kinda misunderstand the capabilities of LLMs because they only interact with instruction-tuned models.
Why somewhat? It's plausible to me that even just the lack of DHA would give the overall RCT results.
Yeah, that seems plausible to me, too. I don't think I want to claim that the benefits are "definitely slightly lower", but rather that they're likely at least a little lower but I'm uncertain how much. My best guess is that the bioactive stuff like IgA does at least something, so modern formula still isn't at 100%, but it's hard to be confident.
My impression was that the backlash you're describing is causally downstream of efforts by public health people to promote breastfeeding (pro-breastfeeding messages in hospitals, etc.). Certainly the correlation is there (https://www.researchgate.net/publication/14117103_The_Resurgence_of_Breastfeeding_in_the_United_States) but I guess it's pretty hard to prove a strict cause.
I'm fascinated that caffeine is so well-established (the most popular drug?) and yet these kinds of self-experiments still seem to add value over the scientific literature.
Anyway, I have a suspicion that tolerance builds at different rates for different effects. For example, if you haven't had any caffeine in a long time (like months), it seems to create a strong sense of euphoria. But this seems to fade very quickly. Similarly, with prescription stimulants, people claim that tolerance to physical effects happens gradually, but full tolerance never develops for the effect on executive function. (Though I don't think there are any long-term experiments to prove this.)
These different tolerances are a bit hard to understand mechanistically: Doesn't caffeine only affect adenosine receptors? Maybe the body also adapts at different places further down the causal chain.
(Many months later) Thanks for this comment, I believe you are right! Strangely, there do seem to be many resources that list them as being hydrogen bonds (e.g. Encyclopedia Britannica: https://www.britannica.com/science/unsaturated-fat, which makes me question their editorial process). In any case, I'll probably just rephrase to avoid using either term. Thanks again, wish I had seen this earlier!
Wow, I didn't realize bluesky already supports user-created feeds, which can seemingly use any algorithm? So if you don't like "no algorithm" or "discover" you can create a new ranking method and also share it with other people?
Anyone want to create a lesswrong starter pack? Are there enough people on bluesky for that to be viable?