All of Crissman's Comments + Replies

A shout-out to the Rationalist Team in the Unaging Challenge, for having the highest completion rate of the seven teams after three weeks! https://www.unaging.com/unaging-system-2025/

Streamlined the registration page, and added a field to note that you want to join a Less Wrong team: https://www.unaging.com/unaging-system-2/

Hmm... "Join a LessWrong team..."? Changing from "the" to "a" should make it clear that these aren't the honorable folks who run this website.

Oura also fine. Some of the people in the beta group are using them.

Thanks for the comments. You're right that "will not extend your life" is too strong. I revised it to "is unlikely to significantly extend your life." Given the impact of other factors on longevity (reductions in all-cause mortality: strength training 25%, aerobic exercise 37%, walking 12k steps 65%, 20g of nuts daily 15%), I do feel the reduction in all-cause mortality from weight loss shouldn't be the top priority.

Crissman

Well, it was a bummer when my research on fasting revealed I'd wasted the last decade doing 5:2 fasting. Welp, I'll just research the next blog post, on calorie restriction. Everyone knows that's great...

https://www.unaging.com/calorie-restriction/

Dammit. Why did I waste those five years doing calorie restriction before I started fasting?

FlorianH
Appreciate actually the overall take (although not sure how many would not have found most of it simply common sense anyway), but: a bit more caution with the stats would have been great.

* Just-about-significant ≠ 'insignificant and basta'. While you say the paper shows up to and including BMI 27 there's no 'effect' (and concluding on causality is anyway problematic here, see below), all data provided in the graph you show and in the table of the paper suggest BMI 27 has a significant or nearly significant (at 95%...) association with death even in this study. You may instead want to say the factor is not huge (or small compared to much larger BMI variations), although the all-cause point-estimate mortality factor of roughly 1.06 for already that BMI is arguably not trivial at all: give me something that, as central-albeit-imprecise estimate, increases my all-cause mortality by 6%, and I hope you'd accept if I politely refused, explaining you propose something that seems quite harmful, maybe even in those outcomes where I don't exactly die from it.
* Non-significance ≠ no effect. Even abstracting from the fact that the BMI 27 data is actually significant or just about so: a "not significant" reduction in deaths at BMI 18-27 in the study wouldn't mean, as you claim, "will not extend your life". It means the study was too imprecise to be exactly 95% or more sure that there's a relationship. Without a strong prior to the contrary, the point estimate, or even any value up to the upper CI bound, cannot be excluded at all as describing the 'real' relationship.
* Stats lesson 0: Association ≠ causality. The paper seems to purposely talk about association, mentioning some major potential issues with interfering unobserved factors already in the abstract, and there are certainly a ton of confounding factors that may well bias the results (it would seem rather unnatural to expect people who work towards having a supposedly-healthy BMI to behave not differently on average in any other health-r

Dammit. I researched calorie restriction and found that I'd spent another five years of my life on a restrictive diet that didn't serve much purpose. Lab rats lie. I posted about it: https://www.unaging.com/calorie-restriction/

Premature death basically means dying before you otherwise would on average. It's another term for increased all-cause mortality. If, according to the actuarial tables, you have a 1.0% chance of dying this year at your age and gender, but you have a 20% increased risk of premature death, then your chance is 1.2%.
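The arithmetic above is just a relative-risk multiplier applied to a baseline probability. A minimal sketch (the function name and table values are illustrative, not from any actuarial source):

```python
def adjusted_mortality(baseline: float, relative_increase: float) -> float:
    """Scale a baseline annual death probability by a relative risk.

    baseline: annual chance of death from actuarial tables,
              e.g. 0.010 for 1.0%.
    relative_increase: e.g. 0.20 for a 20% increased risk of
                       premature death.
    """
    return baseline * (1 + relative_increase)

# A 1.0% baseline chance with a 20% increased risk gives 1.2%.
print(adjusted_mortality(0.010, 0.20))  # ~0.012
```

The same multiplier works in reverse: a routine that cuts premature death by 90% corresponds to multiplying the baseline by 0.1.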

And yes, please read more on the blog!

I made a thing for adjusting your circadian rhythm for jet lag: https://www.unaging.com/jetlag/

Crissman

Hello! I'm a health and longevity researcher. I presented on Optimal Diet and Exercise at LessOnline, and it was great meeting many of you there. I just posted about the health effects of alcohol.

I'm currently testing a fitness routine that, if followed, can reduce your risk of premature death by 90%. The routine involves an hour of exercise, plus walking, each week.

My blog is unaging.com. Please look and subscribe if you're interested in reading more or joining in fitness challenges!

Screwtape
Welcome Crissman! Glad to have you here. I'm curious how you define premature death- or should I read more and find out on the blog?
Crissman

Health and longevity blogger from Unaging.com here. I've submitted talks on optimal diet, optimal exercise, how to run sub-3:30 in your first marathon, and "sugar is fine -- fight me!"

Looking forward to extended, rational health discussions!

I see. So the agent issue I address above is a sub-issue of overall inner alignment.

In particular, I was addressing the deceptively aligned mesa-optimizers discussed here: https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers

Thanks!

I think that's right. Inner alignment is getting the mesa-optimizers (agents) aligned with the overall objective. Outer alignment is ensuring the overall objective given to the AI is one humans actually want.

Quintin Pope
Not quite. Inner alignment, as originally conceived, is about the degree to which the trained model is optimizing for accomplishing the outer objective. Theoretically, you can have an inner-misaligned model that doesn't have any subagents (though I don't think this is how realistic AGI will work).

E.g., I weakly suspect that a reason deep learning models are so overconfident is actually due to an inner misalignment between the predictive patterns SGD instills and the outer optimization criterion, where SGD is systematically under-penalizing the model's predictive patterns for over-confident mispredictions. If true, that would represent an inner misalignment without there being any sort of deception or agentic optimization from the model's predictive patterns, just an imperfection in the learning process.

More broadly, I don't think we actually want truly "inner-aligned" AIs. I think that humans, and RL systems more broadly, are inner-*misaligned* by default, and that this fact is deeply tied in with how our values actually work. I think that, if you had a truly inner-aligned agent acting freely in the real world, that agent would wirehead itself as soon as possible (which is the action that generates maximum reward for a physically embedded agent). E.g., humans being inner-misaligned is why people who learn that wireheading is possible for humans don't immediately drop everything in order to wirehead.
Answer by Crissman

Advice: Humidifiers. We need them now, and everywhere that people gather in temperate climates. There's a reason why the common cold, influenza, and indeed SARS all die out as summer approaches in seasonal climates--relative humidity over 40% is the best method for controlling airborne viruses.

Influenza season has ended every spring (https://journals.plos.org/plospathogens/article/file?type=printable&id=10.1371/journal.ppat.1003194) since long before DNA tests, masks, or alcohol sprays. Humidity under 30%, like we regularly encounter in buildings ...

robert7
It is almost 90 degrees in Singapore, and the humidity is almost 90 percent. And it spreads...

What's your model for how useful this is? COVID spreads in public places, and it's mostly spreading via droplets on surfaces. Changing indoor humidity seems like it would only have a very minor impact, if any.