Updates and Reflections on Optimal Exercise after Nearly a Decade
Previously: https://www.lesswrong.com/posts/bZ2w99pEAeAbKnKqo/optimal-exercise

Firstly, do the basic epistemics hold up? As far as I know, yes. The basic idea, that lifting twice a week and doing cardio twice a week adds up to a calorie expenditure that gets you the vast majority of exercise benefits compared to extreme athletes, holds up, especially when you take reverse-causality adjustments into account (survivorship bias on the genetic gifts of the extreme). Nothing I've encountered since has cast much doubt on this main takeaway.

What updates have I had, then, from personal experience, from giving training advice to others, and from research that has come out since?

1. A greater emphasis on injury prevention, as the disutility from injuries vastly outweighs the positive effects of chasing numbers. This one was sadly a foreseeable update with aging, and thus I lose Bayes points for it. I did in fact get an injury deadlifting, despite a substantial emphasis on good form and not pushing to the limit as many do.
2. Exercise selection and program optimization likely matter less than I thought, and research that has come out in the meantime supports this.
3. One and two combined imply that there is no real downside to picking exercises with lower injury potential for the joints and back.

With respect to cardio: running is a high-impact activity, and people shouldn't feel bad about choosing lower-impact rowing, swimming, or biking (though biking near cars is somewhat dangerous and likely eliminates a lot of the health benefit in expectation). If you can run, great; I enjoy it quite a lot, but I also don't particularly like the hedonic gradient of pushing yourself to run at the volume and frequency that seems necessary to really git gud (many runners run 5-6 days a week).

With respect to resistance training: I don't pursue any of the powerlifts (squat, bench, deadlift) anymore, instead focusing on other exercises that don't load the spine/knees as much but still allow you to hit the same muscle groups.
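To make the "four sessions a week gets you most of the benefit" claim a bit more concrete, here is a minimal back-of-the-envelope sketch (not from the original post). It uses the standard MET formula for energy expenditure; the body weight, session lengths, and MET values are illustrative assumptions, not prescriptions.

```python
# Back-of-the-envelope estimate of weekly energy expenditure for a
# "2x lifting + 2x cardio" schedule. MET values are rough figures in the
# spirit of the Compendium of Physical Activities; all numbers below are
# illustrative assumptions.

BODY_WEIGHT_KG = 80  # assumed body weight

# (activity, MET value, minutes per session, sessions per week)
WEEKLY_SCHEDULE = [
    ("resistance training (vigorous)", 6.0, 45, 2),
    ("moderate cardio (rowing/cycling)", 7.0, 40, 2),
]


def kcal_per_session(met: float, minutes: float, weight_kg: float) -> float:
    """Standard MET formula: kcal/min = MET * 3.5 * weight(kg) / 200."""
    return met * 3.5 * weight_kg / 200 * minutes


total_kcal = 0.0
total_met_minutes = 0.0
for name, met, minutes, sessions in WEEKLY_SCHEDULE:
    kcal = kcal_per_session(met, minutes, BODY_WEIGHT_KG) * sessions
    met_minutes = met * minutes * sessions
    total_kcal += kcal
    total_met_minutes += met_minutes
    print(f"{name}: ~{kcal:.0f} kcal/week, {met_minutes:.0f} MET-minutes/week")

print(f"Total: ~{total_kcal:.0f} kcal/week, {total_met_minutes:.0f} MET-minutes/week")
```

Under these assumptions the schedule comes out to roughly 1,500 kcal and about 1,100 MET-minutes per week, which already sits at or above the ~500-1000 MET-minutes/week range where public-health guidelines place most of the mortality benefit.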