Yeah maybe -- I have a ton of calf problems in general when running, and I should probably see a running coach or something.
This pretty clearly did make the calf problems even worse than usual though :p
I tried the quick gait in two contexts:
1. running with a backpack
2. running for exercise without a backpack
I think I'm sold on it for 1, seems better than the long, loping gait I previously used for backpack running
Not sold for 2, seems to wear out my calves quickly
Other things that help you run with a backpack:
1. use both a hip strap and a sternum strap, and tighten both (especially the sternum strap) way tighter than you normally would for walking. In my experience this eliminates most of the jostling of the backpack relative to not using straps
2. instead of carrying a water bottle on the outside, put it inside the pack for better balance and no chance of it falling out
3. use a high-quality backpack with good padding, and probably with a rigid back, e.g. (https://smile.amazon.com/North-Face-Router-Meld-Black/dp/B092RJ8G86?sa-...
Yep that helps a ton! (having tested it many times)
I'd be interested in joining for a Bay Area kickoff!
My biggest differences with Rohin's prior distribution are:
1. I think that it's much more likely than he does that AGI researchers already agree with safety concerns
2. I think it's considerably more likely than he does that the majority of AGI researchers will never agree with safety concerns
These differences are explained in more detail on my distribution and in my other comments.
The next step that I think would most improve my distribution is doing more research.
I thought about how I could most efficiently update my and Rohin’s views on this question.
My best ideas are:
1. Get information directly on this question. What can we learn from surveys of AI researchers or from public statements from AI researchers?
2. Get information on the question’s reference class. What can we learn about how researchers working on other emerging technologies that might have huge risks thought about those risks?
I did a bit of research/thinking on these, which provided a small update towards thinking that AGI researchers wi...
I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specifies will not be met by 2100?”
This could happen due to any of the following non-mutually exclusive reasons (a rough sketch of how these might combine follows the list):
1. A global catastrophe occurs before the condition is met, such that people are no longer thinking about AI safety (e.g. human extinction or the end of civilization): I think there's a 50% chance
2. The condition is met sometime after the timeframe (mostly, I'm imagining that AI progress is slower than I expect...
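As a rough sketch of how these could combine (my own simplification, assuming the reasons are treated as roughly independent, which they may not be): if $p_i$ is the probability that reason $i$ prevents the condition from being met, then

$$P(\text{condition not met by 2100}) \;\approx\; 1 - \prod_i (1 - p_i)$$

So, for example, if the 50% for reason 1 is read as an unconditional probability, the combined estimate is already at least 0.5 before accounting for the other reasons.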
I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specified would already be met (if he went out and talked to the researchers today)?”
Considerations that make it more likely:
1. The considerations identified in ricaz’s and Owain’s comments and their subcomments
2. The bar for understanding safety concerns (question 2 on the "survey") seems like it may be quite low. It seems to me that researchers entirely unfamiliar with safety could gain the required...
I've been working on making Elicit search work better for reviews. I'd be curious to hear more detail on how Elicit failed here, if you'd like to share!