Therefore, do things you'd be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now... but not like "institute theocracy to raise birth rates", which is awful today even if you think it might "save the world".
"Let's abolish slavery," when proposed, would make the world better now as well as later.
I'm not against trying to make things better!
I'm against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.
https://roamresearch.com/#/app/srcpublic/page/10-11-2024
I'm not defeatist! I'm picky.
And I'm not talking specifics because I don't want to provoke argument.
Wait and see if I still believe it tomorrow!
I think I agree with this post directionally.
You cannot apply Bayes' Theorem until you have a probability space; many real-world situations, especially the ones people argue about, do not have well-defined probability spaces: a complete set of mutually exclusive and exhaustive possible events that all participants in the argument agree on.
You will notice that, even on LessWrong, people almost never have Bayesian discussions where they literally apply Bayes' Rule. It would probably be healthy to try to literally do that more often! But making a serious attempt to debate a contentious issue "Bayesianly" typically looks more like Rootclaim's lab leak debate, which took a lot of setup labor and time, and where the result of quantifying the likelihoods was to reveal just how heavily your "posterior" conclusion depends on your "prior" assumptions, which were outside the scope of debate.
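To make the prior-sensitivity point concrete, here is a minimal sketch in Python (with hypothetical numbers, not Rootclaim's actual figures): Bayes' Rule in odds form, where the same likelihood ratio combined with different priors yields very different posteriors.

```python
# Bayes' Rule in odds form: posterior odds = prior odds * likelihood ratio.
# Illustrative only -- the numbers below are made up, not from the Rootclaim debate.

def posterior_probability(prior_prob, likelihood_ratio):
    """Update a prior probability for hypothesis H, given a combined
    likelihood ratio P(evidence | H) / P(evidence | not H)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

likelihood_ratio = 10  # suppose the agreed-upon evidence favors H ten to one

for prior in (0.01, 0.2, 0.5):
    post = posterior_probability(prior, likelihood_ratio)
    print(f"prior {prior:.2f} -> posterior {post:.2f}")
# prior 0.01 -> posterior 0.09
# prior 0.20 -> posterior 0.71
# prior 0.50 -> posterior 0.91
```

The same evidence leaves one person at 9% and another at 91%, which is the sense in which the "posterior" conclusion mostly restates the "prior" assumptions.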
I think prediction markets are good, and I think Rootclaim-style quantified debates are worth doing occasionally, but what we do in most discussion isn't Bayesian and can't easily be made Bayesian.
I am not so sure about preferring models to propositions. I think what you're getting at is that we can make much more rigorous claims about formal models than about "reality"... but most of the time what we care about is reality. And we can't be rigorous about the intuitive "mental models" that we use for most real-world questions. So if your take is "we should talk about the model we're using, not what the world is", then... I don't think that's true in general.
In the context of formal models, we absolutely should consider how well they correspond to reality. (It's a major bias of science that it's more prestigious to make claims within a model than to ask "how realistic is this model for what we care about?")
In the context of informal "mental models", it's probably good to communicate how things work "in your head", because they might work differently in someone else's head. But ultimately what people care about are the intersubjective commonalities that can be in both your heads (and, for all practical purposes, in the world), so you do have to deal with that eventually.
I don't think it was articulated quite right -- it's more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.
I do still believe that the future is unpredictable, that we should not try to "constrain" or "bind" all of humanity forever using authoritarian means, and that there are many, many fates worse than death, so we should not destroy everything we love for "brute" survival.
And, also, I feel that transience is normal and only a bit sad. It's good to save lives, but mortality is pretty "priced in" to my sense of how the world works. It's good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly "priced in" as normal for me. Sara Teasdale: "You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!" If our days are as a passing shadow, that's not that bad; we're used to it.
I worry that people who are not ok with transience may turn themselves into monsters so they can still "win" -- even though the meaning of "winning" is so changed it isn't worth it any more.