People like to pretend they are doing fine, judging by a cognitive algorithm riddled with the availability heuristic, epistemically unsound dialectics, and other biases. Almost everyone I meet is physically and emotionally unwell and shies away from thinking about it. What rare engagement does happen occurs with close intimates, who are selected for having the same blind spots they do.
It's like everyone has this massive assumption that things will turn out fine, even though the default outcome is terrible (see the rates of obesity and of medicated mental illness). Or they just have learned helplessness about learned helplessness.
Under what circumstances do you get people telling you they are fine? That doesn't happen to me very much--"I'm fine" as part of normal conversation does not literally mean that they are fine.
I think you'd need to define "fine" a little better for me to understand your argument. The likely result, for each of us, is death. I feel pretty helpless about that, and it's both learned and reasoned.
In the meantime, I do some things that make it slightly more pleasant for me and others, and perhaps (very hard to measure) more likely that there will be more others in the future than there would otherwise be. But I also do things that are contrary to those long-term goals, which bring shorter-term joy or expectation of survival.
The default (and inevitable) outcome _is_ terrible. And that's fine.
Traversable wormholes, were they to exist for any length of time, would act as electric and gravitational Faraday cages: they would attenuate non-normal electric and gravitational fields exponentially inside their throats, with a scale length set by the mouth size/throat circumference. Consequently, the electric/gravitational field around them is non-conservative. This follows straightforwardly from solving the Laplace equation, but it is never discussed in the literature as far as I can find.
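To make that concrete, here is a minimal numerical sketch (my own illustration, using a 2-D strip as a crude stand-in for the throat, not a calculation from any wormhole paper). Relaxing the Laplace equation in a long strip of width w shows a transverse field imposed at one mouth decaying like exp(-pi*z/w), i.e. with attenuation length set by the throat width:

```python
import numpy as np

# Crude numerical check (my sketch): relax the 2-D Laplace equation in a
# long strip of width w, standing in for a wormhole throat, and verify that
# a transverse ("non-normal") field imposed at one mouth dies off
# exponentially along the throat. With the potential pinned to zero on the
# side walls, the separable solutions are sin(n*pi*y/w) * exp(-n*pi*z/w),
# so the slowest decay constant is pi/w: the attenuation length is set by
# the throat width, as claimed.

w, L = 1.0, 3.0    # throat width and length (arbitrary units)
ny, nz = 41, 121   # grid points across (y) and along (z); equal spacing
y = np.linspace(0.0, w, ny)
z = np.linspace(0.0, L, nz)

phi = np.zeros((nz, ny))
phi[0, :] = np.sin(np.pi * y / w)  # transverse field pattern at one mouth
# phi = 0 on the side walls and at the far mouth (Dirichlet conditions)

# Jacobi relaxation toward the Laplace solution
for _ in range(40000):
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2])

amp = np.abs(phi).max(axis=1)  # mode amplitude vs. depth z into the throat
slope = np.polyfit(z[5:60], np.log(amp[5:60]), 1)[0]
print(f"fitted decay constant {-slope:.2f} vs predicted pi/w = {np.pi / w:.2f}")
```

The same separation of variables in an actual cylindrical throat replaces pi/w with the first Bessel zero divided by the radius, but the exponential falloff on the scale of the throat circumference is unchanged.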
Not new, but possibly more important than it gets credit for. I haven't had time to figure out why it doesn't apply pretty broadly to all optimization-under-constraints problems.
https://en.wikipedia.org/wiki/Theory_of_the_second_best
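A toy instance of the theorem (my own made-up objective, not from the article) shows the flavor: when one first-order optimality condition is blocked by a constraint, the second-best optimum generally requires violating the other, still-satisfiable conditions too.

```python
# Unconstrained, f(x, y) = -(x**2 + x*y + y**2) is maximized at x = y = 0.
# Impose the distortion x = 1. Naively keeping y at its unconstrained
# optimum (y = 0) is NOT second-best: the cross-term makes y = -1/2 better.

from scipy.optimize import minimize_scalar

f = lambda x, y: -(x**2 + x * y + y**2)

best_y = minimize_scalar(lambda y: -f(1.0, y)).x
print(f"second-best y given x = 1: {best_y:.3f}")  # ~ -0.5, not 0
print(f"f(1, -0.5) = {f(1.0, -0.5):.2f} > f(1, 0) = {f(1.0, 0.0):.2f}")
```

The cross-term is what couples the optimality conditions; with a fully separable objective the naive answer would still be optimal, which is roughly the question of when the theorem bites.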
Updated: 2019-12-10
I am confused. If MWI is true, we are all already immortal, and every living mind is instantiated a very large number of times, probably literally forever (entropy doesn't actually increase in the full multiverse, the apparent increase being just a result of statistical correlation, but if you buy the quantum immortality argument you no longer care about this).
Of course, I'm not expecting you to endorse the idea in your answers; I'm simply mentioning its conclusion :)