An extreme form of brain damage might be destruction of the entire brain. I don't think that someone with their entire brain removed has consciousness but lacks the ability to communicate it; suggesting that consciousness continues after death seems to me to be pushing well beyond what we understand "consciousness" to refer to.
The brain seems to be something that leads to consciousness, but is it the only thing?
Maybe other things can "lead to" consciousness as well, but what makes you suspect that humans have redundant ways of generating consciousness? Brain damage empirically causes damage to consciousness, so that pretty clearly indicates that the brain is where we get our consciousness from.
If we had redundant ways of generating consciousness, we'd expect that brain damage would simply shift the consciousness generation role to our other redundant system, so there wouldn't be consciousness damage from brain damage (in the same way that damage to a car's engine wouldn't damage its ability to accelerate if it had redundant engines). But we don't see this.
We don't really know.
We know there's no afterlife. What work is "really know" doing in this sentence, that is capable of reversing what we know about the afterlife?
Well, in dath ilan, people do still die, even though they're routinely cryonically frozen. I suspect with an intelligence explosion death becomes very rare (or horrifically common, like, extinction).
I'd caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because this might be one approach to actually getting out from under the anorexia by exerting control over her body).
Likely a better approach to this concern would be to silently watch for those behaviours developing and to worry if and when they actually do. (Note that refusing to help her with training and diet means she'll get that help from someone who isn't watching out for the possibility of exercise addiction.)
There are a few approaches that might work for different people:
In any case, make it clear from the outset you want to be respectful about it.
It seems like the War on Terror, etc, are not actually about prevention, but about "cures".
Some drug addiction epidemic or terrorist attack happens. Instead of it being treated as an isolated disaster like a flood, which we should (but don't) invest in preventing in the future, it gets described as an ongoing War which we need to win. This puts it firmly in the "ongoing disaster we need to cure" camp, and so cost is no object.
I wonder if the reason there appears to be a contradiction is just that some policy-makers take prevention-type measures and create a framing of "ongoing disaster" around it, to make it look like a cure (and also to get it done).
One would be ethical if one's actions end up having positive outcomes, regardless of the intentions behind them. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plot would have performed a very ‘morally good’ action.
This seems intuitively strange to many people; it certainly does to me. Instead, ‘expected value’ seems to be a better basis both for making decisions and for judging the decisions made by others.
If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example, was a good action. Buying a losing lottery ticket was a bad action. Since we care about just the consequences of the action, the goodness of an action can only be evaluated after the consequences have been observed - at some point after the action was taken (I think this is enforced by the direction of causality, but maybe not).
So we don't know if an action is good or not until it's in the past. But we can only choose future actions! What's a consequentialist to do? (Equivalently, since we don't know whether a lottery ticket is a winner or a loser until the draw, how can we choose to buy the winning ticket and choose not to buy the losing ticket?) Well, we make the best choice under uncertainty that we can, which is to use expected values. The probability-literate person is making the best choice under uncertainty they can; the lottery player is not.
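The expected-value comparison above can be made concrete with a toy calculation (the ticket price, jackpot, and odds here are hypothetical numbers, not from the discussion):

```python
# Toy expected-value comparison for a lottery ticket (hypothetical numbers).
ticket_price = 2.00
jackpot = 100_000_000
p_win = 1 / 300_000_000

# Expected value of buying: probability-weighted payoff minus cost.
ev_buy = p_win * jackpot - ticket_price
ev_skip = 0.0  # not buying costs nothing and pays nothing

print(f"EV of buying:   {ev_buy:.4f}")   # about -1.67
print(f"EV of skipping: {ev_skip:.4f}")  # 0.00
```

Since the expected value of buying is negative, "don't buy" is the correct choice under uncertainty, even though any particular ticket might turn out, after the fact, to have been the winner.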
The next step is to say that we want as many good things to happen as possible, so "expected value calculations" are a correct way of making decisions (one that can sometimes produce bad actions, but less often than other procedures do) and "wishful thinking" is an incorrect way of making decisions.
So the probability-literate used a correct decision procedure to come to a bad action, and the lottery player used an incorrect decision procedure to come to a good action.
The last step is to say that judging past actions changes nothing about the consequences of that action, but judging decision procedures does change something about future consequences (via changing which actions get taken). Here is the value in judging a person's decision procedures. The terrorist used a very morally wrong decision procedure to come up with a very morally good action: the act is good and the decision procedure is bad, and if we judge the terrorist by their decision procedure we influence future actions.
--
I think it's very important for consequentialists to always remember that an action's moral worth is evaluated on its consequences, and not on the decision theory that produced it. This means that despite your best efforts, you will sometimes make the best decision possible and still commit bad acts.
If you let it collapse - if you take the shortcut and say "making the best decision you could is all you can do" - then every decision you make is good, except those made through inattentiveness or laziness, and you lose the chance to find out that expected value calculations or Bayes' theorem needs to go out the window.
There's no other source of morality and there's no other criterion to evaluate a behaviour's moral worth by. (Theorised sources such as "God" or "innate human goodness" or "empathy" are incorrect; criteria like "the golden rule" or "the Kantian imperative" or "utility maximisation" are only correct to the extent that they mirror the game theory evaluation.)
Of course we claim to have other sources and we act according to those sources; the claim is that those moral-according-to-X behaviours are immoral.
What is different about how we value morality based on its origin?
Evolution, either genetic or cultural, doesn't have infinite search capacity. We can evaluate which of our adaptations actually are promoting or enforcing symmetric cooperation in the IPD, and which are still climbing that hill, or are harmless extraneous adaptations generated by the search but not yet optimised away by selection pressures.
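The "symmetric cooperation in the IPD" being optimised toward can be sketched with a minimal simulation (the payoff matrix and ten-round length are illustrative assumptions, not claims from the comment):

```python
# Minimal iterated prisoner's dilemma sketch (illustrative payoffs).
# Both cooperate -> 3 each; both defect -> 1 each;
# a defector exploiting a cooperator -> 5 vs 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # opponent moves each player has seen
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(seen_by_a)
        move_b = strat_b(seen_by_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection: (9, 14)
```

Adaptations that enforce reciprocity (like tit-for-tat's "copy what was done to you") sustain the mutual-cooperation outcome, which is the hill the evolutionary search is climbing.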
Sorry, I was trying to get at 'moral intuitions' by saying fairness, justice, etc. In this view, ethical theories are basically attempts to fit a line to the collection of moral intuitions - to come up with a parsimonious theory that would have produced these behaviours - and then the outputs are right or interesting only insofar as they approximate game-theoretic-good actions or maxims.
Even given other technological civilisations existing, putting "matter and energy manipulation tops out a little above our current cutting edge" at 5% is way off.
I went from ardently two-boxing to ardently one-boxing when I read that you shouldn't envy someone's choices. More general than that, actually; I had a habit of thinking "alas, if only I could choose otherwise!" about aspects of my identity and personality, and reading that post changed my mind on those aspects pretty rapidly.