Here's one way to think of anthropic reasoning. You'll need an urn, and a stochastic process that can put balls in the urn.
For example, to think about safety vs. danger in the past, we make a stochastic process that is a toy model of a possibly-dangerous situation, but instead of people surviving, a ball goes in the urn. So perhaps we flip a coin that has Safety written on one side and Danger on the other. If Safety, we put a ball in the urn. If Danger, we flip another coin between Nuclear War and No Nuclear War, and if No Nuclear War, we put a ball in the urn.
So from our perspective as the experimenter, there's a 50% chance that Danger came up, and a 25% chance that when we try to draw a ball from the urn, there will be nothing there, on account of Nuclear War. If we want to do simple anthropic reasoning from the perspective of the ball (sphairic reasoning?), then what we do is condition on the ball being drawn from the urn. Conditional on this event, we know that Nuclear War did not happen, and we believe that Safety was twice as likely as Danger.
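To make this concrete, here's a minimal Monte Carlo sketch of the process above (an illustration of the toy model, not anything from the original; the probabilities are just the two fair coin flips described):

```python
import random

def run_experiment():
    """One run of the toy process: returns (coin_outcome, ball_in_urn)."""
    if random.random() < 0.5:
        return "Safety", True   # Safety: a ball goes in the urn
    if random.random() < 0.5:
        return "Danger", True   # Danger, but No Nuclear War: ball still goes in
    return "Danger", False      # Danger and Nuclear War: the urn stays empty

trials = [run_experiment() for _ in range(100_000)]

# Experimenter's perspective: unconditional frequencies.
print(sum(ball for _, ball in trials) / len(trials))    # ~0.75 chance of drawing a ball

# Ball's perspective: condition on a ball actually being drawn.
drawn = [outcome for outcome, ball in trials if ball]
print(sum(o == "Safety" for o in drawn) / len(drawn))   # ~2/3: Safety twice as likely as Danger
```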
I only bring up this re-statement of your reasoning because I personally find this "stochastic process, condition on drawing a ball to get its perspective" framing useful for thinking about confusing anthropic stuff.
I commented on the original on Facebook when it came out, and while you take a much more reasoned approach to explaining the situation, I found some value in riffing on the emotional punch of the original. Namely, I like that it clearly exposes that anthropics and quantum immortality are two sides of the same coin: if you admit one, you have to admit the other. This was not the author's original intent, since they were trying to show that there is some unclear place where anthropics bleeds into unintuitive reasoning. But if you accept something compatible with MWI, without holding that all possible universes are equally likely, then it seems we must accept quantum immortality, anthropics, and other "weird" conclusions based on the supposed existence of other branches.
Nah, you only get quantum immortality if you condition on future events rather than just present ones.
The user Optimization Process presented a very interesting collection of five anthropic situations, leading to the seemingly contradictory conclusion that we can't conclude anything about anything.
It's an old post (which I just discovered), but it's worth addressing, because it's wrong - but very convincingly wrong (it had me fooled for a bit), and clearing up that error should make the situation a lot more understandable. And you don't need to talk about SSA, SIA, or other advanced anthropic issues.
The first example is arguably legit; it's true that:
But what's really making the argument work is the claim that:
But the main argument fails at the very next example, where we can start assigning reasonable priors. It compares worlds where the cold war was incredibly dangerous with worlds where it was relatively safe. Call these "dangerous" and "safe". The main outcome is "survival", i.e. human survival. The characters are currently talking about surviving the cold war - designate this by "talking". Then one of the characters says:
This encodes the true statement that P(survival | talking) is approximately 1, as are P(survival | talking, safe) and P(survival | talking, dangerous). In these conditional probabilities, the fact that they are talking has screened off any effect of the cold war on survival.
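A quick simulation of that screening-off claim (the 0.3 and 0.9 survival probabilities below are illustrative assumptions, not numbers from the post):

```python
import random

def world():
    """One simulated world: (dangerous?, survived?, talking?)."""
    dangerous = random.random() < 0.5
    survived = random.random() < (0.3 if dangerous else 0.9)
    talking = survived  # you can only talk about surviving the cold war if you survived
    return dangerous, survived, talking

worlds = [world() for _ in range(100_000)]

# P(survival | talking, dangerous): exactly 1, since talking implies survival.
survived_given = [s for d, s, t in worlds if t and d]
print(sum(survived_given) / len(survived_given))  # 1.0 -- talking screens off danger's effect
```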
But Bayes' law still applies, and P(dangerous | survival) = P(survival | dangerous) P(dangerous) / P(survival).
Since P(survival | dangerous) < P(survival) (by definition, dangerous cold wars are those where the chance of surviving is lower than usual), we get that P(dangerous | survival) < P(dangerous).
Thus the fact of our survival has indeed caused us to believe that the cold war was safer than initially thought (there are more subtle arguments, involving near-misses, which might cause us to think the cold war was more dangerous than we thought, but those don't detract from the result above).
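Running the numbers on a hedged example (a 50/50 prior and the same assumed 0.3 / 0.9 survival probabilities as in the sketch above):

```python
# Illustrative priors and likelihoods; the numbers are assumptions for the example.
p_dangerous = 0.5
p_survival_given_dangerous = 0.3
p_survival_given_safe = 0.9

# Total probability of survival.
p_survival = (p_survival_given_dangerous * p_dangerous
              + p_survival_given_safe * (1 - p_dangerous))  # 0.6

# Bayes' law: P(dangerous | survival).
p_dangerous_given_survival = p_survival_given_dangerous * p_dangerous / p_survival
print(p_dangerous_given_survival)  # 0.25, down from the 0.5 prior:
                                   # survival is evidence the cold war was safer
```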
The subsequent examples can be addressed in the same way.
Even the first example follows the same pattern - we might not have sensible priors to start with, but if we did, the update process proceeds as above. But beware - the hypothesis "a deity constructed the universe specifically for humans" does get a strong update, but it is only one part of the more general hypothesis "a deity constructed the universe specifically for some thinking entities", which gets a much weaker update.
What is anthropic reasoning for, then?
Given the above, what is anthropic reasoning for? Well, there are subtle issues with SIA, SSA, and the like. But even without that, we can still use anthropic reasoning about the habitability of our planet, or about the intelligence of creatures related to us (that's incidentally why the intelligence of dolphins and octopuses tells us a lot more about the evolution of intelligence, as they're not already "priced in" by anthropic reasoning).
Basically, us existing does make some features of the universe more likely and some less likely, and taking that into account is perfectly legitimate.