I don't agree that I am making unwarranted assumptions; I think what you call "assumptions" are merely observations about the meanings of words. I agree that it is hard to program an AI to determine who the "he"s refer to, but I think that, as a matter of fact, the meanings of those words don't allow for any other possible interpretation. It's just hard to explain to an AI what the meanings of words are. Anyway, I'm not sure it's productive to argue this any further, as we seem to be repeating ourselves.
No, because John could be speaking about himself administering the medication.
If it's about John administering the medication, then you'd have to say "... he refused to let him".
It’s also possible to refuse to do something you’ve already acknowledged you should do, so the third "he" could still be John, regardless of who is being told what.
But the sentence did not claim John merely acknowledged that he should administer the medication, it claimed John was the originator of that statement. Is John supposed to be refusing his own requests?
John told Mark that he should administer the medication immediately because he was in critical condition, but he refused.
Wait, who is in critical condition? Which one refused? Who’s supposed to be administering the meds? And administer to whom? Impossible to answer without additional context.
I don't think the sentence is actually as ambiguous as you're saying. The first and third "he"s both have to refer to Mark, because you can only refuse to do something after being told you should do it. Only the second "he" could be either John or Mark.
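To make the claim concrete, here's a toy sketch (not any real coreference system; the brute-force enumeration and the PEOPLE list are just for illustration) that encodes the constraint I mean (the refuser must be the person who was told) and lists the readings that survive:

```python
# Toy sketch: enumerate candidate referents for the three "he"s in
# "John told Mark that he(1) should administer the medication
#  immediately because he(2) was in critical condition, but he(3) refused."
from itertools import product

PEOPLE = ["John", "Mark"]

surviving = []
for he1, he2, he3 in product(PEOPLE, repeat=3):
    # Constraint: you can only refuse to do something after being told
    # you should do it, so the refuser he(3) is the addressee (Mark),
    # and he(1), the one who "should administer", must be the same person.
    if he3 == "Mark" and he1 == he3:
        surviving.append((he1, he2, he3))

print(surviving)
# [('Mark', 'John', 'Mark'), ('Mark', 'Mark', 'Mark')]
```

Only the second "he" varies across the surviving readings, which is exactly what I said above.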
Early discussion of AI risk often focused on debating the viability of various elaborate safety schemes humanity might someday devise—designing AI systems to be more like “tools” than “agents,” for example, or as purely question-answering oracles locked within some kryptonite-style box. These debates feel a bit quaint now, as AI companies race to release agentic models they barely understand directly onto the internet.
Why do you call current AI models "agentic"? It seems to me they are more like tool AI or oracle AI...
I am still seeing "succomb".
In the long scale a trillion is 10^18, not 10^24.
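For anyone keeping score, a quick sketch of the two conventions:

```latex
% Long scale: the n-th "-illion" is 10^{6n}; short scale: 10^{3(n+1)}.
\[
\text{long: } n\text{-illion} = 10^{6n}, \qquad
\text{short: } n\text{-illion} = 10^{3(n+1)}
\]
% So with tri- = 3:
\[
\text{trillion}_{\text{long}} = 10^{18}, \qquad
\text{trillion}_{\text{short}} = 10^{12}, \qquad
10^{24} = \text{quadrillion}_{\text{long}}
\]
```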
I say "zero" when reciting phone numbers. Harder to miss that way.
I think you want to define to be true if is true when we restrict to some neighbourhood such that is nonempty. Otherwise your later example doesn't make sense.
I noticed all the political ones were phrased to support the left-wing position.
Axioms are only "true" or "false" relative to a model. In some cases the model is obvious, e.g. the intended model of Peano arithmetic is the natural numbers. The intended model of ZFC is a bit harder to get your head around. Usually it is taken to be the union of the von Neumann hierarchy over all "ordinals", but this definition depends on taking the concept of an ordinal as pretheoretic, rather than defined in the usual way (as a transitive set well-ordered by membership, or as the order type of a well-founded totally ordered set).
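Concretely, the hierarchy I have in mind is built by transfinite recursion:

```latex
% Cumulative (von Neumann) hierarchy:
\[
V_0 = \varnothing, \qquad
V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \quad \text{(limit } \lambda\text{)},
\]
\[
V = \bigcup_{\alpha \in \mathrm{Ord}} V_\alpha
\]
% It is the quantification "over all ordinals" in the last line
% that has to be taken as pretheoretic.
```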
An axiom system is consistent if and only if it has some model, which may not be the intended model. So there is a meaningful distinction, but the only way you can interact with that distinction is by finding some way of distinguishing the intended model from other models. This is difficult.
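To spell out the gap with a standard example (assuming PA is consistent):

```latex
% Completeness: a first-order theory is consistent iff it has a model.
\[
\mathrm{Con}(T) \iff \exists M \; (M \models T)
\]
% Second incompleteness: PA does not prove its own consistency, so
\[
\mathrm{Con}(\mathrm{PA}) \implies
\mathrm{Con}\bigl(\mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})\bigr)
\]
% The resulting model satisfies a statement that is false in the
% intended model (the natural numbers), so it must be non-standard.
```

So consistency alone never certifies that the model you've got is the intended one.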
The models that appear in the multiverse approach are indeed models of your axiom system, so it makes perfect sense to talk about them. I don't see why this would generate any contradiction with also being able to talk about a canonical model.
Independence results are only about what you can prove (or equivalently, about what is true in some models and false in others), not about what is true in a canonical model. For example, CH holds in Gödel's constructible universe and fails in suitable forcing extensions, but that by itself says nothing about whether it holds in the canonical model. So I don't see any difficulty to be reconciled.