Memento is easily one of the best movies about “rationality as practiced by the individual” ever made. [...] When the “map” is a panoply of literal paper notes and photographs, and the “territory” is further removed from one’s lived experience than usual… it behooves one to take rationality, bias, motivated cognition, unquestioned assumptions, and information pretty damn seriously!
Wasn't the main character's attempt at "rationality as practiced by the individual" kind of quixotic though? I didn't get the impression that the moral of the story was "you should be like this guy". He would have been better off skipping the complicated systems and just seeking help for his condition in a more standard way...
Let’s say my p(intelligent ancestor) is 0.1. Imagine I have a friend, Richard, who disagrees.
No wait, the order of these two things matters. Is P(intelligent ancestor|just my background information) = 0.1 or is P(intelligent ancestor|my background information + the fact that Richard disagrees) = 0.1? I agree that if the latter holds, conservation of expected evidence comes into play and gives the conclusion you assert. But the former doesn't imply the latter.
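To spell out the distinction with made-up numbers (the specific values below are purely illustrative): write H for "intelligent ancestor", B for my background information, and D for "Richard disagrees". Conservation of expected evidence only constrains the average over what Richard might turn out to say:

$$P(H \mid B) \;=\; P(D \mid B)\,P(H \mid B, D) \;+\; P(\neg D \mid B)\,P(H \mid B, \neg D).$$

So with P(H|B) = 0.1 and P(D|B) = 0.5, it is perfectly coherent to have P(H|B, D) = 0.06 and P(H|B, ¬D) = 0.14, since 0.5·0.06 + 0.5·0.14 = 0.1. Nothing forces P(H|B, D) to equal the unconditional 0.1.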
What makes certain axioms “true” beyond mere consistency?
Axioms are only "true" or "false" relative to a model. In some cases the model is obvious, e.g. the intended model of Peano arithmetic is the natural numbers. The intended model of ZFC is a bit harder to get your head around. Usually it is defined as the union of the von Neumann hierarchy over all "ordinals", but this definition depends on taking the concept of an ordinal as pretheoretic rather than defining it in the usual way as a well-founded totally ordered set.
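For reference, the textbook recursion being alluded to (nothing here beyond the standard definition):

$$V_0 = \emptyset, \qquad V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \text{ for limit ordinals } \lambda, \qquad V = \bigcup_{\alpha \in \mathrm{Ord}} V_\alpha.$$

The intended model of ZFC is then this cumulative hierarchy V, and the point above is that the indexing runs over all ordinals, which is where the pretheoretic notion comes in.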
Is there a meaningful distinction between mathematical existence and consistency?
An axiom system is consistent if and only if it has some model (this is Gödel's completeness theorem), and that model may not be the intended one. So there is a meaningful distinction, but the only way you can interact with that distinction is by finding some way of distinguishing the intended model from other models. This is difficult.
Can we maintain mathematical realism while acknowledging the practical utility of the multiverse approach?
The models that appear in the multiverse approach are indeed models of your axiom system, so it makes perfect sense to talk about them. I don't see why this would generate any contradiction with also being able to talk about a canonical model.
How do we reconcile Platonism with independence results?
Independence results are only about what you can prove (or equivalently what is true in non-canonical models), not about what is true in a canonical model. So I don't see any difficulty to be reconciled.
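A standard concrete instance (Gödel's and Cohen's results on the continuum hypothesis, stated here just for illustration):

$$\mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\; \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH}) \ \text{ and } \ \mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH}),$$

so CH holds in some models of ZFC and fails in others. That by itself says nothing about whether CH holds in the canonical model.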
I don't agree that I am making unwarranted assumptions; I think what you call "assumptions" are merely observations about the meanings of words. I agree that it is hard to program an AI to determine who the "he"s refer to, but I think as a matter of fact the meanings of those words don't allow for any other possible interpretation. It's just hard to explain to an AI what the meanings of words are. Anyway I'm not sure if it is productive to argue this any further as we seem to be repeating ourselves.
No, because John could be speaking about himself administering the medication.
If it's about John administering the medication then you'd have to say "... he refused to let him".
It’s also possible to refuse to do something you’ve already acknowledged you should do, so the 3rd he could still be John regardless of who is being told what.
But the sentence did not claim John merely acknowledged that he should administer the medication, it claimed John was the originator of that statement. Is John supposed to be refusing his own requests?
John told Mark that he should administer the medication immediately because he was in critical condition, but he refused.
Wait, who is in critical condition? Which one refused? Who’s supposed to be administering the meds? And administer to whom? Impossible to answer without additional context.
I don't think the sentence is actually as ambiguous as you're saying. The first and third "he"s both have to refer to Mark, because you can only refuse to do something after being told you should do it. Only the second "he" could be either John or Mark.
Early discussion of AI risk often focused on debating the viability of various elaborate safety schemes humanity might someday devise—designing AI systems to be more like “tools” than “agents,” for example, or as purely question-answering oracles locked within some kryptonite-style box. These debates feel a bit quaint now, as AI companies race to release agentic models they barely understand directly onto the internet.
Why do you call current AI models "agentic"? It seems to me they are more like tool AI or oracle AI...
I am still seeing "succomb".
In the long scale each new -illion is a factor of a million above the previous one (million = 10^6, billion = 10^12), so a trillion is 10^18, not 10^24.
Basically both of these arguments will seem obvious if you fall into camp #2 here, and nonsensical if you fall into camp #1.