You're right about the prisoner. (Which also reminds me of Locke's locked-room example regarding voluntariness.) That particular situation doesn't distinguish those worlds.
(I should clarify that in each of these "worlds", I'm talking about situations that humans find themselves in, specifically. For instance, Bayes math clearly works for abstract agents with predefined goals. What I want to ask is: to what extent does this give humans good advice about how they should explicitly think about their beliefs and goals? What System-2 meta-beliefs should we adopt, and what System-1 habits should we cultivate?)
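(For the abstract-agent case, the math in question is just Bayes' rule. A minimal sketch, with entirely hypothetical numbers -- an agent updating its belief in a hypothesis H after seeing one piece of evidence:

```python
# Bayes' rule for an abstract agent with a predefined belief structure.
# Hypothetical setup: H = "the coin is biased toward heads", with
# P(heads | H) = 0.9 and P(heads | not-H) = 0.5 for a fair coin.

def bayes_update(prior_h, likelihood_h, likelihood_not_h):
    """Return the posterior P(H | evidence) via Bayes' rule."""
    # Total probability of the evidence under both hypotheses.
    evidence = prior_h * likelihood_h + (1 - prior_h) * likelihood_not_h
    return prior_h * likelihood_h / evidence

# Start agnostic (prior 0.5), observe one "heads".
posterior = bayes_update(prior_h=0.5, likelihood_h=0.9, likelihood_not_h=0.5)
print(round(posterior, 4))  # 0.6429
```

The mechanics are uncontroversial; the open question above is whether humans, with murky goals and murky access to their own beliefs, get the same guarantees.)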
Heroes in myth defy predictions essentially by taking a wider view -- by getting out of the box (or by smashing the box altogether, or by altering the box, etc.).
I think we're thinking about different myths. I'm thinking mostly of tragic heroes and anti-heroes who intentionally attempt to avoid their fate, only to be caught by it anyway — Oedipus, Agamemnon, or Achilles, say; or Macbeth. With hints of Dr. Manhattan and maybe Morpheus from Sandman. If we think we're in Bayes' world, we expect to be in situations where getting better predictions gives us more control over outcomes, to drive them towards our goals. If we think we're in Cassandra's world, we expect to be in situations where that doesn't work.
As to the Buddha's world, it seems to be mostly about goals and values -- things about which Bayes' world is notably silent.
That's pretty much exactly one of my concerns with the Bayes-world view. If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.
If we think we're in Bayes' world, we expect to be in situations where getting better predictions gives us more control over outcomes
No, not really. Bayes gives you information, but doesn't give you capabilities. A perfect Bayesian will find the optimal place/path within the constraints of his capabilities, but no more. Someone with worse predictions but better abilities might (or might not) do better.