What about the claims in "Maintaining behavior" that you do need consistent aversives (punishment), but only inconsistent rewards? That seems to say the exact opposite of the earlier stance: it says that you should use lots of punishments (every time the subject gets something wrong), and few rewards.
I'm confused as to what the book actually wants you to do.
That doesn't seem to follow, actually. You could easily have a very large universe that's almost entirely empty space (which does "repeat"), plus a moderate number of structures that each appear only once.
And as a separate argument, plenty of processes are irreversible in practice. For instance, consider a universe that starts with a "big bang" event, like an ordinary explosion. I'd expect that universe to never return to its original intensely-exploding state, because explosions don't run in reverse, right?
Thanks for the mention, I had never heard of that concept before.
I have strong reflexes of revulsion against this idea that everything must reoccur (aren't plenty of processes irreversible in our world?), but it's getting too off-topic for the original article, and I need to think more about this.
"The experiment being repeated sufficiently often might be considered a reasonably mild restriction; in particular, it is a given if the universe is large enough that everything which appears once appears many times."
Why is that a given? The set of integers is very large, but the number 3 only appears once in it.
Another easy option for rolling an N-sided die is an N-sided prism: like a pencil that you roll on the table, it can only come to rest on one of its N sides (and never on the ends). With N = 3 it becomes a triangular prism that doesn't roll quite as well as we'd like, but it's doable.
Yet another option is a spinning top with N faces, where N can be anything >= 3.
But you're right that in practice, relabeling an existing die, e.g. a d6 as [1,1,2,2,3,3], is probably easiest.
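For what it's worth, the relabeling trick is easy to simulate. Here's a quick Python sketch (function names are just mine): a d3 via the [1,1,2,2,3,3] relabeling of a d6, plus a rejection-sampling variant that handles any N, not only divisors of 6:

```python
import random

def roll_d3_via_d6():
    """Simulate a d3 by 'relabeling' a d6 as [1,1,2,2,3,3]:
    faces 1-2 map to 1, faces 3-4 to 2, faces 5-6 to 3."""
    d6 = random.randint(1, 6)
    return (d6 + 1) // 2

def roll_dn_via_d6(n):
    """General case via rejection sampling: roll enough d6es
    (read as base-6 digits) to cover at least n outcomes, and
    reroll whenever the result falls in the unusable remainder."""
    while True:
        value, span = 0, 1
        while span < n:
            value = value * 6 + (random.randint(1, 6) - 1)
            span *= 6
        limit = span - span % n  # largest multiple of n <= span
        if value < limit:
            return value % n + 1
```

The rejection step is what keeps the result uniform: e.g. for n = 4 a single d6 gives 6 outcomes, so outcomes 5 and 6 are rerolled rather than folded unevenly onto 1-4.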
Agreed.
All information has a cost (time is a finite resource), the value of any arbitrary bit of information is incredibly variable, and there is essentially infinite information out there, including tremendous amounts of "junk".
Therefore, if you spend time on low-value information, on the grounds that it has non-zero positive value, then you have that much less time to spend on the high-value information that matters. You'll spend your conversational energy on trivialities and dead-ends rather than on the central principle. You'll scoop up grains of sand while ignoring the pearls next to you, so to speak. And that's bad.
Thanks, that worked.
Are pictures or links missing from this post? I can't see any of the Elon Musk or Neville Longbottom pics that the text talks about.
The link to "The Optimizer's Curse" in the article is dead at the moment (<https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf>), but I think I found it at <https://jimsmith.host.dartmouth.edu/wp-content/uploads/2022/04/The_Optimizers_Curse.pdf>. If that's the right one, can you update the link?