This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy; everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's willingness to admit ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
I assume you are talking hypothetically and not really saying that we, in reality, have these priors? Is there an article about this 'small world' vs. 'big world' distinction?
This went completely over my head. Why did you bring agents into the conversation?
I had in mind Solomonoff Induction.
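To give a concrete (and very rough) sense of the kind of prior Solomonoff Induction uses, here's a toy sketch of my own: a handful of made-up hypotheses about a bit stream, each weighted by 2^(-description length) and then updated with Bayes' rule on some example data. The hypothesis names, the description lengths, and the data stream are all invented for illustration; real Solomonoff Induction ranges over every computable program and is uncomputable, so this is only meant to show the shape of the idea.

```python
# Toy illustration only, not actual Solomonoff Induction: a small, hand-picked
# hypothesis space with prior weight 2^-(description length), updated on data.
from fractions import Fraction

# Hypothetical hypotheses: (name, description length in bits, predictor).
# Each predictor returns the probability that the next bit is 1, given history.
hypotheses = [
    ("always 0",  2, lambda history: Fraction(0)),
    ("always 1",  2, lambda history: Fraction(1)),
    ("alternate", 4, lambda history: Fraction(1) if len(history) % 2 == 0 else Fraction(0)),
    ("fair coin", 6, lambda history: Fraction(1, 2)),
]

# Prior: shorter descriptions get exponentially more weight, then normalise.
weights = {name: Fraction(1, 2 ** length) for name, length, _ in hypotheses}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

observed = [1, 0, 1, 0]  # example data stream (also made up)

for i, bit in enumerate(observed):
    history = observed[:i]
    # Bayes update: multiply each hypothesis's weight by the probability
    # it assigned to the bit that actually occurred, then renormalise.
    for name, _, predict in hypotheses:
        p_one = predict(history)
        likelihood = p_one if bit == 1 else 1 - p_one
        posterior[name] *= likelihood
    norm = sum(posterior.values())
    posterior = {name: w / norm for name, w in posterior.items()}

print(posterior)  # "alternate" ends up dominating after seeing 1, 0, 1, 0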
Here's the last time that came up; I think it's mostly discussed in the margins rather than in an article of its own.
Ah, because when we talk about how to model problems (and I think Bayesian rationality is an example of that), agents are the things that do the modelling.