This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
I am getting to grips with the basics of Bayesian rationality and there is something I would like to clarify. For this comment please assume that whenever I use the word 'rationality' I mean 'Bayesian rationality'.
I feel there is too strong a dependency between rationality and available data. If our current understanding is close to the truth, then rational assessment will be effective. But in any complex subject the data is so inconclusive that there is a high chance we cannot even conceive of the right hypothesis, let alone rationally choose it over its alternatives. No? I will give a simplified example.
In this post it is said:
It then goes on to explain how we rationally choose between the options. That is all good. Suppose, though, that the actual cause of the headache is psychosomatic, and that the culture in which the experiment takes place has no concept of psychosomatic causes. They always think it is either cancer or a cold, and most of the time it is. Is it not true that a rational assessment of the situation will fail? How would someone with a sound rational mind approach that situation (in the world of the thought experiment)?
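To make the worry concrete, here is a minimal sketch of that update. The priors and likelihoods are numbers I made up purely for illustration, not anything from the post:

```python
# Hypothesis space of the culture in the thought experiment: only {cold, cancer}.
# The true cause (psychosomatic) is not representable, so it can never
# receive any probability, no matter what data comes in.

priors = {"cold": 0.95, "cancer": 0.05}        # made-up prior beliefs
likelihoods = {"cold": 0.60, "cancer": 0.90}   # made-up P(headache | hypothesis)

# Bayes' rule: P(h | headache) is proportional to P(headache | h) * P(h)
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior)
# -> roughly {'cold': 0.93, 'cancer': 0.07}
# The posterior sums to 1 over the two conceivable hypotheses;
# the actual cause gets exactly zero because it was never on the list.
```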
Science deals with this by not accepting explanations as truths until they are confirmed experimentally (well, in an idealized science; in reality scientists jump into philosophical speculation all too often). But rationality can only be effective if we assume that we are already quite close to an accurate understanding of nature, and I hope you will agree that the evidence does not indicate that at all.
Am I missing something here?
Note that experimental confirmation isn't really the issue here; experiments just give you data, and the problem here is conceptual (the actual truth isn't in the hypothesis space).
Most Bayes is "small world" Bayes, where you have conceptual and logical omniscience, which is possible only because of how small the problem is. "Big world" Bayes has universal priors that give you that conceptual omniscience.
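As a crude illustration of the contrast (not a real universal prior, just a toy catch-all hypothesis whose likelihood I am assuming arbitrarily):

```python
# Reserve some prior mass for "something I haven't thought of".
# This is nothing like an actual universal prior; it just gestures at the
# role one is supposed to play.

priors = {"cold": 0.90, "cancer": 0.05, "other": 0.05}
likelihoods = {"cold": 0.60, "cancer": 0.90, "other": 0.50}  # vague likelihood for the catch-all

unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior)
# The catch-all keeps some probability mass available for explanations the
# agent hasn't conceived of -- at the cost of saying nothing specific about them.
```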
In order to make a real agent, you need a language of conceptual uncertainty, logical uncertainty, and naturalization.