VincentYu comments on Naming the Highest Virtue of Epistemic Rationality - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There is a major issue with the proposed scoring: it is underspecified. In particular, consider the definition:
How do we determine whether B is in an agent's set of beliefs? We cannot consider only the beliefs currently running through the agent's mind, because we'd end up with at most a few. We need a definition of what "B is in your beliefs" means. However, it is very difficult to specify all of an agent's beliefs - humans don't walk around carrying a well-defined sack of beliefs with probabilities attached.
Less importantly, the linearity in the sum can be exploited. For example, I can easily get myself to believe the following sequence of statements in Peano arithmetic:
1=1
2=2
3=3
...
This will give me a favorable score with minimal effort. At least in this case, the proposed scoring is orthogonal to measuring epistemic rationality.
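To make the exploit concrete, here is a minimal sketch. The original post doesn't pin down the scoring rule, so I'm assuming a per-belief reward of log2(p / 0.5) for true statements (positive whenever you're more confident than a coin flip); the exact rule doesn't matter so long as each additional correct belief adds to the sum:

```python
import math

def belief_score(p, is_true):
    """Reward in bits relative to a 50/50 prior: log2(p/0.5) if the
    statement is true, log2((1-p)/0.5) if it is false. (Assumed rule;
    the original post leaves the per-belief score unspecified.)"""
    q = p if is_true else 1 - p
    return math.log2(q / 0.5)

def total_score(beliefs):
    """Linear sum over the belief set - the linearity being exploited."""
    return sum(belief_score(p, truth) for p, truth in beliefs)

# An honest belief set: a few substantive, roughly calibrated predictions.
honest = [(0.8, True), (0.7, False), (0.9, True)]

# The same set padded with 1000 trivial Peano-arithmetic tautologies
# ("1=1", "2=2", ...) believed with near-certainty at essentially no effort.
padded = honest + [(0.999, True)] * 1000

print(total_score(honest))  # small
print(total_score(padded))  # hundreds of bits higher, with zero added insight
```

The padded agent's score dwarfs the honest one's, even though no actual epistemic work was done - which is the sense in which the linear sum is orthogonal to epistemic rationality.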
If, when asked "how much do you believe B?", your neural net gives an answer by remembering instead of sciencing, then B is in your beliefs. This seems like it would work, but I just thought of it and I'm not sure.