If your probability assessment of the subjective experience of a Tuesday following the subjective experience of going to sleep on a Monday were 0.00001, you would expect it to happen around once in a lifetime. Personally, I would assign a much higher probability than that.
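A quick sanity check of that "once in a lifetime" figure, assuming a roughly 80-year lifespan with one night's sleep per day (those figures are my own assumptions, not from the comment):

```python
# Expected number of lifetime occurrences of an event with
# per-night probability 0.00001, assuming ~80 years of nights.
p_per_night = 0.00001
nights_in_lifetime = 80 * 365       # assumed lifespan
expected_occurrences = p_per_night * nights_in_lifetime
print(expected_occurrences)         # roughly 0.3
```

So 0.00001 per night comes out to a few tenths of an occurrence per lifetime, i.e. the right order of magnitude for "around once in a lifetime."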
I'm not sure if I understand how it would be dangerous to change how you feel about things based on evidence. We should strive to hold our beliefs lightly.
When I say belief, I mean the stuff that's in your mind and that affects the way you act. The question "What do I believe?" is a different question from "What's reasonable for me to believe?".
I never consciously formed my "Tuesday -> Wednesday but not Tuesday -> Monday" belief. I just noticed the belief when it got challenged. It produced a lot of stress. The fact that I know intellectually that my memory isn't perfect doesn't change the fact that I believe in my memory on an emotional level.
It might not be the best e...
Hello to all,
Like the rest of you, I'm an aspiring rationalist. I'm also a software engineer, so designing software solutions comes automatically to me; it's the first place my mind goes when thinking about a problem.
Today's problem is the fact that our beliefs all rest on beliefs that rest on beliefs. Each one has a <100% probability of being correct. Thus, each belief built on it has an even smaller chance of being correct.
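To make that compounding concrete, here is a small illustration under the simplifying assumption that each belief in a chain is, say, 95% likely given the belief beneath it and that the steps are independent (both assumptions are mine, chosen just for the example):

```python
# How uncertainty compounds down a chain of beliefs:
# a belief resting on a chain of n steps, each 95% likely
# given the previous one, is only 0.95**n likely overall.
for n in (1, 5, 10, 20):
    print(n, round(0.95 ** n, 3))
# 1  0.95
# 5  0.774
# 10 0.599
# 20 0.358
```

Even quite confident individual steps erode to near coin-flip territory once a belief sits ten or twenty levels up the stack.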
When we discover a belief is false (or less dramatically, revise its probability of being true), it propagates to all other beliefs that are wholly or partially based on it. This is an imperfect process and can take a long time (less in rationalists, but still limited by our speed of thought and inefficiency in recall).
I think that software can help with this. If a dedicated rationalist spent a large amount of time committing each of their beliefs to a database (including an honest assessment of its probability, both overall and conditional on all the beliefs it rests on being true), along with which other beliefs each one rests on, you would eventually have a picture of your belief network. The software could then alert you to contradictions between your estimate of a belief's probability and the estimate implied by the probabilities of the beliefs it rests on. It could also find cyclical beliefs and other inconsistencies. Plus, when you update a belief based on new evidence, it could spit out a list of beliefs that should be reconsidered.
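A minimal sketch of what such a tool's core might look like. Everything here is my own assumption, not a spec: the class and method names are invented, and the probability model is deliberately naive (premises treated as independent, so a belief's derived probability is its conditional probability times the product of its premises' probabilities):

```python
# Sketch of a belief database: each belief stores P(belief | premises true)
# plus the list of beliefs it rests on. Supports a naive derived-probability
# estimate, cycle detection, and listing dependents to reconsider.

class BeliefNetwork:
    def __init__(self):
        self.p_conditional = {}   # belief -> P(belief | all its premises true)
        self.premises = {}        # belief -> list of premise beliefs

    def add(self, name, p_conditional, premises=()):
        self.p_conditional[name] = p_conditional
        self.premises[name] = list(premises)

    def derived_probability(self, name, _path=None):
        # Naive estimate assuming independent premises; raises on cycles.
        _path = set() if _path is None else _path
        if name in _path:
            raise ValueError(f"cyclical belief involving {name!r}")
        p = self.p_conditional[name]
        for prem in self.premises[name]:
            p *= self.derived_probability(prem, _path | {name})
        return p

    def dependents(self, name):
        # All beliefs (transitively) resting on `name`:
        # the list to reconsider after `name` is revised.
        out, frontier = set(), {name}
        while frontier:
            nxt = {b for b, prems in self.premises.items()
                   if any(p in frontier for p in prems)} - out
            out |= nxt
            frontier = nxt
        return out

net = BeliefNetwork()
net.add("my memory is broadly reliable", 0.95)
net.add("I went to sleep on Monday", 0.99,
        ["my memory is broadly reliable"])
net.add("today is Tuesday", 0.9,
        ["I went to sleep on Monday"])
print(net.derived_probability("today is Tuesday"))   # 0.9 * 0.99 * 0.95
print(net.dependents("my memory is broadly reliable"))
```

A real version would need a proper probabilistic model (shared premises shouldn't be double-counted the way this sketch does), but the skeleton shows the three features above: comparing stated vs. derived probabilities, flagging cycles, and propagating "please reconsider" lists when a belief is revised.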
Obviously, this would only work if you are brutally honest about what you believe and fairly accurate about your assessments of truth probabilities. But I think this would be an awesome tool.
Does anyone know of an effort to build such a tool? If not, would anyone be interested in helping me design and build such a tool? I've only been reading LessWrong for a little while now, so there's probably a bunch of stuff that I haven't considered in the design of such a tool.
Yours rationally,
Avi