At the moment I’m using yEd to create a dependency map of the Sequences, which is roughly equivalent to creating what I guess you could call an inferential network. Since embarking on this project I’ve discovered just how useful a structured visual map can be: it helps with finding the weak points of various conclusions, establishing how important a particular idea is to the validity of the entire body of writing, and using a post’s depth in the hierarchy as a heuristic for the inferential distance to the concepts it contains.
So I’m thinking that the main use of a belief network mapping tool might not be allowing updates to propagate through a personal network, but creating networks representing bodies of public knowledge — for example, the standard model of physics. As you can imagine, this would be immensely useful for both research and education. For research, such a network would point to the places where (for example) the standard model is weak; for education, it would lay out the order in which concepts should be taught so that students can form an accurate internal working model without getting confused.
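The teaching-order idea is essentially a topological sort of the concept dependency graph. Here is a minimal sketch using Python's standard `graphlib`; the toy physics dependencies are purely illustrative, not a claim about the actual structure of the field:

```python
from graphlib import TopologicalSorter

# Hypothetical toy dependency map: each concept maps to the concepts it rests on.
dependencies = {
    "quantum field theory": {"quantum mechanics", "special relativity"},
    "quantum mechanics": {"linear algebra"},
    "special relativity": {"classical mechanics"},
    "linear algebra": set(),
    "classical mechanics": set(),
}

# static_order() yields each concept only after all of its prerequisites,
# i.e. one valid order in which to teach them.
teaching_order = list(TopologicalSorter(dependencies).static_order())
print(teaching_order)
```

`TopologicalSorter` also raises `CycleError` if the dependency map is circular, which is itself a useful sanity check on a body of knowledge.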
TL;DR: Yes, I’d love to help you design and build such a tool.
That's a great idea. And in the domain of physics, it might be a lot easier to quantify the probability that a belief is wrong, and to map which theories rest upon which. The same could be done for pure mathematics.
Hello to all,
Like the rest of you, I'm an aspiring rationalist. I'm also a software engineer, so designing software solutions is the first place my mind goes when thinking about a problem.
Today's problem is the fact that our beliefs all rest on beliefs that rest on beliefs. Each one has a <100% probability of being correct. Thus, each belief built on it has an even smaller chance of being correct.
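That shrinkage is just multiplication. As a quick illustration (assuming the premises are independent and the inference step itself is sound — both big assumptions in practice):

```python
# Three premises, each 90% likely to be true.
premise_probs = [0.9, 0.9, 0.9]

# A conclusion that requires all of them can be at most as likely
# as their product.
conclusion_prob = 1.0
for p in premise_probs:
    conclusion_prob *= p

print(round(conclusion_prob, 3))  # 0.729
```

Three fairly confident premises already drop the conclusion below 73%, which is why propagating revisions matters.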
When we discover a belief is false (or less dramatically, revise its probability of being true), it propagates to all other beliefs that are wholly or partially based on it. This is an imperfect process and can take a long time (less in rationalists, but still limited by our speed of thought and inefficiency in recall).
I think that software can help with this. Suppose a dedicated rationalist spent a large amount of time committing each of their beliefs to a database — including a rational assessment of its probability, both overall and conditional on all the beliefs it rests on being true — along with which other beliefs each one rests on. You would eventually have a picture of your belief network. The software could then alert you to contradictions between your estimate of a belief's probability and the estimate implied by the beliefs it rests on. It could also find cyclical beliefs and other inconsistencies. Plus, when you update a belief based on new evidence, it could spit out a list of beliefs that should be reconsidered.
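To make the design concrete, here is a minimal sketch of those three features — cycle detection, contradiction flagging, and listing downstream beliefs to revisit. All names here are hypothetical, and the "implied bound" uses the crude independence assumption from above rather than a real Bayesian-network calculation:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    name: str
    prob: float                                   # your stated probability it is true
    parents: list = field(default_factory=list)   # names of beliefs it rests on

class BeliefNetwork:
    """Toy sketch; assumes every parent named is also registered in the network."""

    def __init__(self):
        self.beliefs = {}

    def add(self, belief):
        self.beliefs[belief.name] = belief

    def find_cycle(self):
        # Depth-first search over the "rests on" edges; grey = on current path.
        WHITE, GREY, BLACK = 0, 1, 2
        color = {name: WHITE for name in self.beliefs}

        def visit(name):
            color[name] = GREY
            for parent in self.beliefs[name].parents:
                if color[parent] == GREY:
                    return True                   # back edge: cyclical beliefs
                if color[parent] == WHITE and visit(parent):
                    return True
            color[name] = BLACK
            return False

        return any(visit(n) for n in self.beliefs if color[n] == WHITE)

    def implied_upper_bound(self, name):
        # Crude heuristic: a belief can be no more likely than the product of
        # its parents' probabilities (treating the parents as independent).
        bound = 1.0
        for parent in self.beliefs[name].parents:
            bound *= self.beliefs[parent].prob
        return bound

    def contradictions(self):
        # Beliefs you rate as more likely than their own foundations allow.
        return [n for n, b in self.beliefs.items()
                if b.prob > self.implied_upper_bound(n)]

    def downstream(self, name):
        # Everything that should be reconsidered when `name` is revised.
        hit, frontier = set(), {name}
        while frontier:
            frontier = {n for n, b in self.beliefs.items()
                        if set(b.parents) & frontier} - hit
            hit |= frontier
        return hit
```

For example, if "prediction" rests on "theory" (stated at 0.9) but you rate it at 0.97, `contradictions()` would flag it, and revising "evidence" would list every belief built on it via `downstream("evidence")`.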
Obviously, this would only work if you are brutally honest about what you believe and fairly accurate about your assessments of truth probabilities. But I think this would be an awesome tool.
Does anyone know of an effort to build such a tool? If not, would anyone be interested in helping me design and build such a tool? I've only been reading LessWrong for a little while now, so there's probably a bunch of stuff that I haven't considered in the design of such a tool.
Yours rationally,
Avi