I have been talking to some people in my dorm (a few specific people I thought would benefit from and appreciate it) and teaching them rationality. Thinking about which skills should be taught first led me to ask which skill is most important to me as a rationalist.
I decided to start with the question “What does it mean to be able to test something with an experiment?”, which could also be phrased as “What does it mean for something to be falsifiable?”
To illustrate the point I brought up Carl Sagan’s thought experiment of the dragon in the garage, which goes roughly as follows:
Carl: There is a dragon in my garage.
Me: I thought dragons only existed in legends; I want to see for myself.
Carl: Sure, follow me and have a look.
Me: I don’t see a dragon in there.
Carl: My dragon is invisible.
Me: Let me throw some flour in, so I can see where the dragon is by the disruption of the flour.
Carl: My dragon is incorporeal.
And so on.
The answer I was trying to elicit was along these lines: if something can be tested by an experiment, then it must have at least one effect that differs depending on whether it is true or false. Conversely, if something has at least one effect that differs depending on whether it is true or false, then I can, at least in theory, test it with an experiment.
This led me to the statement:
If something cannot, at least in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.
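One way to make this more precise (a rough sketch in probability terms, using my own notation rather than anything from the conversation): write H for the claim being tested and E for some possible observation. Then H is testable, at least in theory, exactly when there is some observation E with

P(E | H) ≠ P(E | not-H)

If no possible observation has a different probability depending on whether H is true, then nothing you could ever see counts as evidence about H, which is the sense in which H “has no effect on the world.”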
Anthony (the person I was talking to at the time) opened his counterargument by pointing out that an object in a thought experiment cannot be tested for, yet still has meaning.
So I revised my statement: any object that, if brought into the real world, could not be tested for has no meaning. This rests on the assumption that if an object could not be tested for in the real world, it also has no effect on anything in the thought experiment; i.e., the story with the dragon would have gone the same way regardless of the dragon’s truth value if it were in the real world.
The discussion then turned to whether it could be rational to hold a belief that could not, even in theory, be tested. It became interesting when Anthony argued that if believing in a dragon in your garage gave you happiness, and the world would be the same either way apart from that happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.
I responded that truth trumps happiness: believing in the dragon would force you to hold a false belief, which is not worth the amount of happiness gained by believing it. I argued further that it would in fact be a false belief, because P(world) > P(world) × P(impermeable invisible dragon), which is a simple Occam’s razor argument.
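To spell out that Occam’s razor step (again in my own notation): write W for everything we already believe about the world and D for the added claim that there is an impermeable, invisible dragon. By the conjunction rule,

P(W and D) = P(W) × P(D | W) ≤ P(W)

with strict inequality whenever P(D | W) < 1. Adding the dragon to the account can only lower its probability, never raise it, so the dragon-free account is always at least as probable.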
My intended direction from this point was to apply these arguments to theology, but we ran out of time and have not had a chance to talk again, so that may become a future post.
Today, however, Shminux pointed out to me that I hold beliefs that are themselves non-falsifiable. I realized then that it might be rational to believe non-falsifiable things for two reasons (I’m sure there are more, but these are the main ones I can think of; please comment with your own):
1) The belief has a beauty to it that fits with falsifiable beliefs and makes known facts fit together more neatly. (This is very dangerous and should not be used lightly, because it leans too heavily on opinion.)
2) You believe that the belief will someday allow you to produce an original theory that will be falsifiable.
Both of these reasons, if not used very carefully, will let false beliefs in. So I decided that if a belief or new theory meets these conditions well enough to make me want to believe it, I should put it into a special category of my thoughts (perhaps “conjectures”). This category should carry less weight than beliefs but still be held as part of how the world works, and anything in it should always be striving to leave, meaning I should always work to turn a non-falsifiable conjecture into either a genuine belief or something disproven.
Note: This is my first post, so in addition to discussing the post itself, critiques of the writing are very welcome by PM.
This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.
One way to think about what is accomplished when you perform a thought experiment is that you are performing an experiment where the subject is your brain. The goal is to figure out what your brain thinks will happen, and statements about such things are falsifiable statements about brains.
The world is not the same either way because the dragon-believer is not the same either way. If the dragon-believer actually believes that there's a dragon in her garage (as opposed to believing in her belief that she has a dragon in her garage), that belief can affect how she makes other decisions. Truths are entangled and lies are contagious.
Why?
Can you give some examples of beliefs with this property?
Why call it a belief instead of an idea, then? (And why the emphasis on originality?)
Or at the very least, read Eliezer's new epistemology sequence, which directly addresses the questions at the heart of the OP.