Saying "X is emergent" is conveying some information, if there is someone in the room that does not already know this fact. Here is an example:
Quarks are emergent.
This is not an explanation, though. It is more like an anti-explanation. I just claimed that there is an underlying explanation for quarks, and then stopped. I told you to make space for an explanation in your mental world model, and then I left that space empty. If you believed my statement, and you don't already know how quarks emerge and from what, I just made an explanation-shaped hole in your mind. This is not nice of me.
But at least you now know that there is an explanation to be found. When you thought quarks were fundamental, you did not even know to look, because fundamental things cannot be explained, only described.
Okay...
So, say it turns out that, well, Eve is irrational. Somehow.
Now what? Do we go "neener-neener" at her? What's the point? What's the use that you could get out of labeling this behavior irrational?
Suppose Adam dies and is cryo-frozen. During Eve's life, there will be no resuscitation of Adam. Sometime afterward, however, Omega will arrive, deem the problem interesting and simulate Adam via really really really advanced technology.
Turns out he didn't do it.
Is she now rational because, well, it turns out she was right after all? Well, no, because getting the right answer for the wrong reasons is not the rational way to go about things (in general; it might help in specific cases where you need to get the answer right but don't care how).
....
Actually, let me just skip over a few paragraphs I was going to write and skip to the end.
You cannot have 100% confidence. If you do, your belief is set in stone and it cannot change. You can have a googolplex nines if you want, but not 100% confidence.
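To make this concrete: under Bayes' rule, a prior of exactly 1 can never move, no matter how strongly the evidence cuts against it. A minimal sketch (the numbers are invented purely for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Evidence that is 100x more likely if the hypothesis is FALSE:
print(bayes_update(0.999, 0.01, 1.0))  # drops below the prior
print(bayes_update(1.0, 0.01, 1.0))    # stays exactly 1.0, forever
```

With any prior strictly between 0 and 1, the same evidence moves the belief; at exactly 1, the `(1 - prior)` term vanishes and no evidence can ever budge it.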
Fallacy of argument from probability (if it can happen, then it must happen) aside: how is it rational to discard a belief you are holding on shaky evidence just because you think, with near-absolute certainty, that no more evidence will ever arrive? What will you do when there is more evidence? (Hint: meeting Adam's mother at the funeral and hearing childhood stories about what a nice kid he was is more evidence about his character, albeit very weak evidence; so are studies showing that certain demographics of the time period Adam lived in had certain characteristics.) You gotta update! (I don't think the fallacy I mentioned applies; if it does, we can fix it with big numbers: if you are to hold this belief everywhere, then the probability goes up as the claim turns from "in this situation" to "in at least one of all these situations".)
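The "big numbers" move in that last parenthetical can be sketched numerically: even a tiny per-situation probability climbs toward certainty across enough situations (assuming, for simplicity, that the situations are independent):

```python
def p_at_least_once(p_single, n_situations):
    """P(event happens in at least one of n independent situations)."""
    return 1 - (1 - p_single) ** n_situations

print(p_at_least_once(1e-6, 1))           # ~0.000001: negligible in one case
print(p_at_least_once(1e-6, 10_000_000))  # ~0.99995: near-certain across many
```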
So tossing a belief aside because you think there will be no more evidence is, to me, the wrong action. You can park a belief: take no action, maintain the status quo. No change in input means no change in output. But you do NOT clear the belief.
Let me put up a strawman (I'll leave it to others to see if there's something harder underneath): if you hold this action - "I think there will be no more evidence, and I am not very confident either way, so I will discard the output" - to be the right action to take, how do you prevent yourself from getting boiled like the proverbial frog in a pot? (Yes, that's a false story; I still intend the metaphorical meaning: how do you stop yourself from discarding every bit of evidence that comes your way, because you "know" there to be no more evidence?)
In my opinion, to do as you say weakens or even destroys the gradual "update" mechanism. This leads to less effective beliefs, and thus is irrational.
Were we to now look at the three questions, I'd answer:
Again, Eve is irrational because she says it cannot be falsified. If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence doesn't exist, I will keep believing this forever" - then yes, Eve is rational.
The second question: yes, via this specific example. Here it can, thus it can.
Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing. As in, gee, a couple thousand years ago, something truly fantastic happened - it was God! I saw it with my own eyes! You can keep believing there was, at that point in time, an entity causing this fantastic thing. Until you get other evidence, which may never happen. What you CANNOT do is say, "hey, maybe this 'God' that caused this one fantastic thing is also responsible for creating the universe and making my neighbor win the lottery and my aunt get cancer and ..." That's unloading a huge complexity on an earlier belief without paying appropriate penalties.
You don't only need evidence that the fantastical events were caused; you also need evidence that they were caused by the same thing, if you wish to attribute them to that same thing.
Assume I observe X, Y, and Z and form three hypotheses, A, B, and C. A obviously has the highest probability, since it includes B and C as special cases. However, which of B and C do you think should get a complexity penalty over the other?
In your story:
The relevant comparison is: given that God did X, what is the probability that God also did Y and Z, versus that God did not do those things?
P(God did Y, Z | God did X) = P(God did X,Y, Z) / P(God did X)
vs.
P(God did not do Y, Z | God did X) = P(God did X, and something other than God did Y, Z) / P(God did X)
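As a sanity check on these two formulas, here is a toy joint distribution over three exhaustive possibilities; the probabilities are invented purely for illustration:

```python
# Toy worlds: who caused the observed events X, Y, Z.
joint = {
    "God did X, Y, Z": 0.02,
    "God did X; something else did Y, Z": 0.03,
    "something else did X (God uninvolved)": 0.95,
}

# P(God did X) sums over every world where God did X.
p_god_did_x = (joint["God did X, Y, Z"]
               + joint["God did X; something else did Y, Z"])

# The two conditionals from the text, by the division rule:
p_yz_given_x = joint["God did X, Y, Z"] / p_god_did_x
p_not_yz_given_x = joint["God did X; something else did Y, Z"] / p_god_did_x

print(p_yz_given_x, p_not_yz_given_x)  # they partition the "God did X" case
```

Since the two numerators split P(God did X) between them, the two conditionals sum to 1, which is exactly the comparison the text sets up.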
I am uncertain about how to correctly apply a complexity penalty, but I do believe that the multi-explanation model "God did X, and something other than God did Y and Z" should get a complexity penalty over the single-explanation model "God did X, Y, and Z".
The belief "God caused some tiny thing, a couple of thousand years ago" should correlate with the belief "God did this big thing right now". This is why I firmly believe that God did not cause some tiny thing, a couple of thousand years ago.