David_Bolin
David_Bolin has not written any posts yet.

I don't think belief in life after death necessarily indicates a wish to live longer than we currently do. I think it results from the fact that it seems incoherent to expect your own consciousness to cease to exist: if you expect that to happen, what experience would fulfill that expectation?
Obviously none. The only expectation that could even in principle be fulfilled by experience is the expectation that your consciousness will continue to exist. This doesn't prove that your consciousness will in fact continue, but it is probably the reason there is such a strong tendency to believe it will.
This article here talks about how very...
Ok. In that sense I agree that this is likely to be the case, and would be the case more often than not with any educated person's assessment of who does rigorous work.
It is not that these statements are "not generally valid", but that they are not included within the axiom system used by H. If we attempt to include them, there will be a new statement of the same kind which is not included.
Obviously such statements will be true if H's axiom system is true, and in that sense they are always valid.
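As a sketch of what such a statement looks like formally (notation assumed for illustration: Prov_H is a provability predicate for H's axiom system, and ⌜·⌝ is a Gödel numbering):

$$ G_H \;\leftrightarrow\; \neg\,\mathrm{Prov}_H\!\left(\ulcorner G_H \urcorner\right) $$

If H's system is consistent, it cannot prove G_H, so what G_H asserts holds: G_H is true but unprovable for H. Adding G_H as a new axiom yields a strengthened system, which has its own unprovable sentence of the same kind, exactly as described above.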
How does this not come down to saying that the people you consider rigorous did, on average, more work on their texts than the people you don't consider rigorous, and therefore wrote less overall?
If we take a random (educated) person, and ask him to classify authors into rigorous and non-rigorous, something similar should be true on average, and we should find similar statistics. I can't see how that shows some deep truth about the nature of rigorous thought, except that it means doing more work in your thinking.
I agree that it does mean at least that, so that if e.g. some author has written more than 100 books, that is a pretty good sign that he is not worth reading, even if it is not a conclusive one.
I looked at your specified program. The case there is basically the same as the situation I mentioned, where I say "you are going to think this is false." There is no way for you to have a true opinion about that, but there is a way for other people to have a true opinion about it.
In the same way, you haven't proved that no one and nothing can prove that the program will not halt. You simply proved that there is no proof in the particular language and axioms used by your program. When you proved that the program will not halt, you were using a different language and axioms. In the same way, you can't get that statement right ("you will think this is false") because it behaves as a Filthy Liar relative to you. But it doesn't behave that way relative to other people, so they can get it right.
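Since the specified program itself isn't quoted here, a minimal Python sketch of the kind of program under discussion, assuming a hypothetical proof checker `proves_in_S` for one fixed axiom system S (that helper and its signature are illustrative, not from the original):

```python
from itertools import count
from typing import Callable

def make_searcher(proves_in_S: Callable[[int, str], bool]):
    """Build the program under discussion. `proves_in_S(n, stmt)` is a
    hypothetical black box returning True iff n encodes a valid S-proof
    of stmt."""
    def searcher() -> None:
        # Enumerate every candidate proof; halt the moment S proves
        # that this very program never halts.
        for candidate in count():
            if proves_in_S(candidate, "searcher never halts"):
                return
    return searcher
```

If S is sound, searcher never halts: halting would mean S had proved a falsehood. So "searcher never halts" is true, yet S cannot prove it, or searcher would find that proof and halt. Notice that this argument is carried out in our meta-language, not inside S, which is the point above.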
I said "so the probability that a thing doesn't exist will be equal to or higher than etc." exactly because the probability would be equal if non-existence and logical impossibility turned out to be equivalent.
If you don't agree that no logically impossible thing exists, then of course you might disagree with this probability assignment.
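The inequality being relied on is just monotonicity of probability under implication; a minimal rendering, assuming (as above) that logical impossibility implies non-existence:

$$ \text{impossible}(x) \Rightarrow \neg\,\text{exists}(x) \quad\text{yields}\quad P(\neg\,\text{exists}(x)) \;\ge\; P(\text{impossible}(x)), $$

with equality exactly in the case mentioned, where non-existence and logical impossibility turn out to be equivalent.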
Also, there is definitely some objective fact where you cannot get the right answer:
"After thinking about it, you will decide that this statement is false, and you will not change your mind."
If you conclude that this is false, then the statement will be true. No paradox, but you are wrong.
If you conclude that this is true, then the statement will be false. No paradox, but you are wrong.
If you make no conclusion, or continually change your mind, then the statement will be false. No paradox, but the statement is undecidable to you.
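The three cases can be checked mechanically. A small Python sketch (the verdict labels are illustrative, not from the original):

```python
# S = "After thinking about it, you will decide that this statement is
# false, and you will not change your mind."

def evaluate(verdict: str) -> str:
    # S is true exactly when you stably conclude "false".
    if verdict == "settled-false":
        return "S is true; you said false -> you are wrong"
    if verdict == "settled-true":
        return "S is false; you said true -> you are wrong"
    return "S is false; you reached no stable verdict -> undecidable to you"

for v in ("settled-false", "settled-true", "unsettled"):
    print(v, "->", evaluate(v))
```

In every branch there is a definite fact about S, but no branch lets you both state it and stick to it correctly.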
There is no program such that no Turing machine can determine whether it halts or not. But no Turing machine can take every program and determine whether or not each of them halts.
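Written with explicit quantifiers (p ranging over programs, M over Turing machines, Halts(p) meaning that p halts), the two claims differ only in quantifier order:

$$ \forall p\;\exists M:\ M \text{ correctly decides } \mathrm{Halts}(p), \qquad\text{but}\qquad \neg\,\exists M\;\forall p:\ M \text{ correctly decides } \mathrm{Halts}(p). $$

The first is trivially true: for any fixed p, either the machine that always outputs "halts" or the one that always outputs "does not halt" answers correctly. The second is Turing's undecidability theorem.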
It isn't actually clear to me that you are a Turing machine in the relevant sense, since there is no context where you would run forever without halting, and there are contexts where you will output inconsistent results.
But even if you are, it simply means that there is something undecidable to you -- the examples you find will be about other Turing machines, not yourself. There is nothing impossible about that, because you don't and can't understand your own source code sufficiently well.
I've seen this kind of thing happen before, and I don't think it's a question of demographics or sockpuppets. Basically I think a bunch of people upvoted it because they thought it was funny, then after there were more comments, other people more thoughtfully downvoted it because they saw (especially after reading more of the comments) that it was a bad idea.
So my theory is that it was a question of differences in timing and in whether or not other people had already commented.
"If that was so, they'd get the same wobbly feeling on hearing the fire alarm, or even more so, because fire alarms correlate to fire less than does smoke coming from under a door. "
I do get that feeling even more so, in exactly that situation. I therefore normally do not respond to fire alarms.