Furthermore, the brain is massively parallel and has a specialized architecture. For what it does, it's well optimized, at least compared to how well optimized our current software and hardware are for similar tasks. Laptop processors, for instance, are general-purpose: they can handle many different tasks, but they aren't especially fast or good at any one of them.
Intuitively this doesn't seem right at all: I can think of plenty of things that a human plus an external memory aid (like pencil and paper) can do that a laptop can't, but (aside from dumb hardware stuff like "connect to the internet" and so on) I can't think of anything for which the reverse is true; though I can think of plenty of things they can both do that a laptop does much faster. Or am I misinterpreting you?
That looks like losing your rationality by reading LessWrong. As does this by XiXiDu that he links to.
A couple of quotes from the latter strike me:
and
That is as it should be. Blindly following logic wherever it takes you is like strapping yourself to a rocket with no steering.
I've never been able to make sense of the traditional koans, not because I find them hard puzzles, but because I don't even see what puzzle is being posed. But we have here in the LessWrong material koans aplenty.
Mentalism cannot be true! Physicalism cannot be true!
Bayesian reasoning is the only way! We cannot do Bayesian reasoning!
Aumann agreement! Dissension among rational people!
Human intelligence is possible! After sixty years of trying we haven't the slightest idea how!
Trolley problems!
TORTURE vs. SPECKS!
Quantum suicide!
Give me all your money and I'll repay you 3^^^3-fold!
The Utility Monster!
The Repugnant Conclusion!
You spend one dead child at Starbucks every year!
Vast stakes depend on your slightest decision! You cannot evaluate them! You must evaluate them!
You have six hours to cut down a tree! It will take twelve hours to sharpen your axe! The first god we make will torture you forever for failing!
This one's easy; I'm guessing this is about "rational" people (LessWrongers, for instance) disagreeing. "Rational" in that sentence isn't the same as rational as defined in Aumann's paper.
Specifically, we're human beings: two of us don't necessarily share the same priors, or have common knowledge of each other's posteriors for every possible event A, as Aumann's theorem requires. So we're bound to disagree sometimes.
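To make the point concrete, here's a minimal sketch (with made-up numbers, not anything from Aumann's paper) of why differing priors alone are enough to block agreement: two agents update on the very same evidence via Bayes' rule and still end up with different posteriors.

```python
# Two Bayesian agents see the same evidence E for hypothesis H,
# but start from different priors, so their posteriors differ.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule."""
    return (prior * p_e_given_h) / (
        prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    )

# Same likelihoods (the shared evidence), different priors:
alice = posterior(0.9, 0.8, 0.3)  # prior 0.9 -> posterior 0.96
bob = posterior(0.2, 0.8, 0.3)    # prior 0.2 -> posterior 0.40
print(alice, bob)
```

Aumann's result only bites when the agents start from a common prior and have common knowledge of each other's posteriors; drop either assumption, as with real humans, and persistent disagreement is perfectly consistent with both parties reasoning correctly.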