
drethelin comments on Connecting Your Beliefs (a call for help) - Less Wrong Discussion

24 points. Post author: lukeprog 20 November 2011 05:18AM


Comments (73)

You are viewing a single comment's thread.

Comment author: rwallace 20 November 2011 07:10:15AM 2 points

That's actually a good question. Let me rephrase it to something hopefully clearer:

Compartmentalization is an essential safety mechanism in the human mind; it prevents erroneous far mode beliefs (which we all adopt from time to time) from having disastrous consequences. A man believes he'll go to heaven when he dies. Suicide is prohibited, as a patch for the obvious problem, but there's no requirement to make an all-out proactive effort to stay alive. Yet when he gets pneumonia, he gets a prescription for penicillin. Compartmentalization literally saves his life; in some cases it saves many other lives as well, as we saw when it failed on 9/11.

Here we have a case study in which a man of intelligence and goodwill redirected his entire life down a path of negative utility on the basis of reading a single paragraph of sloppy wishful thinking backed by no evidence whatsoever. (The most straightforward refutation of that paragraph is that creating a machine with even a noteworthy fraction of human intelligence is far beyond the capacity of any human mind; if such a machine were built, the relevant comparison would be with whatever created it, which would have to be the symbiosis of humanity and its technology as a whole, a symbiosis necessarily much more advanced than anything we have today.) What went wrong?

The most obvious part of the answer is that this is an error to which we geeks are particularly prone. (Supporting data: terrorists are disproportionately likely to have been trained in some branch of engineering.) Why? Well, we are used to dealing in domains where we can actually apply long chains of logic with success; particularly in the age range when we are old enough to have forgotten how fallible our first attempts at such logic were, yet young enough to still be optimists, it's an obvious trap to fall into.

Yet most geeks do actually manage to stay out of the trap. What else goes wrong?

It seems to me that there must be a parameter in the human mind for grasping the inertia of the world, for understanding at a gut level how much easier concept is than reality, that in five minutes we can think of ideas that the labor of a million people for a thousand years cannot realize. I suppose in some individuals this parameter is turned up too high, and they fall too easily into the trap of learned helplessness. In others it is turned too low, and those of us for whom this is the case undertake wild projects with little chance of success; and if ninety-nine fail for every one who succeeds, that can yet drive the ratchet of progress.

But we easily forget that progress is not really a ratchet, and the more advanced our communications, the more lethal bad ideas become. Just as our transport networks spread diseases like the 1918 flu, which killed more people in a single year than the First World War killed in four, so our communication networks spread parasite memes deadlier still. And we can't shut down the networks. We need them too badly.

I've seen the Singularity mutate from a harmless, even inspiring fantasy, to a parasite meme that I suspect could well snuff out the entire future of intelligent life. It's proving itself in many cases immune to any weight of evidence against it; perhaps worst of all, it bypasses ethical defenses, for it can be spread by people of honest goodwill.

Compartmentalization seems to be the primary remaining defense. When that fails, what have we left? This is not a rhetorical question; it may be one of the most important in the world right now.

Comment author: drethelin 20 November 2011 07:36:45AM 5 points

Compartmentalization may make ridiculous far beliefs have less of an impact on the world, but it also allows those beliefs to exist in the first place. If your beliefs about religion depended on the same sort of evidence that underpins your beliefs about whether your car is running, then you could no more be convinced of a religion than you could be convinced by a mechanic that your car "works" even though it does not start.

Comment author: rwallace 20 November 2011 07:40:42AM 4 points

So your suggestion is that we should de-compartmentalize, but in the reverse direction to that suggested by the OP, i.e. instead of propagating forward from ridiculous far beliefs, become better at back-propagating and deleting same? There is certainly merit in that suggestion if it can be accomplished. Any thoughts on how?

Comment author: drethelin 20 November 2011 07:48:13AM 2 points

You don't understand. Decompartmentalization doesn't have a direction. You don't go forwards towards a belief or backwards from a belief, or whatever. If your beliefs are decompartmentalized, that means the things you believe will reliably impact your other beliefs. This means that you don't get to CHOOSE what you believe. If you think the singularity is all-important and worth working for, it's BECAUSE all of your beliefs align that way, not because you've forced your mind to align itself with that belief after having it.

Comment author: rwallace 20 November 2011 07:57:33AM 4 points

I understand perfectly well how a hypothetical perfectly logical system would work (leaving aside issues of computational tractability etc.). But then, such a hypothetical perfectly logical system wouldn't entertain such far mode beliefs in the first place. What I'm discussing is the human mind, and the failure modes it actually exhibits.