
Morendil comments on Connecting Your Beliefs (a call for help) - Less Wrong Discussion

24 points | Post author: lukeprog | 20 November 2011 05:18AM




Comment author: rwallace 20 November 2011 07:10:15AM 2 points

That's actually a good question. Let me rephrase it to something hopefully clearer:

Compartmentalization is an essential safety mechanism in the human mind; it prevents erroneous far-mode beliefs (which we all adopt from time to time) from having disastrous consequences. A man believes he'll go to heaven when he dies. Suicide is prohibited as a patch for the obvious problem, but there's no requirement to make an all-out proactive effort to stay alive. Yet when he gets pneumonia, he gets a prescription for penicillin. Compartmentalization literally saves his life - and in some cases many other lives, as we saw when it failed on 9/11.

Here we have a case study in which a man of intelligence and goodwill redirected his entire life down a path of negative utility on the basis of reading a single paragraph of sloppy wishful thinking backed by no evidence whatsoever. (The most straightforward refutation of that paragraph: creating a machine with even a noteworthy fraction of human intelligence is far beyond the capacity of any individual human mind; if such a machine were built, the relevant comparison would be with whatever created it, which would have to be a symbiosis of humanity and its technology as a whole - a symbiosis necessarily far more advanced than anything we have today.) What went wrong?

The most obvious part of the answer is that this is an error to which we geeks are particularly prone. (Supporting data: terrorists are disproportionately likely to have been trained in some branch of engineering.) Why? We are used to working in domains where long chains of logic can actually be applied with success; and in the age range when we are old enough to have forgotten how fallible our first attempts at such logic were, yet still young enough to be optimists, it's an obvious trap to fall into.

Yet most geeks do actually manage to stay out of the trap. What else goes wrong?

It seems to me that there must be a parameter in the human mind for grasping the inertia of the world - for understanding at a gut level how much easier concept is than reality, that we can think up in five minutes ideas that the labor of a million people for a thousand years could not realize. In some individuals this parameter must be turned up too high, and they fall too easily into the trap of learned helplessness. In others it must be turned down too low, and those of us for whom this is the case undertake wild projects with little chance of success; and if ninety-nine fail for every one who succeeds, that can still drive the ratchet of progress.

But we easily forget that progress is not really a ratchet, and the more advanced our communications, the more lethal bad ideas become. Just as our transport networks spread diseases like the 1918 flu, which killed more people in a single year than the First World War killed in four, so our communication networks spread parasite memes deadlier still. And we can't shut down the networks; we need them too badly.

I've seen the Singularity mutate from a harmless, even inspiring fantasy into a parasite meme that I suspect could well snuff out the entire future of intelligent life. In many cases it has proven immune to any weight of evidence against it; perhaps worst of all, it bypasses ethical defenses, for it can be spread by people of honest goodwill.

Compartmentalization seems to be the primary remaining defense. When that fails, what have we left? This is not a rhetorical question; it may be one of the most important in the world right now.

Comment author: Morendil 20 November 2011 08:28:20PM 2 points

> a parasite meme that I suspect could well snuff out the entire future of intelligent life

How do you propose that would happen?

Comment author: rwallace 20 November 2011 09:44:37PM 3 points

We've had various kinds of Luddism before, but this one is particularly lethal in appealing to people who had been technophiles. If it spreads enough, the best-case scenario is that the pool of people willing to work on real technological progress shrinks; the worst case is regulation that snuffs out progress entirely, and we get to sit around bickering about primate politics until whatever window of time we had runs out.

Comment author: Morendil 21 November 2011 07:43:30AM 2 points

That's awfully vague. "Whatever window of time we had", what does that mean?

There's one kind of "technological progress" that SIAI opposes, as far as I can tell: working on AGI without an explicit focus on Friendliness. Now if you happen to think that AGI is a must-have to ensure the long-term survival of humanity, it seems to me that you're already pretty much on board with the essential parts of SIAI's worldview - indistinguishable from them, as far as the vast majority is concerned.

Otherwise, there's plenty of tech that is entirely orthogonal to the claims of SIAI: cheap energy, health, MNT (molecular nanotechnology), improving software engineering (so-called), and so on.

Comment author: rwallace 21 November 2011 10:16:55AM 3 points

> That's awfully vague. "Whatever window of time we had", what does that mean?

The current state of the world is unusually conducive to technological progress. We don't know how long this state of affairs will last. Maybe a long time, maybe a short time. To fail to make progress as rapidly as we can is to gamble the entire future of intelligent life on it lasting a long time, without evidence that it will do so. I don't think that's a good gamble.

> There's one kind of "technological progress" that SIAI opposes, as far as I can tell: working on AGI without an explicit focus on Friendliness.

I have seen claims to the contrary from a number of people, from Eliezer himself some years ago to another reply to your comment just now. If SIAI were to officially endorse the position you just suggested, my assessment of their expected utility would increase significantly.

Comment author: Morendil 21 November 2011 01:45:05PM 3 points

Well, SIAI isn't necessarily a homogeneous bunch of people with respect to what they oppose or endorse, but did you look, for instance, at Michael Anissimov's entries on MNT? (I focus on that because it's the topic of Risto's comment, and you seem to see that comment as confirmation of your thesis.) You don't get the impression that he thinks it's a bad idea - quite the contrary: http://www.acceleratingfuture.com/michael/blog/category/nanotechnology/

Here is Eliezer on the SL4 mailing list:

> If you solve the FAI problem, you probably solve the nanotech problem. If you solve the nanotech problem, you probably make the AI problem much worse. My preference for solving the AI problem as quickly as possible has nothing to do with the relative danger of AI and nanotech. It's about the optimal ordering of AI and nanotech.

The Luddites of our times are (for instance) groups like the publishing and music industries; using that label to describe the opinions of people affiliated with SIAI just doesn't make sense, IMO.

Comment author: MichaelAnissimov 06 December 2011 08:22:56PM 1 point

Human-implemented molecular nanotechnology is a bad idea. I just talk about it to draw in people who think it's important. MNT knowledge is a good filter/generator for SL3-and-beyond thinkers.

MNT without friendly superintelligence would be nothing but a disaster.

It's true that SIAI isn't homogeneous though. For instance, Anna is much more optimistic about uploads than I am personally.

Comment author: rwallace 22 November 2011 12:00:41AM 1 point

Thanks for the link - yes, that does seem to be a different opinion (and some very interesting posts).

I agree with you about the publishing and music industries. I consider current rampant abuse of intellectual property law to be a bigger threat than the Singularity meme, sufficiently so that if your comparative advantage is in politics, opposing that abuse probably has the highest expected utility of anything you could be doing.

Comment author: Risto_Saarelma 21 November 2011 09:37:33AM 1 point

Molecular nanotechnology, and anything else that can be weaponized to let a very small group of people effectively kill a very large group, is probably something SIAI-type people would like to see countered by a global sysop scenario from the moment it gets developed.