wuwei comments on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation - Less Wrong

20 [deleted] 18 June 2009 03:09PM


Comment author: wuwei 18 June 2009 07:07:41PM *  5 points [-]

And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.

I'm talking about a certain class of humans, and I'm not suggesting that they are actually motivated to bring about bad effects. Rather, all it takes is for there to be problems where it is significantly easier to mess things up than to get them right.

Comment author: Vladimir_Nesov 18 June 2009 07:15:21PM 0 points [-]

I agree, this doesn't fall clearly under the original concept of giant cheesecake fallacy, but it points to a good non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.

Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like a soldier-argument.

Comment author: HughRistik 18 June 2009 08:25:07PM *  0 points [-]

It's not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:

Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn't help you in deciding which of them wins.

Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating factor against existential risk that intelligence also plays.

Comment author: timtyler 19 June 2009 01:34:25AM 2 points [-]

That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks - and so on.

Comment author: HughRistik 19 June 2009 05:02:24AM *  3 points [-]

This is true. Yet capability to attack isn't the same thing as actually attacking.

Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we are just lucky so far.

All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. Under MAD, attack isn't exactly "easy" when it ensures your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack under MAD, and nobody so far has had that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing of Japan did not occur under MAD).

I propose a study:

The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.

Comment author: cousin_it 19 June 2009 10:12:30AM 1 point [-]

But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won't be bound by MAD.

Comment author: Vladimir_Golovin 19 June 2009 12:13:43PM 2 points [-]

There could be cases where an older-generation technology can still be used to assure destruction. Say, if the new tech doesn't prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.

Comment author: loqi 19 June 2009 04:30:12PM 2 points [-]

And notice that it didn't provoke a nuclear war, and the human race still exists. Nuclear weapons weren't an existential threat until multiple parties obtained them. If MAD isn't a concern in using a given weapon, it doesn't sound like much of an existential threat.

Comment author: HughRistik 19 June 2009 07:09:52PM 0 points [-]

This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.