wuwei comments on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation - Less Wrong
I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.
Could you elaborate a bit more on why you think this? Are there any historical examples you are thinking of?
To answer your second question: No, there aren't any historical examples I am thinking of. Do you find many historical examples of existential risks?
Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.
Could you answer my first question, too? Which are the intelligent, well-intentioned, and relatively rational humans you are thinking of? Scientists developing nanotech, biotech, and AI? Policy-makers? Who? How would an example disaster scenario unfold in your view?
Are you saying that the very development of nanotech, biotech, and AI would create an elevated level of existential risk? If so, I would agree. A common counter-argument I've heard is that whether we like it or not, someone is going to make progress in at least one of those areas, and that we should try to be the first movers rather than someone less scrupulous.
In terms of safety, using AI as an example:
World with no AI > World where relatively scrupulous people develop an AI > World where unscrupulous people develop an AI
Think about how the world would be if Russia or Germany had developed nukes before the US.
Intelligence did allow the development of nukes. Yet given that we already have them, global intelligence would probably decrease the risk of them being used.
Let's assume, for the sake of argument, that the mere development of future nanotech, biotech, and AI doesn't go horribly wrong and create an existential disaster. If so, then the existential risk will lie in how these technologies are used.
I will suggest that there is a certain threshold of intelligence, greater than ours, above which everyone is smart enough not to pull globally harmful stunts with nuclear weapons, biotech, nanotech, and AI, and/or smart enough to create safeguards so that small numbers of intelligent crazy people can't do so either. The trick will be getting to that level of intelligence without mishap.
I was reading the Wikipedia Cuban Missile Crisis article, and it does seem that intelligence helped avert catastrophe. There are multiple points where things could have gone wrong but didn't, due to people being smart enough not to do something rash. I suggest that even greater intelligence might ensure that situations like this never develop, or are resolved safely when they do.
Here are some interesting parts:
If this guy had been smarter, maybe this mistake would never have been made.
Luckily, Khrushchev and McNamara were smart enough not to escalate. Their intelligence protected against the risk caused by the stupid Soviet commander.
Basically, a stupid dude on the sub wanted to use the missile, but a smart dude stopped him.
Yes, existential risk ultimately came from the intelligent developers of nuclear weapons. Yet once the cat was out of the bag, existential risks came from people being stupid, and those risks were counteracted by people being smart. I would expect that more intelligence would be even more helpful in potential disaster situations like this.
The real risk seems to be from weapons developed by smart people falling into the hands of stupid people. Yet if even the stupidest people were smart enough not to play around with mutually assured destruction, then the world would be a safer place.
What relationship does the kind of 'smartness' possessed by the individuals in question have with IQ?
I don't think there are good reasons for thinking they're one and the same.
I agree with Annoyance here. My guess is that a higher IQ may help the individuals in the situations HughRistik describes, but this is not the type of evidence we should consider very convincing. In this example, I would guess that differences in the individuals' desire and ability to think through the consequences of their actions are far more important than differences in their IQ. This may be explained by the incentives facing each individual.
This may be true, but "ability to think through the consequences of actions" is probably not independent of general intelligence. People with higher g are better at thinking through everything. This is what the research I linked to (and much that I didn't link to) shows.
This graph from one of the articles shows that people with higher IQ are less likely to be unemployed, have illegitimate children, live in poverty, or be incarcerated. These life outcomes seem potentially related to considering consequences and planning for the long-term. If intelligence is related to positive individual life outcomes, then it would be unsurprising if it is also related to positive group or world outcomes.
In the case of avoiding use of nuclear weapons, there is probably only a certain threshold of intelligence necessary. Yet from the historical example of the Cuban Missile Crisis, the thinking involved wasn't always trivial:
Both sides were constantly guessing the reasoning of the other.
In short, we do have reasons to suspect a relationship between intelligence and restraint with existentially risky technologies. People with higher intelligence don't merely have greater "book smarts," they have better cognitive performance in general and better life and career outcomes on an individual level, which may also extrapolate to a group/world level. Will more research be necessary to make us confident in this notion? Of course, but our current knowledge of intelligence should establish it as probable.
Furthermore, people with higher intelligence probably have a better ability to guess the moves of other people with existentially risky technologies and navigate Prisoners' Dilemmas of mutually assured destruction, as we see in the historical example of the Cuban Missile Crisis. We don't have rigorous scientific evidence for this point yet, though I don't think it's a stretch, and hopefully we will never have a large sample size of existential crises.
I'm not sure we have serious disagreements on this. Research on intelligence enhancement sounds like a good idea, for many reasons. I'm just choosing to emphasize that there are probably other, much more effective approaches to reducing existential risks, and it's by no means impossible that intelligence enhancement could increase existential risks.
What about the inherent incentive that motivates people even in the absence of strong external factors?
I'm not sure I understand you. Are you referring to the distinction between intrinsic and extrinsic motivation?
More like a distinction between different types of intrinsic factors.
When I said "smartness," I was thinking of general intelligence, the g-factor. As it happens, g does have a high correlation with IQ (0.8 as I recall, though I can't find the source right now). g is a highly general factor related to better performance in many areas including career and general life tasks, not just in academic settings (see p. 342 for a summary of research), so we should hypothesize that nuclear missile restraint is related to g also.
Someone who knows the details of this is welcome to correct me if I'm wrong, but as I understand it g is a hypothetical construct derived via factor analysis on the components of IQ tests, so it will necessarily have a high correlation with those tests (provided the results of the components are themselves correlated).
Correct. g is the degree to which performances on various subtypes of IQ tests are statistically correlated - the degree that performance on one predicts performance on another.
It's a very crude concept, and one that has not been reliably identified as being detectable without use of IQ tests, although several neurophysiologic properties have been suggested as indicating g.
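The derivation described above can be sketched numerically. Below is an illustrative simulation, not anything from the thread: the six-subtest setup, the noise level, and the use of the first principal component as a stand-in for factor analysis are all invented assumptions. It shows why a factor extracted from correlated subtests necessarily correlates highly with the composite score built from those same subtests.

```python
import numpy as np

# Hypothetical setup: each subtest score is a shared latent factor "g"
# plus independent subtest-specific noise. All parameters are made up.
rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)
subtests = np.column_stack(
    [g + rng.normal(scale=0.8, size=n) for _ in range(6)]
)

# Standardize, then approximate factor extraction with the first
# principal component of the correlation structure.
z = (subtests - subtests.mean(axis=0)) / subtests.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
first_pc = z @ eigvecs[:, -1]  # scores on the largest component

# The extracted factor correlates almost perfectly with the total score,
# because both are built from the same correlated subtests.
total = z.sum(axis=1)
r = abs(np.corrcoef(first_pc, total)[0, 1])
print(r)
```

This is the point being made: the high g–IQ correlation is partly baked in by construction, since g is derived from the very subtests that compose IQ.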
believing lead in the water supply would decrease existential risks != advocating putting lead in the water supply
If you decreased the intelligence of everyone to 100 IQ points or lower, I think overall quality of life would decrease but that it would also drastically decrease existential risks.
Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned and rational people.
If you decreased the intelligence of everyone to 100 IQ points or lower, that would probably eliminate all hope for a permanent escape from existential risk. Risk in this scenario might be lower per time unit in the near future, but total risk over all time would approach 100%.
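The "total risk over all time" point can be made concrete with a toy calculation (the per-period probability and period counts are made up): even a small constant risk per time unit compounds toward certainty as the number of periods grows, which is the sense in which risk "would approach 100%" absent a permanent escape.

```python
# Toy model: probability of at least one catastrophe over many periods,
# given an independent constant per-period risk. Numbers are invented.
def total_risk(per_period: float, periods: int) -> float:
    """P(at least one catastrophe) = 1 - P(no catastrophe in any period)."""
    return 1 - (1 - per_period) ** periods

low = total_risk(0.001, 10)          # small risk over a short horizon
long_run = total_risk(0.001, 10_000)  # same per-period risk, long horizon
print(low, long_run)
```

Under these assumptions, the near-term figure stays small while the long-horizon figure is effectively 1, illustrating why per-time-unit risk and risk integrated over all time can point in opposite directions.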
Why do you think it's nuclear weapons that keep the current peace, rather than the memory of past wars and, more generally and recently, cultural moral progress? This is related to your prediction in the resource depletion scenario.
List of wars by death toll is very interesting.
There's little evidence for the theory that the threat of global thermonuclear war creates global peace.
That's a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.
I don't see why this being an epistemic probe makes risk per near future time unit more relevant than total risk integrated over time.
The whole thing is kind of academic, because for any realistic policy there'd be specific groups who'd be made smarter than others, and risk effects depend on what those groups are.
You seem to be assuming that the relation between IQ and risk must be monotonic.
I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.
That's a kind of giant cheesecake fallacy. Capability increases the risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing that both factors increase in capability doesn't help you decide which of them wins.
And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.
I'm talking about a certain class of humans, and not suggesting that they are actually motivated to bring about bad effects. Rather, all it takes is for there to be problems where it is significantly easier to mess things up than to get them right.
I agree, this doesn't fall clearly under the original concept of giant cheesecake fallacy, but it points to a good non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.
Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like a soldier-argument.
It's not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:
Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating role that intelligence plays against existential risk.
That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks - and so on.
This is true. Yet capability to attack isn't the same thing as actually attacking.
Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we have just been lucky so far.
All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. In cases of MAD, attack isn't exactly "easy" when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack in the case of MAD, and nobody so far has that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing Japan was not under MAD).
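The claim that attack isn't "easy" under MAD can be sketched as a toy game (all payoff numbers are invented for illustration): if retaliation guarantees mutual destruction, then restraint is a best response no matter what the opponent does, so only a player who fails to reason about consequences attacks.

```python
# Toy payoff matrix for the MAD structure described above.
# All numbers are invented; only their ordering matters.
payoffs = {
    ("restrain", "restrain"): (0, 0),        # status quo
    ("restrain", "attack"):   (-100, -100),  # retaliation destroys both
    ("attack",   "restrain"): (-100, -100),
    ("attack",   "attack"):   (-100, -100),
}

def best_response(opponent_move: str) -> str:
    """Pick our move maximizing our payoff against a fixed opponent move."""
    return max(("restrain", "attack"),
               key=lambda m: payoffs[(m, opponent_move)][0])

print(best_response("restrain"), best_response("attack"))
```

Under these assumptions restraint is (weakly) dominant, which is the sense in which MAD functions as the "IQ test" described above: passing requires only recognizing the dominance.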
I propose a study:
The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.
But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won't be bound by MAD.
There could be cases when an older-generation technology can be used to assure destruction. Say, if the new tech doesn't prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.
And notice that it didn't provoke a nuclear war, and the human race still exists. Nuclear weapons weren't an existential threat until multiple parties obtained them. If MAD isn't a concern in using a given weapon, it doesn't sound like much of an existential threat.
This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.
If we assume that causing risk requires a certain intelligence level and mitigating risks requires a certain (higher) level, changing the distribution of intelligence in a way that enlarges both groups will not, in general, enlarge both by the same factor.
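A quick numerical illustration of this point (the thresholds, mean shift, and normality assumption are all invented): shifting a normal IQ distribution upward multiplies the population above a lower "cause risk" threshold and above a higher "mitigate risk" threshold by quite different factors.

```python
from math import erf, sqrt

# Fraction of a normal(mean, sd) population above a threshold,
# using the standard normal CDF via erf. Parameters are hypothetical.
def frac_above(threshold: float, mean: float, sd: float = 15.0) -> float:
    return 0.5 * (1 - erf((threshold - mean) / (sd * sqrt(2))))

cause, mitigate = 130.0, 160.0  # invented thresholds for the two groups

before_cause = frac_above(cause, 100)    # risk-causers, before the shift
after_cause = frac_above(cause, 110)     # risk-causers, mean raised by 10
before_mit = frac_above(mitigate, 100)   # risk-mitigators, before
after_mit = frac_above(mitigate, 110)    # risk-mitigators, after

print(after_cause / before_cause, after_mit / before_mit)
```

With these made-up numbers the two tails grow by clearly different factors, which is exactly the claim: enlarging both groups does not enlarge them proportionally, so the net effect on risk depends on the details of the distribution.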
Obviously. A coin is also going to land on exactly one of its sides (but you don't know which one). Why do you pronounce this fact?
That statement shows a way in which the claim that increasing the number of intelligent people will increase rather than decrease risk might be supported.
How the heck is that a giant cheesecake fallacy?
Both are special cases of the following fallacy. A certain factor increases the strength of some possible positive effect, and also the strength of some possible negative effect, with the consequences of these effects taken in isolation being mutually exclusive. An argument is then given that since this factor increases the positive effect (negative effect), the consequences are going to be positive (negative), and therefore the factor itself is instrumentally desirable (undesirable). The argument doesn't recognize the other side of the possible consequences, ignoring the possibility that the opposite effect is going to dominate instead.
Maybe it has another existing name; the analogy seems useful.
Giant cheesecake is about the jump from capability to motive, usually in the presence of anthropomorphism or other reasons to assume the preference without thinking.
This sounds more like a generic problem of technophilia (or technophobia): mostly just confirmation bias or standard filtering of arguments. It probably does need a name, though, like Appeal to Selected Possibilities or something like that.