
Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation

20 [deleted] 18 June 2009 03:09PM

Jamais Cascio writes in The Atlantic:

Pandemics. Global warming. Food shortages. No more fossil fuels. What are humans to do? The same thing the species has done before: evolve to meet the challenge. But this time we don’t have to rely on natural evolution to make us smart enough to survive. We can do it ourselves, right now, by harnessing technology and pharmacology to boost our intelligence. Is Google actually making us smarter? ...

 ... Modafinil isn’t the only example; on college campuses, the use of ADD drugs (such as Ritalin and Adderall) as study aids has become almost ubiquitous. But these enhancements are primitive. As the science improves, we could see other kinds of cognitive-modification drugs that boost recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but end up being used to make us “better than normal.”

Read the whole article here.

This relates to cognitive enhancement as existential risk mitigation, where Anders Sandberg wrote:

Would it actually reduce existential risks? I do not know. But given correlations between long-term orientation, cooperation and intelligence, it seems likely that it might help not just to discover risks, but also in ameliorating them. It might be that other noncognitive factors like fearfulness or some innate discounting rate are more powerful.

The main criticisms of this idea generated in the Less Wrong comments were:

The problem is not that people are stupid. The problem is that people simply don't give a damn. If you don't fix that, I doubt raising IQ will be anywhere near as helpful as you may think. (Psychohistorian)

Yes, this is the key problem that people don't really want to understand. (Robin Hanson)

Making people more rational and aware of cognitive biases would help much more (many people)

These criticisms really boil down to the same thing: people love their cherished falsehoods! Of course, I cannot disagree with this statement. But it seems to me that smarter people have a lower tolerance for making utterly ridiculous claims in favour of their cherished falsehoods, and will (to some extent) be protected from believing silly things that make them (individually) feel happier but are poorly supported by evidence. Case in point: religion. This study1 states that

Evidence is reviewed pointing to a negative relationship between intelligence and religious belief in the United States and Europe. It is shown that intelligence measured as psychometric g is negatively related to religious belief. We find that in a sample of 137 countries the correlation between national IQ and disbelief in God is 0.60.
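The reported figure is an ordinary Pearson product-moment correlation computed across countries. As a minimal sketch of how such a statistic is obtained - using made-up numbers purely for illustration, not the study's actual data:

```python
import numpy as np

# Hypothetical national-level data (illustrative only, not the Lynn et al. figures):
# estimated mean IQ and percentage professing disbelief in God, one pair per country.
iq = np.array([72, 81, 85, 90, 95, 98, 100, 102, 105, 107])
disbelief_pct = np.array([1, 2, 4, 9, 12, 20, 31, 40, 46, 64])

# Pearson product-moment correlation, the statistic the study reports.
r = np.corrcoef(iq, disbelief_pct)[0, 1]
print(round(r, 2))
```

A correlation of 0.60 across 137 countries is large by social-science standards, though it is a country-level (ecological) correlation and says nothing by itself about causation.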

Many people in the comments claimed that making people more intelligent will, due to human self-deceiving tendencies, make them more deluded about the nature of the world. The data concerning religion undermines this hypothesis. There is also direct evidence that a whole list of human cognitive biases are more likely to be avoided by more intelligent people - though far from all (perhaps even far from most?) of them. This paper2 states:

In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency.

Anders Sandberg also suggested the following piece of evidence3 in favour of the hypothesis that increased intelligence leads to more rational political decisions:

Political theory has described a positive linkage between education, cognitive ability and democracy. This assumption is confirmed by positive correlations between education, cognitive ability, and positively valued political conditions (N=183−130). Longitudinal studies at the country level (N=94−16) allow the analysis of causal relationships. It is shown that in the second half of the 20th century, education and intelligence had a strong positive impact on democracy, rule of law and political liberty independent from wealth (GDP) and chosen country sample. One possible mediator of these relationships is the attainment of higher stages of moral judgment fostered by cognitive ability, which is necessary for the function of democratic rules in society. The other mediators for citizens as well as for leaders could be the increased competence and willingness to process and seek information necessary for political decisions due to greater cognitive ability. There are also weaker and less stable reverse effects of the rule of law and political freedom on cognitive ability.

Thus the hypothesis that increasing people's intelligence will make them believe fewer falsehoods and vote for more effective government has at least two pieces of empirical evidence on its side.

 

 


1. Average intelligence predicts atheism rates across 137 nations, Richard Lynn, John Harvey and Helmuth Nyborg, Intelligence, Volume 37, Issue 1.

2. On the Relative Independence of Thinking Biases and Cognitive Ability, Keith E. Stanovich, Richard F. West, Journal of Personality and Social Psychology, 2008, Vol. 94, No. 4, 672–695

3. Relevance of education and intelligence for the political development of nations: Democracy, rule of law and political liberty, Heiner Rindermann, Intelligence, Volume 36, Issue 4

Comments (77)

Comment author: Arenamontanus 18 June 2009 06:28:04PM 11 points [-]

In many debates about cognition enhancement the claim is that it would be bad, because it would produce compounding effects - the rich would use it to get richer, producing a more unequal society. This claim hinges on the assumption that there would be an economic or social threshold to enhancer use, and that it would produce effects that strongly favoured just the individual taking the drug.

I think there is good reason to suspect that enhancement has positive externalities - lower costs due to stupidity, individual benefits that produce tax money, perhaps better governance, cooperation and more great ideas. In fact, it might be that these benefits are more powerful than the individual ones. If everybody got 1% smarter, we would not notice much improvement in everyday life, but the economy might grow a few percent and we would get slightly faster technological development and better governance. That might actually turn the problem into a free rider problem: unless you really want to be smarter, taking the enhancer might be a cost to you (risk of side-effects, for example). So you might want everybody else to take the enhancers, and then reap the benefit without the cost.

Comment author: JulianMorrison 22 June 2009 03:03:12PM 0 points [-]

There's a historical IQ enhancer we can use to look for this effect: food.

Comment author: wuwei 18 June 2009 05:31:25PM *  4 points [-]

I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.

Comment author: HughRistik 18 June 2009 08:26:04PM 4 points [-]

I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline).

Could you elaborate a bit more on why you think this? Are there any historical examples you are thinking of?

Comment author: wuwei 19 June 2009 12:00:27AM *  3 points [-]

To answer your second question: No, there aren't any historical examples I am thinking of. Do you find many historical examples of existential risks?

Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.

Comment author: HughRistik 19 June 2009 05:01:48AM 1 point [-]

Could you answer my first question, too? Which are the intelligent, well-intentioned, and relatively rational humans you are thinking of? Scientists developing nanotech, biotech, and AI? Policy-makers? Who? How would an example disaster scenario unfold in your view?

Are you saying that the very development of nanotech, biotech, and AI would create an elevated level of existential risk? If so, I would agree. A common counter-argument I've heard is that whether we like it or not, someone is going to make progress in at least one of those areas, and that we should try to be the first movers rather than someone less scrupulous.

In terms of safety, using AI as an example:

World with no AI > World where relatively scrupulous people develop an AI > World where unscrupulous people develop an AI

Think about how the world would be if Russia or Germany had developed nukes before the US.

Global nuclear warfare and biological weapons would be the best candidates I can think of.

Intelligence did allow the development of nukes. Yet given that we already have them, global intelligence would probably decrease the risk of them being used.

Let's assume, for the sake of argument, that the mere development of future nanotech, biotech, and AI doesn't go horribly wrong and create an existential disaster. If so, then the existential risk will lie in how these technologies are used.

I will suggest that there is a certain threshold of intelligence greater than ours where everyone is smart enough not to do globally harmful stunts with nuclear weapons, biotech, nanotech, and AI and/or smart enough to create safeguards where small amounts of intelligent crazy people can't do so either. The trick will be getting to that level of intelligence without mishap.

Comment author: HughRistik 19 June 2009 05:47:04AM *  7 points [-]

I was reading the Wikipedia Cuban Missile Crisis article, and it does seem that intelligence helped avert catastrophe. There are multiple points where things could have gone wrong but didn't, due to people being smart enough not to do something rash. I suggest that even greater intelligence might ensure that situations like this never develop, or are resolved safely.

Here are some interesting parts:

That morning, a U-2 piloted by USAF Major Rudolf Anderson, departed its forward operating location at McCoy AFB, Florida, and at approximately 12:00 p.m. Eastern Standard Time, was shot down by an S-75 Dvina (NATO designation SA-2 Guideline) SAM launched from an emplacement in Cuba. The stress in negotiations between the USSR and the U.S. intensified, and only later was it learned that the decision to fire was made locally by an undetermined Soviet commander on his own authority.

If this guy had been smarter, maybe this mistake would never have been made.

We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn't have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn't meet, we'd simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought "Well, it might have been an accident, we won't attack." Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.

Luckily, Kruschev and McNamara were smart enough not to escalate. Their intelligence protected against the risk caused by the stupid Soviet commander.

Arguably the most dangerous moment in the crisis was unrecognized until the Cuban Missile Crisis Havana conference in October 2002, attended by many of the veterans of the crisis, at which it was learned that on October 26, 1962 the USS Beale had tracked and dropped practice depth charges on the B-39, a Soviet Foxtrot-class submarine which was armed with a nuclear torpedo. Running out of air, the Soviet submarine was surrounded by American warships and desperately needed to surface. An argument broke out among three officers on the B-39, including submarine captain Valentin Savitsky, political officer Ivan Semonovich Maslennikov, and chief of staff of the submarine flotilla, Commander Vasiliy Arkhipov. An exhausted Savitsky became furious and ordered that the nuclear torpedo on board be made combat ready. Accounts differ about whether Commander Arkhipov convinced Savitsky not to make the attack, or whether Savitsky himself finally concluded that the only reasonable choice left open to him was to come to the surface.[29]

At the Cuban Missile Crisis Havana conference, Robert McNamara admitted that nuclear war had come much closer than people had thought. Thomas Blanton, director of the National Security Archive, said that "a guy called Vasili Arkhipov saved the world."

Basically, a stupid dude on the sub wanted to use the missile, but a smart dude stopped him.

Yes, existential risk ultimately came from the intelligent developers of nuclear weapons. Yet once the cat was out of the bag, existential risks came from people being stupid, and those risks were counteracted by people being smart. I would expect that more intelligence would be even more helpful in potential disaster situations like this.

The real risk seems to be from weapons developed by smart people falling into the hands of stupid people. Yet if even the stupidest people were smart enough not to play around with mutually assured destruction, then the world would be a safer place.

Comment author: Annoyance 22 June 2009 10:12:59PM 2 points [-]

What relationship does the kind of 'smartness' possessed by the individuals in question have with IQ?

I don't think there are good reasons for thinking they're one and the same.

Comment author: MichaelBishop 22 June 2009 11:27:14PM 1 point [-]

I agree with Annoyance here. My guess is that a higher IQ may help the individuals in the situations HughRistik describes, but this is not the type of evidence we should consider very convincing. In this example, I would guess that differences in the individuals' desire and ability to think through the consequences of their actions are far more important than differences in their IQ. This may be explained by the incentives facing each individual.

Comment author: HughRistik 24 June 2009 12:04:01AM 4 points [-]

In this example, I would guess that differences in the individuals' desire and ability to think through the consequences of their actions are far more important than differences in their IQ.

This may be true, but "ability to think through the consequences of actions" is probably not independent of general intelligence. People with higher g are better at thinking through everything. This is what the research I linked to (and much that I didn't link to) shows.

This graph from one of the articles shows that people with higher IQ are less likely to be unemployed, have illegitimate children, live in poverty, or be incarcerated. These life outcomes seem potentially related to considering consequences and planning for the long-term. If intelligence is related to positive individual life outcomes, then it would be unsurprising if it is also related to positive group or world outcomes.

In the case of avoiding use of nuclear weapons, there is probably only a certain threshold of intelligence necessary. Yet from the historical example of the Cuban Missile Crisis, the thinking involved wasn't always trivial:

We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn't have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn't meet, we'd simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought "Well, it might have been an accident, we won't attack." Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.

Both sides were constantly guessing the reasoning of the other.

In short, we do have reasons to suspect a relationship between intelligence and restraint with existentially risky technologies. People with higher intelligence don't merely have greater "book smarts," they have better cognitive performance in general and better life and career outcomes on an individual level, which may also extrapolate to a group/world level. Will more research be necessary to make us confident in this notion? Of course, but our current knowledge of intelligence should establish it as probable.

Furthermore, people with higher intelligence probably have a better ability to guess the moves of other people with existentially risky technologies and navigate Prisoners' Dilemmas of mutually assured destruction, as we see in the historical example of the Cuban Missile Crisis. We don't have rigorous scientific evidence for this point yet, though I don't think it's a stretch, and hopefully we will never have a large sample size of existential crises.

Comment author: MichaelBishop 24 June 2009 03:43:17AM 0 points [-]

I'm not sure we have serious disagreements on this. Research on intelligence enhancement sounds like a good idea, for many reasons. I'm just choosing to emphasize that there are probably other, much more effective approaches to reducing existential risks, and it's by no means impossible that intelligence enhancement could increase existential risks.

Comment author: Annoyance 23 June 2009 04:26:37PM 0 points [-]

What about the inherent incentive that motivates people even in the absence of strong external factors?

Comment author: MichaelBishop 23 June 2009 09:56:45PM 0 points [-]

I'm not sure I understand you. Are you referring to the distinction between intrinsic and extrinsic motivation?

Comment author: Annoyance 24 June 2009 07:41:08PM 0 points [-]

More like a distinction between different types of intrinsic factors.

Comment author: HughRistik 22 June 2009 11:57:07PM *  0 points [-]

When I said "smartness," I was thinking of general intelligence, the g-factor. As it happens, g does have a high correlation with IQ (0.8 as I recall, though I can't find the source right now). g is a highly general factor related to better performance in many areas including career and general life tasks, not just in academic settings (see p. 342 for a summary of research), so we should hypothesize that nuclear missile restraint is related to g also.

Comment author: conchis 23 June 2009 04:40:33PM *  2 points [-]

As it happens, g does have a high correlation with IQ

Someone who knows the details of this is welcome to correct me if I'm wrong, but as I understand it g is a hypothetical construct derived via factor analysis on the components of IQ tests, so it will necessarily have a high correlation with those tests (provided the results of the components are themselves correlated).

Comment author: Annoyance 23 June 2009 04:48:42PM 1 point [-]

Correct. g is the degree to which performances on various subtypes of IQ tests are statistically correlated - the degree that performance on one predicts performance on another.

It's a very crude concept, and one that has not been reliably identified as being detectable without use of IQ tests, although several neurophysiologic properties have been suggested as indicating g.
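conchis's point - that g is extracted from the subtest scores themselves, so its high correlation with IQ is nearly built in - can be illustrated with a small simulation. This is a toy model with hypothetical factor loadings, not an analysis of real test data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulate a latent general factor and four subtests that each load on it.
g_latent = rng.normal(size=n)
loadings = np.array([0.8, 0.7, 0.75, 0.65])          # hypothetical loadings
noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
subtests = g_latent[:, None] * loadings + noise      # standardized subtest scores

# Extract "g" as the first principal component of the subtest correlations.
corr = np.corrcoef(subtests, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)              # eigenvalues ascending
first_pc = eigvecs[:, -1]                            # component for largest eigenvalue
g_scores = subtests @ first_pc

# Full-scale "IQ" as the simple sum of subtests: the extracted factor
# correlates with it almost by construction (abs() handles sign ambiguity).
iq_composite = subtests.sum(axis=1)
r = abs(np.corrcoef(g_scores, iq_composite)[0, 1])
print(round(r, 3))
```

Since the first principal component is chosen precisely to capture the shared variance of the subtests, a near-perfect correlation with the composite score is expected whenever the subtests intercorrelate, which is conchis's caveat in miniature.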

Comment deleted 18 June 2009 06:07:48PM *  [-]
Comment author: steven0461 18 June 2009 09:52:00PM *  4 points [-]

believing lead in the water supply would decrease existential risks != advocating putting lead in the water supply

Comment author: wuwei 18 June 2009 11:56:09PM *  0 points [-]

If you decreased the intelligence of everyone to 100 IQ points or lower, I think overall quality of life would decrease but that it would also drastically decrease existential risks.

Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned and rational people.

Comment author: steven0461 19 June 2009 12:17:05AM 8 points [-]

If you decreased the intelligence of everyone to 100 IQ points or lower, that would probably eliminate all hope for a permanent escape from existential risk. Risk in this scenario might be lower per time unit in the near future, but total risk over all time would approach 100%.
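The arithmetic behind "total risk over all time would approach 100%" is just geometric decay of survival probability. A sketch with an arbitrary illustrative per-period risk (the 1% figure is made up, not an estimate):

```python
# If per-century extinction risk stays at a constant p with no chance of a
# permanent escape, civilization's survival probability decays geometrically.
p = 0.01  # hypothetical per-century risk, for illustration only

for centuries in [10, 100, 1000]:
    survival = (1 - p) ** centuries
    print(centuries, "centuries: cumulative risk", round(1 - survival, 4))
```

However small the constant per-period risk, the cumulative risk tends to 1; only a scenario that eventually drives the per-period risk toward zero escapes this, which is steven0461's point about needing enough intelligence for a permanent escape.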

Comment deleted 19 June 2009 01:10:00AM [-]
Comment author: Vladimir_Nesov 19 June 2009 04:43:50AM *  1 point [-]

Why do you think it's the nuclear weapons that keep the current peace, and not the memory of past wars, and more generally/recently cultural moral progress? This is related to your prediction in resource depletion scenario.

Comment author: taw 19 June 2009 03:58:37AM 0 points [-]

List of wars by death toll is very interesting.

There's little evidence for theory that threat of global thermonuclear war creates global peace.

  • Even during the world wars, percentage of people who died of violence seems vastly smaller than in typical hunter gatherer societies.
  • There were long periods of peace before, most notably 1815-1914, when military technology was essentially equivalent to that of World War I. Before that, the 18th century was relatively bloodless too.
  • One of the ten deadliest wars happened just a few years ago. So even accepting the premise that the thermonuclear threat prevents war, we face either wide proliferation, or it won't really do much to stop wars.
  • One of the countries with massive nuclear weapons stockpiles suffered total collapse. This might happen again in the future, in near future most likely to Pakistan or North Korea, but in longer term to any country.
  • Countries having nuclear weapons engaged in plenty of conventional wars, mostly on smaller scale, and fought each other by proxy.
Comment author: wuwei 19 June 2009 12:27:49AM 0 points [-]

That's a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.

Comment author: steven0461 19 June 2009 12:37:50AM *  2 points [-]

I don't see why this being an epistemic probe makes risk per near future time unit more relevant than total risk integrated over time.

The whole thing is kind of academic, because for any realistic policy there'd be specific groups who'd be made smarter than others, and risk effects depend on what those groups are.

Comment author: wuwei 18 June 2009 07:08:55PM *  -1 points [-]

You seem to be assuming that the relation between IQ and risk must be monotonic.

I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.

Comment author: Vladimir_Nesov 18 June 2009 05:59:27PM *  4 points [-]

That's a kind of giant cheesecake fallacy. Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn't help you in deciding which of them wins.

Comment author: wuwei 18 June 2009 07:07:41PM *  5 points [-]

And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.

I'm talking about a certain class of humans and not suggesting that they are actually motivated to bring about bad effects. Rather all it takes is for there to be problems where it is significantly easier to mess things up than to get it right.

Comment author: Vladimir_Nesov 18 June 2009 07:15:21PM 0 points [-]

I agree, this doesn't fall clearly under the original concept of giant cheesecake fallacy, but it points to a good non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.

Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like a soldier-argument.

Comment author: HughRistik 18 June 2009 08:25:07PM *  0 points [-]

It's not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:

Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn't help you in deciding which of them wins.

Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating factor against existential risk that intelligence also plays.

Comment author: timtyler 19 June 2009 01:34:25AM 2 points [-]

That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks - and so on.

Comment author: HughRistik 19 June 2009 05:02:24AM *  3 points [-]

This is true. Yet capability to attack isn't the same thing as actually attacking.

Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we have just been lucky so far.

All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. In cases of MAD, attack isn't exactly "easy" when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack in the case of MAD, and nobody so far has that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing Japan was not under MAD).

I propose a study:

The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.

Comment author: cousin_it 19 June 2009 10:12:30AM 1 point [-]

But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won't be bound by MAD.

Comment author: Vladimir_Golovin 19 June 2009 12:13:43PM 2 points [-]

There could be cases when an older-generation technology can be used to assure destruction. Say, if the new tech doesn't prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.

Comment author: loqi 19 June 2009 04:30:12PM 2 points [-]

And notice that it didn't provoke a nuclear war, and the human race still exists. Nuclear weapons weren't an existential threat until multiple parties obtained them. If MAD isn't a concern in using a given weapon, it doesn't sound like much of an existential threat.

Comment author: HughRistik 19 June 2009 07:09:52PM 0 points [-]

This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.

Comment author: saturn 18 June 2009 08:26:28PM 3 points [-]

If we assume that causing risk requires a certain intelligence level and mitigating risks requires a certain (higher) level, changing the distribution of intelligence in a way that enlarges both groups will not, in general, enlarge both by the same factor.
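saturn's point can be made concrete with a toy normal-distribution calculation. The thresholds below are arbitrary illustrations, not estimates: when the whole distribution shifts up, the group above the higher threshold grows by a larger *factor* than the group above the lower one.

```python
from math import erfc, sqrt

def frac_above(threshold_iq, mean=100.0, sd=15.0):
    """Fraction of a normal IQ distribution above a threshold."""
    z = (threshold_iq - mean) / sd
    return 0.5 * erfc(z / sqrt(2))  # upper-tail probability of a normal

# Suppose (purely for illustration) causing a risk takes IQ 130 and
# mitigating it takes IQ 145. Shift the whole distribution up 5 points:
growth_causers = frac_above(130, mean=105) / frac_above(130, mean=100)
growth_mitigators = frac_above(145, mean=105) / frac_above(145, mean=100)
print(round(growth_causers, 2), round(growth_mitigators, 2))
```

Because the normal tail thins out super-exponentially, the rarer (higher-threshold) group always grows by the larger proportion under an upward shift; whether that settles the risk question depends on whether risk scales with absolute numbers or ratios of the two groups.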

Comment author: Vladimir_Nesov 18 June 2009 08:32:59PM *  0 points [-]

Obviously. A coin is also going to land on exactly one of the sides (but you don't know which one). Why do you pronounce this fact?

Comment author: timtyler 19 June 2009 01:36:29AM 1 point [-]

That statement shows a way in which the claim that increasing the number of intelligent people will increase rather than decrease risk might be supported.

Comment author: Eliezer_Yudkowsky 18 June 2009 07:56:34PM 1 point [-]

How the heck is that a giant cheesecake fallacy?

Comment author: Vladimir_Nesov 18 June 2009 08:15:17PM *  2 points [-]

Both are special cases of the following fallacy. A certain factor increases the strength of some possible positive effect, and also the strength of some possible negative effect, with the consequences of these effects taken in isolation being mutually exclusive. An argument is then given that since this factor increases the positive effect (negative effect), the consequences are going to be positive (negative), and therefore the factor itself is instrumentally desirable (undesirable). The argument doesn't recognize the other side of the possible consequences, ignoring the possibility that the opposite effect is going to dominate instead.

Maybe it has another existing name; the analogy seems useful.

Comment author: Eliezer_Yudkowsky 19 June 2009 09:04:19AM 3 points [-]

Giant cheesecake is about the jump from capability to motive, usually in the presence of anthropomorphism or other reasons to assume the preference without thinking.

This sounds more like a generic problem of technophilia (phobia) - mostly just confirmation bias or standard filtering of arguments. It probably does need a name, though, like Appeal to Selected Possibilities or something like that.

Comment author: CronoDAS 19 June 2009 06:54:19AM *  2 points [-]

Many people in the comments made the claim that making people more intelligent will, due to human self-deceiving tendencies, make people more deluded about the nature of the world.

Well, what I meant to say was that we can't take it for granted that making people smarter won't make them more biased, in the absence of data. It might not seem likely to happen, but we can't assign it a probability of "too small to matter" just yet.

(This post does, indeed, contain relevant data that suggests that smarter people believe fewer absurdities...)

Comment author: Arenamontanus 19 June 2009 04:03:45PM 8 points [-]

One bias that I think is common among smart, academically minded people like us is that the value of intelligence is overestimated. I certainly think we have some pretty good objective reasons to believe intelligence is good, but we also add biases because we are a self-selected group with a high "need for cognition" trait, in a social environment that rewards cleverness of a particular kind. In the population at large the desire for more IQ is noticeably lower (and I get far more spam about Viagra than Modafinil!).

If I were on the Hypothetical Enhancement Grants Council, I think I would actually support enhancement of communication and cooperative ability slightly more than pure cognition. More cognitive bang for the buck if you can network a lot of minds.

Comment author: loqi 18 June 2009 06:37:25PM 2 points [-]

Though I lean toward agreeing with the conclusion that increased IQ would mitigate existential risk, I've been somewhat skeptical of the assertions you've previously made to that effect. This post provides some pretty reasonable support for your position.

The statement "Can I find some empirical data showing a correlation between IQ and quality of government" does make me curious about your search strategy, though. Did you specifically look for contrary evidence? Are there any other correlations with IQ (besides the old "more scientists to kill us" argument) that might directly or indirectly contribute to risk, rather than reduce it?

Kudos and karma to anyone who can dig up evidence unambiguously contradicting Roko's hypothesis.

Comment deleted 18 June 2009 08:26:55PM [-]
Comment author: Annoyance 22 June 2009 03:49:39PM 3 points [-]

I did not actively look for contradictory evidence.

I hate to discourage you when you're otherwise doing quite well, but the above is a major, major error.

Due to the human tendency towards confirmation bias, it's vastly important that you try to get a sense of the totality of the evidence, with a heavy emphasis on the evidence that contradicts your beliefs. If you have to prioritize, look for the contradicting stuff first.

Comment deleted 22 June 2009 04:13:51PM [-]
Comment author: Annoyance 22 June 2009 09:08:47PM -2 points [-]

I'd start getting very cautious and go make damn sure I wasn't wrong.

You should be doing that anyway.

But as the situation is ... I am not particularly incentivized to do this

Interesting. Does it bother you that you are not strongly motivated to avoid error?

Comment author: Alicorn 22 June 2009 09:14:31PM *  7 points [-]

There is a legitimate question of what errors are worth the time to avoid. Roko made a perfectly sensible statement - that it's not his top priority right now to develop immense certitude about this proposition, but it would become a higher priority if the answer became more important. It is entirely possible to spend all of one's time attempting to avoid error (less time necessary to eat etc. to remain alive and eradicate more error in the long run); I notice that you choose to spend a fair amount of your time making smart remarks to others here instead of doing that. Does it bother you that you are at certain times motivated to do things other than avoid some possible instances of error?

Comment author: Annoyance 22 June 2009 10:10:48PM -2 points [-]

Positive errors can be avoided by the simple expedient of not committing them. That usually carries very little cost.

Comment author: Alicorn 22 June 2009 10:23:13PM 0 points [-]

I agree completely, but this doesn't seem to be Roko's situation: he's simply not performing the positive action of seeking out certain evidence.

Comment author: Annoyance 23 June 2009 04:31:45PM -2 points [-]

But that action is a necessary part of producing a conclusion.

Holding a belief, without first going through the stages of searching for relevant data, is a positive error - one that can be avoided by the simple expedient of not reaching a conclusion before an evaluation process is complete. That costs nothing.

Asserting a conclusion is costly, in more than one way.

Comment author: thomblake 23 June 2009 04:45:45PM 1 point [-]

Humans hold beliefs about all sorts of things based on little or no thought at all. It can't really be avoided. It might be an open question whether one should do something about unjustified beliefs one notices one holds. And I don't think there's anything inherently wrong with asserting an unjustified belief.

Of course, I'm even using 'unjustified' above tentatively - it would be better to say "insufficiently justified for the context" in which case the problem goes away - certainly seeing what looks like a flower is sufficient justification for the belief that there is a flower, if nothing turns on it.

Not sure which sort of case Roko's is, though.

Comment author: Vladimir_Nesov 23 June 2009 05:50:38PM *  0 points [-]

At each point, you may reach a conclusion with some uncertainty. You expect the conclusion (certainty) to change as you learn more. It would be an error to immediately jump to inadequate levels of certainty, but not to pronounce an uncertain conclusion.

Comment author: curious 18 June 2009 08:40:32PM 2 points [-]

there's also the possibility of causality in the other direction -- that good governance can raise the IQ of a population (through any number of mechanisms -- better nutrition, better health care, better education, etc).

Comment author: Unnamed 18 June 2009 11:34:16PM *  3 points [-]

The study showing a correlation between "IQ" and quality of government (reference 3) estimated IQ based on the performance of public school 4th and 8th graders on standardized tests in math and reading. With that measure, the opposite causal direction seems far more likely: high quality state government leads to better public schools and thus higher test scores (which the author uses as a proxy for IQ).

State IQ was estimated from the National Assessment of Educational Progress (NAEP) standardized tests for reading and math that are administered to a sample of public school children in each of the 50 states. ... State data were available for grades 4 and 8. ... For each year, for each test, the national mean and standard deviation was used to standardize the test to have a mean of 100 and a standard deviation of 15. This standardization places the scores on the typical metric for IQ tests. The means of the standardized reading scores for grades 4 and 8 were averaged across years as were the means of the standardized math scores. State IQ was defined as the average of mean reading and mean math scores.
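As an aside, the construction described in the quoted passage is easy to make concrete. Below is a minimal sketch, assuming toy per-state mean scores (the state names and numbers are invented, not the actual NAEP data):

```python
# A toy version of the "State IQ" construction described above, using
# made-up per-state mean scores rather than the real NAEP data.

def standardize(scores):
    """Rescale raw mean scores to an IQ-style metric (mean 100, SD 15),
    standardizing against the mean and SD across all states."""
    vals = list(scores.values())
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    return {state: 100 + 15 * (v - mean) / sd for state, v in scores.items()}

# Hypothetical mean test scores for three states.
reading = {"A": 230.0, "B": 220.0, "C": 210.0}
math_scores = {"A": 240.0, "B": 235.0, "C": 230.0}

reading_iq = standardize(reading)
math_iq = standardize(math_scores)

# "State IQ" is defined as the average of the standardized reading and
# math scores.
state_iq = {s: (reading_iq[s] + math_iq[s]) / 2 for s in reading}
```

Note that standardizing against the national mean only places the scores on an IQ-like *scale*; it does nothing to make a school achievement test measure the same thing as an IQ test, which is exactly Unnamed's point.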

Comment author: Arenamontanus 19 June 2009 03:55:52PM 1 point [-]

This is why papers like H. Rindermann, Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty, Intelligence, v36 n4 p306-322 Jul-Aug 2008 are relevant. This one looks at lagged data, trying to infer how much effect schooling, GDP and IQ at time t1 affects schooling, GDP and IQ at time t2.

The bane of this type of study is of course the raw scores - how much cognitive ability is actually measured by school scores, surveys, IQ tests or whatever means are used - and whether averages are telling us something important. One could imagine a model where extreme outliers were the real force of progress (I doubt this one, given that IQ does seem to correlate with a lot of desirable things and likely has network effects, but the data is likely not strong enough to rule out an outlier theory).
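The cross-lagged idea is roughly: does IQ at t1 predict outcomes at t2 beyond what the t1 outcomes already predict? A minimal sketch using a first-order partial correlation on synthetic data (all numbers are invented for illustration; Rindermann's actual analysis uses more sophisticated path models):

```python
# Sketch of the cross-lagged logic: does IQ at time t1 carry information
# about GDP at time t2 beyond the persistence of GDP itself?
# All data below is synthetic.

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y with z held fixed (first-order partial)."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / (((1 - rxz**2) * (1 - ryz**2)) ** 0.5)

# Six synthetic nations: IQ at t1, GDP at t1, GDP at t2.
iq_t1 = [85, 90, 95, 100, 105, 110]
gdp_t1 = [10, 20, 12, 22, 14, 24]
gdp_t2 = [5, 16, 12, 24, 16, 29]

# A clearly positive value here suggests IQ at t1 predicts later GDP
# even after controlling for GDP at t1.
print(partial_corr(iq_t1, gdp_t2, gdp_t1))
```

Of course, even a strong lagged partial correlation only weakens the reverse-causation story; it cannot rule out a third factor driving both.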

Comment author: Yvain 18 June 2009 03:42:37PM *  3 points [-]

Really, really, really doubtful that correlations between national IQ and, well, anything prove anything besides that certain countries are generally better off than others. That correlation is probably just differentiating First World countries from Third World countries in general - the First World has better health and education, and also better government. Although I'm agnostic on the existence of racial IQ differences, those aren't what's going on here, considering the wide variation in success of countries with similar races.

Same with IQ versus religion within and between countries: it's probably just an artifact of religion vs. wealth correlations. I scanned those articles and I didn't see anything saying they'd adjusted for it; if there is, then I'll start getting excited.

Comment author: Arenamontanus 18 June 2009 06:18:58PM 4 points [-]

The national/regional IQ literature is messy, because there are so many possible (and even likely) feedback loops between wealth, schooling, nutrition, IQ and GDP. Not to mention the rather emotional views of many people on the topic, as well as the lousy quality of some popular datasets. Lots of clever statistical methods have been used, and IQ seems to retain a fair chunk of explanatory weight even after other factors have been taken into account. Some papers have even looked at staggered data to see if IQ works as a predictor of future good effects, which it apparently does.

Whether it would be best to improve IQ, health or wealth directly depends not just on which has the biggest effect, but also on how easy it is and how the feedbacks work.

Comment author: Drahflow 18 June 2009 04:38:40PM 1 point [-]

Or intelligent people are just better at getting wealthy.

Comment author: Annoyance 18 June 2009 03:20:37PM *  -1 points [-]

For every Voltaire, there are a hundred Newtons, Increase Mathers, and Descartes. And countless Michael Behes.

And that's just religion. There are more sacred cows than just the traditional religions, more golden idols than could be worshiped by a hundred thousand faiths. Human cognition is a sepulchre, white-washed walls concealing corruption within.

“Religion always leads to rhetorical despotism,” Leto said. “Before the Bene Gesserit, the Jesuits were the best at it…. You learn enough about rhetorical despotism from a study of the Bene Gesserit. Of course, they do not begin by deluding themselves with it…. It leads to self-fulfilling prophecy and justifications for all manner of obscenities. (... ) It shields evil behind walls of self-righteousness which are proof against all arguments against the evil..."

Comment author: Cyan 18 June 2009 03:57:38PM *  0 points [-]

Nice Heart of Darkness reference.

Comment author: gwern 18 June 2009 08:44:55PM 0 points [-]

Hm, where's the Conrad ref? I see a God Emperor of Dune ref (Dune seems pretty popular here, I've noticed), but not that.

Comment author: Cyan 18 June 2009 08:58:38PM *  1 point [-]

It's the whited sepulchre thing; it's one of the central themes of Heart of Darkness. (Google tells me the original quote is from Matthew 23:27).

Comment author: aausch 30 June 2009 09:45:28PM 1 point [-]

I am slow and lazy today, so please forgive if I am asking for the obvious:

Do the referenced studies control for the process of acquiring education/intelligence, and test for causality?

It seems that plausible competing hypotheses for the correlation between intelligence and, for example, religious belief are:

  • the process of acquiring intelligence leads to removal of biases, rather than actual possession of intelligence leading to removal of biases. If we change to a different process for acquiring intelligence, we may lose side effects.
  • the process of disposing of religious beliefs leads to a more measurable or noticeable level of intelligence.
  • the process of becoming educated in current education systems (and as a result better exposing existing intelligence aptitude) works at eradicating certain sets of beliefs and biases in students

It seems to me that differentiating between data that support these hypotheses is incredibly hard, and I wonder if the referenced researchers went to the lengths required.

Comment author: aausch 01 July 2009 01:02:03AM 1 point [-]

Doh! I think I missed the obvious.

This problem is related to the problem of producing FAI, according to the terms and assumptions that Eliezer has been using.

I'm willing to bet that making a human, with a broken value system, more intelligent (according to some measure of intelligence based on some kind of increased computational ability of the brain), suffers from much the same kinds of problems that throwing more computing power at an improperly designed AI does.

Comment author: AndrewKemendo 20 June 2009 11:29:05AM 0 points [-]

This comment seems to miss the idea:

What happens if such a complex system collapses? Disaster, of course. But don’t forget that we already depend upon enormously complex systems that we no longer even think of as technological. Urbanization, agriculture, and trade were at one time huge innovations. Their collapse (and all of them are now at risk, in different ways, as we have seen in recent months) would be an even greater catastrophe than the collapse of our growing webs of interconnected intelligence.

If in fact the future is what the rest of the article envisions, a world of accurate measures and prudent predictions, then the possibilities for collapse will become less and less.

Making the case that such largess will of course lead to a linearly increasing probability of damage resulting in collapse ignores a large part, if not the majority, of the science behind cognitive development and AI - risk mitigation and error elimination.

Comment author: Kevin 20 June 2009 03:59:32AM 0 points [-]