Here at Less Wrong, the Future of Humanity Institute, and the Singularity Institute, a recurring theme is trying to steer the future of the planet away from disaster. Often, the best way to avert a particular disaster is hard for ordinary people to understand, because it requires thinking through an argument in a cool, unemotional way; more often than not the best solution gets lost in a mass of low signal-to-noise squabbling and emoting. Whatever the substance of the debate, the overall meta-problem is well captured by this entry from this month's rationality quotes:

"People are mostly sane enough, of course, in the affairs of common life: the getting of food, shelter, and so on. But the moment they attempt any depth or generality of thought, they go mad almost infallibly.

Attempting to tackle the meta-problem of getting people to be slightly less mad when it comes to abstract or general thought, especially public policy, is a tempting option. Robin Hanson's futarchy proposal is one way to combat this madness (which it does by removing most people from the policymaking loop). Another important route to combating human idiocy, however, is to find technologies that make humans smarter. Nick Bostrom proposed that we should work hard on finding ways to enhance the cognition of research scientists, because even a small increase in the average intelligence of research scientists would increase research output by a large amount, since there are lots of scientists. But improving the decisionmaking process of our society would probably have an even more profound effect: if we could improve the intelligence of the average voter by about one standard deviation, it is easy to speculate that the political decisionmaking process would work much better. For example, understanding simple logical arguments and simple quantitative analyses stretches the capabilities of someone at IQ 100, so an across-the-board increase in IQ would plausibly produce a large marginal increase in the probability that a politician is incentivized to build their campaign around a logical argument rather than an emotionally appealing slander.

As a concrete example, consider the initial US reaction to rising oil prices and the need for US-produced energy: pushing corn ethanol, because a strong farming lobby liked the idea of having extra revenue. Now, if the *average voter* could understand the concept of photosynthetic efficiency, and could follow a simple numerical calculation showing how inefficient corn is at converting solar energy into stored energy in ethanol, this policy choice would have been dead in the water. But the average voter cannot do simple physics, whereas they can understand the emotional appeal of "support our local farmers!". Even today, there are still politicians who defend corn ethanol because they want to pander to local interest groups. Another concrete example is some of the more useless responses that the UK public has been engaging in - and being encouraged to engage in - to prevent global warming. People were encouraged to unplug their mobile phone chargers when the chargers weren't being used. David MacKay had to wage a personal war against such idiocy - see this Guardian article. The universal response to my criticism of people who advocated this was "it all adds up!". As MacKay puts it:

There's a lack of numeracy in the public discussion of energy. Where people do use numbers, they select them to sound big and score points in arguments, rather than to aid thoughtful discussion.
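To make the corn-ethanol arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The insolation, yield and energy-content figures are round numbers I am assuming for illustration, not careful measurements; only the order of magnitude matters.

```python
# Rough estimate: what fraction of the sunlight falling on a corn field ends
# up as chemical energy in the ethanol? All constants are assumed round
# values for illustration only.

AVG_INSOLATION_W_PER_M2 = 200.0    # year-round average solar power (assumed)
ACRE_M2 = 4047.0                   # square metres in one acre
SECONDS_PER_YEAR = 3.15e7

ETHANOL_GAL_PER_ACRE_YEAR = 400.0  # ~150 bushels/acre * ~2.7 gal/bushel, rounded
ETHANOL_MJ_PER_GAL = 84.0          # roughly 80,000 BTU per gallon

solar_in_MJ = AVG_INSOLATION_W_PER_M2 * ACRE_M2 * SECONDS_PER_YEAR / 1e6
fuel_out_MJ = ETHANOL_GAL_PER_ACRE_YEAR * ETHANOL_MJ_PER_GAL

print(f"Solar energy onto one acre per year: {solar_in_MJ:.3g} MJ")
print(f"Energy in the ethanol from that acre: {fuel_out_MJ:.3g} MJ")
print(f"Solar-to-ethanol efficiency: {fuel_out_MJ / solar_in_MJ:.2%}")
# Comes out around 0.1%, and that is before subtracting the fossil energy
# spent on farming, fertiliser and distillation.
```

Nothing in that calculation is beyond school arithmetic, yet on its own it is enough to sink the policy argument.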

Toby Ord has a project on efficient charity; he has worked out that outcomes per dollar for alleviating human suffering in Africa can vary by three orders of magnitude between charities. But most people in the developed world don't know what an "order of magnitude" is, or why it is a useful concept. The efficient-charity example suggests that the derivative

d(Outcomes)/d(Average IQ)

may be extremely large, and may be subject to powerful threshold effects. In this case, there is probably an average IQ threshold above which the average person can easily understand the concept of efficient charity, and thus all the money gets given to the most efficient charities, and the amount of suffering-alleviation in Africa goes up by a factor of 1000, even though the average IQ of the donor community may only have jumped from 100 to 140, say.
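To illustrate the kind of threshold effect I have in mind, here is a toy sketch. The logistic shape, the threshold at IQ 120 and the steepness are made-up parameters, not estimates; the point is only that the derivative can be near zero over most of the range and very large near a threshold.

```python
import math

def donation_efficiency(avg_iq, threshold=120.0, steepness=0.3):
    """Toy logistic model (assumed, not fitted): fraction of donations that
    reach the most efficient charities, as a function of average donor IQ."""
    return 1.0 / (1.0 + math.exp(-steepness * (avg_iq - threshold)))

def outcomes(avg_iq, best_to_worst_ratio=1000.0):
    """Suffering-alleviation per dollar, normalised so that fully inefficient
    giving = 1 and fully efficient giving = 1000 (Ord's three orders of
    magnitude)."""
    f = donation_efficiency(avg_iq)
    return 1.0 + (best_to_worst_ratio - 1.0) * f

for iq in (100, 110, 120, 130, 140):
    print(iq, round(outcomes(iq)))
# Nearly flat below the assumed threshold, then a steep rise through it:
# d(Outcomes)/d(Average IQ) is tiny at IQ 100 and huge around IQ 120.
```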

It may well be the case that finding a cognitive enhancer suitable for general use is the best way to tackle the diverse array of risks we face. People with enhanced IQ would also probably find it easier (and be more willing) to absorb material on cognitive biases; to see this, try to explain the concept of "cognitive biases" to someone who is unlucky enough to be of below average IQ, and then go and explain it to someone who is smarter than you. It is certainly the case that even people of below average IQ *do sometimes*, in favourable circumstances, take note of quantitative rational arguments, but in the maelstrom of politics such quantitative analyses get eaten alive by more emotive arguments like "SUPPORT OUR FARMERS!" or "SUPPORT OUR TROOPS!" or "EVOLUTION IS ONLY A THEORY!" or "IT ALL ADDS UP!".

244 comments

I think that most people who do not have severe cognitive deficiencies are capable of understanding what "efficient charities" are. I think that most people are quite capable of understanding the statement, "Ethanol will waste a lot of money and will still generate as much pollution as gasoline (or more). To top it off, it will also raise the price of food products, both for you and for people who will actually starve as a result." For most issues like this, one can figure out what's going on by reading Wikipedia for half an hour. Perhaps that takes a high IQ, but from my experience, when people are given clear and accurate arguments, they are generally capable of getting them. The problem is that they never bother seeking out decent arguments. They either just don't care, or they seek out arguments that support whatever their beliefs happen to be.

In other words, the problem is not that people are stupid. The problem is that people simply don't give a damn. If you don't fix that, I doubt raising IQ will be anywhere near as helpful as you may think.

I think that most people who do not have severe cognitive deficiencies are capable of understanding what "efficient charities" are

I have specific empirical evidence against this point from attempting to convince people on Facebook Causes to instead support more efficient causes. I am considering a top-level post on it.

7Mike Bishop15y
More detail would be wonderful!
5RobinHanson15y
Well, be sure to clarify whether they were really trying to understand. People who do not want to understand can look a lot like people unable to understand.
1Roko15y
How does one check for this? If someone has stated goal X and is performing action A, and I provide rock solid evidence that action B will better achieve that goal, and I am told to "shut up" etc, do we classify this as can't or won't?
0Psychohistorian15y
Does this mean they don't understand, they don't care, or they don't share your utility function? The fact that they disagree does not mean they don't understand.
0Roko15y
I was criticising them for behaving in a way that failed to optimally fulfill their goals. I may post some of the most classic responses.
3Roko15y
If a less intelligent person is presented with correct and only correct arguments, they may have a higher probability of voting in accordance with them. But often in reality they will be presented with "fake" arguments, especially by naughty politicians or religious leaders - for example, arguments like "evolution is only a theory" that are specifically designed to be persuasive without being true. Intelligence is required to tell the difference.
-1Annoyance15y
Intelligence, while useful, isn't what's required in that scenario. Skepticism, curiosity, and intellectual integrity are.
2Roko15y
I claim that intelligence - specifically IQ - helps people to tell the difference between sophistry and genuine arguments. This seems reasonable to me.
2Eliezer Yudkowsky15y
I'm not sure this is correct. I might endorse a statement to the effect that fluid g is necessary in order to learn the more advanced skills of distinguishing sophistry, but very few people, high-g or otherwise, actually learn such a skill.
0Roko15y
It is probably possible for a less intelligent person to learn (to some extent) to distinguish sophistry from solid argument. But it is much easier and comes more naturally for a smart person. The amount of mental determination required to do a task decreases as the task becomes easier; the amount of ability required to perform a task decreases as the determination to succeed increases. I suspect that it will be easier to persuade people to take a pill that makes them smarter than to persuade them to spend months or years studying critical thinking.
-3Annoyance15y
No, no, no! It's harder and more difficult for a smart person to learn this, because they're so much better at producing clever rationalizations to explain away their cognitive dissonance. The person it's easiest to fool is yourself, and the more IQ you have, the better you are at coming up with really convincing stupidity. Recognizing valid reasoning and forcing yourself to adhere to the standards that define it requires something IQ doesn't measure.
-5Annoyance15y
1RobinHanson15y
Yes, this is the key problem that people don't really want to understand. That is the problem futarchy is intended to solve.
1komponisto15y
Agree. Or, one might say: the problem is not so much one of intelligence as one of (surprise!) rationality.
2CronoDAS15y
Ditto... although being ill-informed can't help either. I once heard a certain political figure speak at a university. He said that when he gave speeches in areas in which the majority supported his political party, explaining what problems he was trying to solve, they would simply react as a supportive audience - but when he gave speeches in areas where his party was unpopular, they also approved of him, saying that they were horrified and angry because nobody had ever told them about this problem before. He concluded by saying that a Republican is a Democrat who doesn't know what's going on. More disturbingly, giving people a list of falsehoods often causes them to later remember those falsehoods as true. (See also this Eliezer post.)
0Roko15y
Sure, but I will address rationality separately. Consider, though, how hard it is and how long it takes to inform even a smart person about human biases.
-1SoullessAutomaton15y
As an aside, given that the pollutant du jour is atmospheric carbon, it's worth noting that burning ethanol is essentially carbon neutral. Ethanol also means not being dependent on foreign nations run by crazy religious whackjobs. Not that corn ethanol is a good idea overall, but it does have points to recommend it vs. petroleum. It just has... a lot of points to disrecommend it in general.
0Jack15y
Thing is, liquid fuel (oil or otherwise) is traded on a mostly open, global market. So the price of ethanol and gasoline are both dictated to a large extent by the supply produced by OPEC. So producing ethanol can only make us "independent" in the sense that it makes OPEC/Venezuelan/Russian oil a smaller fraction of global energy production. But as long as those sources constitute a decent fraction of global production (and you can't grow enough ethanol to change that) they can still drive up prices on a whim. I also think the jury is still out on the carbon neutrality thing (at least as far as I can tell surveying the research) - especially if you were already growing corn or had to chop down a rain forest to get your farm land.
-1SoullessAutomaton15y
Seriously? Do we actually trade fuel ethanol? Why would anyone want it, and if they did, how could we compete with Brazil's sugar cane ethanol? How odd. My impression is that the only reason corn ethanol is as cheap as gas is because of huge government subsidies to the corn growers. Perhaps the whole situation is even sillier than I realized. Also, I think the main idea of "energy independence" is that, if it came down to it, we could switch to completely internal energy sources and tell the rest of the world to go shove it. It'd be a diplomatic stick to beat people with in the sense of "we don't need you", not an otherwise significant on-going economic factor. The carbon in the ethanol was extracted from the atmosphere in the past couple years, burning it releases it back. That's pretty much the definition of carbon-neutral. Pretty much the same should be the case for anything else you do with the corn, including eating it. But, you're right, there are other considerations. Chopping down rainforest is absolutely not carbon-neutral.
1Jack15y
My point is just that the whole energy independence thing is a red herring since energy is traded on an open market. If we suddenly had to depend only on energy produced in the U.S. the resulting price increases would be prohibitive for everyone but the military (and we already have strategic oil reserves). The whole concept of energy independence is a political cudgel to turn energy politics into security politics by taking advantage of people's mercantilist intuitions about resources. But in a global free market those intuitions are wrong. Strictly speaking you don't get to decide where your energy comes from. Often it is cheaper to get it from nearby sources but it is all part of the same pricing system. So yeah, increased domestic production might make us more independent in the sense that foreign countries won't have quite the same ability to knock prices up, but there isn't a magic line where suddenly we're "independent". Let's say we have 100 oil. 50 of it is produced in the Middle East and the U.S. uses 40 but currently produces 20. The remaining 30 is produced by other countries. An OPEC embargo leaves us with 50 oil, about doubling the price (for convenience; supply curves are usually exponential, but I'm not an economist and don't know what the price would really do beyond going up, a lot). If the U.S. increases production to 40, prices pre-embargo will be lower (since we have a supply of 120) but prices will still increase by a similar amount when we lose 50 of that 120 supply, leaving us with only 70 oil. If the whole world turned against us prices would triple (according to our invented price model) as we would have only 40 in supply where once we had 120. In order to make it so the rest of the world really couldn't affect us we'd have to be producing a preponderance of the world's energy such that foreign embargoes would only slightly affect total supply. Either that or we embargo foreign energy imports. But we'd be independent like chopping off your le
-1SoullessAutomaton15y
You are. I obviously haven't thought the issue through clearly and it's not something I care deeply about, since ethanol is silly for many other, larger reasons, so I won't waste your time with further questions. Definitely moving the whole "energy independence" thing to the "I have no idea" column for now.
0saturn15y
In short, we're "independent" when the cost of importing fossil fuels exceeds the cost of domestic production. The two ways this can happen are drastic technological improvements, and subsidies/tariffs on fuel resulting in economically wasteful overproduction. In the US we tend to give lip service to the former while actually implementing the latter.

If you are surrounded by money pumps, is it rational to bet with them, or correct their functioning?

0Roko15y
This is a good point. Instead of worrying about existential risks, I could join a good startup team/start my own startup and aim to make lots of money. However, what would happen if the world got through all of these challenges to a positive post-singularity world? People would ask questions such as: "why did you not put effort into existential risk mitigation when you knew the dangers?" I would ask that question. So the answer to the question "is it rational" depends upon your goals.
2JamesAndrix15y
That's not quite what I meant, but I do believe Eliezer has advised anyone not working on 'risks' (or FAI?) to make as much money as they can and contribute to an organization that is. What I meant was that given a money pump, the straightforward thing to do is to pump it, not fix it in the hope that it will somehow benefit humanity. It seems to me that on LW, most believe that people are irrational in ways that should make people money pumps, but the reaction to this is to make extreme efforts to persuade people of things. Improving someone's intelligence or rationality is difficult if they're not already looking, but channeling away some of their funds or political capital will lessen the impact their irrationality can have.
5orthonormal15y
I think that most people are (unconsciously) rational enough to avoid being financially exploited in any but the obvious known ways (lotteries, advertising affecting preferences, etc) for which there's already a competitive market or regulation. Aside from those avenues, most people are too wary of being cheated to make good money pumps. What's not being covered as much are the ways that people irrationally contribute to the destruction of wealth and utility, by voting for ineffective or detrimental policies or by spreading harmful memes. (People don't typically have an evolved horror of doing these things.) A push towards rationality on those fronts can help everyone. If you disagree, do you have a particularly effective money-pump in mind? (Of course you might hesitate to share it.)
-4Annoyance15y
So he's chosen the "work the money pumps" option, then, rather than trying to correct them.
1JamesAndrix15y
If you believe that he believes that this is not the best use of that person's anti-risk time, then yes. But I think he's sincerely looking for the best way to mitigate those risks.

I do not know if this strategy would apply in the Western world; but in Africa, I think much could be gained simply by nutritional intervention. IQ is, to a good approximation, 50% genetic; much of the environmental effect is childhood nutrition; it follows that widespread distribution of vitamins might have a nice effect over time, in addition to the more usual benefits. It also seems possible that this might work in those strata of the American population that subsist mainly on fast food, although I expect the effect would be less - likely there are already patchwork government programs that distribute vitamins to the poor.

1Arenamontanus15y
Yes, in many places nutrition is a low-hanging fruit. My own favorite example is iodine supplementation, http://www.practicalethicsnews.com/practicalethics/2008/12/the-perfect-cog.html but vitamins, long-chained fatty acids and simply enough nutrients to allow full development are also pretty good. There is some debate about how much of the Flynn effect of increasing IQ scores is due to nutrition (probably not all, but likely a good chunk). It is an achievable way of enhancing people without triggering the normal anti-enhancement opinions. The main problem is that it is pretty long-term. The infants we save today will be making their mark two or more decades hence - they will not help us much with the problems we face before then. But this is a problem for most kinds of biological enhancement; developing it and getting people to accept it will take time. That is why gadgets are important - they diffuse much more rapidly.
-1Roko15y
Upvoted for comparing fat Americans with third world destitutes

I suppose the question is not whether it would be good, but rather how. Some quick brainstorming:

  • I think people are "smarter" now than they were, say, pre-scientific-method. So there may be more trainable ways-of-thinking that we can learn (for example, "best practices" for qualitative Bayesianism)

  • Software programs for individuals. Oh, maybe when you come across something you think is important while browsing the web you could highlight it and these things would be presented to you occasionally sort of like a "drill" to

... (read more)
4gwern15y
Congratulations, you've nearly reinvented spaced repetition! There is a great deal of writing on spaced repetition flashcard systems, so I won't inflict upon you my own writings; but the Wikipedia article will link you to the main programs (Anki, Mnemosyne, and SuperMemo) and some writeups of the topic. SR is a great technique; I love it dearly. Well, you could just improve your working memory. Unusually, working memory is plastic enough to be trainable by WM tasks. The WM exercise I'm most familiar with is Dual n-back. I practice it, but while I have noticed improvements, I'm unsure whether they repay the time I've put into it; SR systems have proven themselves as far as I'm concerned, but the jury is still out on dual n-back. Now that sounds interesting. But looking at this OpenCog link doesn't give me a good idea as to what PLN might do for note-taking (or really, in general); did you have any use-cases or examples?
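For readers who have not used these programs, here is a rough sketch of the kind of scheduling rule they are built around. It follows the published SM-2 algorithm that SuperMemo introduced and that Anki and Mnemosyne descend from, but the constants and details below are illustrative; each program tweaks them, so treat this as a sketch rather than any program's actual code.

```python
def sm2_update(ease, interval_days, repetition, quality):
    """One review step of an SM-2-style spaced repetition scheduler.
    quality: self-graded recall from 0 (total blackout) to 5 (perfect).
    Returns the updated (ease, interval_days, repetition)."""
    if quality < 3:
        return ease, 1, 0  # failed recall: relearn, keep the ease factor
    if repetition == 0:
        interval_days = 1
    elif repetition == 1:
        interval_days = 6
    else:
        interval_days = round(interval_days * ease)
    # Ease drifts up after easy reviews, down after hard ones, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return ease, interval_days, repetition + 1

# A card reviewed with grades 4, 5, 3, 5 gets intervals of 1, 6, 16, 39 days:
state = (2.5, 0, 0)
for q in (4, 5, 3, 5):
    state = sm2_update(*state, q)
    print(state)
```

The exponentially growing intervals are the whole trick: each card is shown again just before you would otherwise forget it, so the review time per retained fact stays roughly constant.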
1derekz15y
No specific use cases or examples, just throwing out ideas. On the one hand it would be cool if the notes one jots down could self-organize somehow, even a little bit. Now OpenCog is supposed by its creators to be a fully general knowledge representation system, so maybe it's possible to use it as a sort of notation (like a probabilistic-logic version of Mathematica? Or maybe with a natural language front end of some kind? I think Ben Goertzel likes Lojban, so maybe an intermediate language like that). Anyway, it's not really a product spec, just one possible sort of way someday to use machines to make people smarter. (But that was before I realized we were talking about pills to make people stop liking their favorite TV shows, heh)
0Henrik_Jonsson15y
While I agree that it would be cool, anything that doesn't keep your notes exactly like you left them is likely to be more annoying than productive unless it is very cleverly done. (Remember Microsoft Clippy?) You'd probably need to tag at least some things, like persons and places.
3asciilifeform15y
I have been obsessively researching this idea for several years. One of my conclusions is that an intelligence-amplification tool must be "incestuously" user-modifiable ("turtles all the way down", possessing what programming language designers call reflectivity) in order to be of any profound use, at least to me personally. About six months ago, I resolved to do exactly that. While I would not yet claim "black belt" competence in it, Mathematica has already enabled me to perform feats which I would not have previously dared to contemplate, despite having worked in Common Lisp. Mathematica is famously proprietary and the runtime is bog-slow, but the language and development environment are currently in a class of their own (at least from the standpoint of exploratory programming in search of solutions to ultra-hard problems).
2SilasBarta15y
Could you give more examples about things you like about Mathematica? Years ago, I resolved to become an expert at it after reading A New Kind of Science (will you guys forgive me?) and liked it for a while, but then noticed some things were needlessly complicated or refused to spit out the right results (long time ago, so I can't give examples). Btw, I learned about Lisp after Mathematica, and was like, "wow, that must have been where Wolfram got the idea."
1asciilifeform15y
1) Mathematica's programming language does not confine you to a particular style of thinking. If you are a Lisp fancier, you can write entirely Lispy code. Likewise Haskell. There is even a capability for relatively painless dataflow programming.

2) Wolfram Inc. took great pains to make interfacing with the outside world from within the app as seamless as possible. For example, you can suck in a spreadsheet file directly into a multidimensional array. There is import and export capability for hundreds of formats, including obscure scientific and engineering ones. In case the built-in formats do not suffice, defining custom ones is surprisingly easy.

3) A non-headache-inducing replacement for regular expressions. Enough said.

4) Graphical objects (likewise audio and other streams) are first-class data types. They are able to appear as both the inputs and outputs of functions.

5) Lastly, and most importantly: fully interactive program development. The rest of the programming universe lives a life of endlessly repeated "compile and pray" cycles. Mathematica permits you to meaningfully evaluate and edit in place every line of code you write. I am otherwise an Emacs junkie, yet I have never felt the slightest desire to touch Emacs when working on Mathematica code. The programmer's traditional need to wade through and shovel giant piles of text from one place to another while writing code is almost entirely absent when working in this language.

The downsides of Mathematica (slow, proprietary, expensive, etc.) are widely known. Thus far, the advantages have vastly outweighed the problems for my particular kind of work. However, I have found that I now feel extremely confined when forced to work in any other programming language. Perhaps this risk should be added to the list of disadvantages. Wolfram had (at least in the early days of Mathematica) a very interesting relationship with Lisp. He seems to have initially rejected many of its ideas, but it is clear that the
1derekz15y
Thanks for the motivation, by the way -- I have toyed with the idea of getting Mathematica many times in the past but the $2500 price tag dissuaded me. Now I see that they have a $295 "Home Edition", which is basically the full product for personal use. I bought it last night and started playing with it. Very nifty program.
0SilasBarta15y
I don't know whether to applaud your ethical restraint, or pity your ignorance. I'll go with the first ;-)
1derekz15y
If you're wondering whether I'm aware that I can figure out how to steal software licenses, I am. ETA: I don't condemn those who believe that intellectual property rights are bad for society or immoral. I don't feel that way myself, though, so I act accordingly.
0SilasBarta15y
It's theoretically possible to believe in IP (on some level), but lack the will not to pluck the forbidden fruit.
1derekz15y
Cool stuff. Good luck with your research; if you come up with anything that works I'll be in line to be a customer!
1Roko15y
Sounds cool, but this is not quite what I was aiming at.
0asciilifeform15y
I am curious what you had in mind. Please elaborate.
2Roko15y
I had in mind average Joe the truck driver who cannot understand an argument like "Corn ethanol is a bad idea because the energy conversion efficiency of corn plants is extremely low, so the energy output of the process, including all the farming and processing, may be negative", but who instead falls victim to "Corn ethanol is good because you should SUPPORT OUR FARMERS!" You're talking about enhancing the efficiency of the smartest people (like you), I'm talking about enhancing the efficiency of the average person.
-1derekz15y
Well if you are really only interested in raising the average person's "IQ" by 10 points, it's pretty hard to change human nature (so maybe Bostrom was on the right track). Perhaps if somehow video games could embed some lesson about rationality in amongst the dumb slaughter, that could help a little -- but people would probably just buy the games without the boring stuff instead.
0Roko15y
The problem with all of these is that they are all likely to be adopted mostly by the minority of people who are already very smart, whereas this post is aiming at something for the average intelligence people who comprise the majority of the population.
1asciilifeform15y
I cannot pin down this idea as rigorously as I would like, but there seems to exist such a trait as liking to think abstractly, and that this trait is mostly orthogonal to IQ as we understand it (although a "you must be this tall to ride" effect applies.) With that in mind, I do not think that any but the most outlandishly powerful and at the same time effortless intelligence amplifier will be of much interest to the bulk of the population.
1Roko15y
I did not address the issue of actually getting people to take cognitive enhancers in my post. It is a huge can of worms that would take at least a whole post to get into. Let's concentrate on the hypothetical here: IF we could get people to do this, then it would be a good thing.
-1derekz15y
I'm still baffled about what you are getting at here. Apparently training people to think better is too hard for you, so I guess you want a pill or something. But there is no evidence that any pill can raise the average person's IQ by 10 points (which kind of makes sense, if some simple chemical balance adjustment could have such a dramatic effect on fitness it would be quite surprising). Are you researching a sci fi novel or something? What good does wishing for magical pills do?
3Roko15y
Well, we haven't looked very hard, and I am trying to advocate that more research is urgently needed in this area, along with people like Nick Bostrom. See The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement: "a greater level of mental activity might also enable us to apply our brains more effectively to process information and solve problems. The brain, however, requires extra energy when we exert mental effort, reducing the normally tightly regulated blood glucose level by about 5 per cent (0.2 mmol/l) for short (<15 min) efforts and more for longer exertions.¹⁵ Conversely, increasing blood glucose levels has been shown to improve cognitive performance in demanding tasks."
1derekz15y
If the point of this essay was to advocate pharmaceutical research, it might have been more effective to say so; it would have made the process of digesting it smoother. Given the other responses I think I am not alone in failing to guess that this was pretty much your sole target. I don't object to such research; a Bostrom article saying "it might not be impossible to have some effect" is weak support for a 10-IQ-point average-gain pill, but that's not a reason to avoid looking for one. Never know what you'll find. I'm still not clear what the takeaway from this essay is for a lesswrong reader, though, unless it is to suggest that we should experiment ourselves with the available chemicals. I've tried many of the ones that are obtainable. Despite its popularity, I found piracetam to have no noticeable effect even after taking it for extended periods of time. Modafinil is the most noticeable of all; it doesn't seem to do much for me while I'm well-rested but does remove some of the sluggishness that can come with fatigue, although I think the results on an IQ test would be unnoticeable (maybe a 6-hour test, something to highlight endurance, could show a measurable difference). Picamilone has a subtler effect that I'm not sure how to characterize. I'm thinking of trying Xanthinol Nicotinate, but have not yet done so. Because of the small effects I do not use these things as a component of my general lifestyle, both for money reasons and the general uncertainty of long-term effects (also mild but sometimes unpleasant side effects). The effects of other more common drugs like caffeine and other stimulants are probably stronger than any of the "weird" stuff, and are widely known. Thinking beyond IQ, there are of course many drugs with cognitive effects that could be useful on an occasional-use basis, but that's beyond the scope of this discussion.
1Roko15y
Well, there may be tactics other than pharmacology: we might have nutritional interventions or perhaps something like transcranial magnetic stimulation, or even something we haven't thought of yet. But I should emphasize that the sole criterion for such interventions would be that it would be feasible to get lots of people to use them. This article is not a "here's something you can do to enhance your own life today!" type article, it is a discussion of existential risk reduction via mass IQ increase. I may well write some "how to" articles, too though.
1asciilifeform15y
Please read this short review of the state of the art of chemical intelligence enhancement. We probably cannot reliably guarantee 10 added points for every subject yet. Quite far from it, in fact. But there are some promising leads. Others have made these points before, but I will summarize: fitness in a prehistoric environment is a very different thing from fitness in the world of today; prehistoric resource constraints (let's pick, for instance, the scarcity of refined sugars) bear no resemblance to those of today; certain refinements may be trivial from the standpoint of modern engineering but inaccessible to biological evolution, or at the very least ended up unreachable from a particular local maximum. Consider, for example, the rarity of evolved wheels.
1arundelo15y
I think this is called need for cognition. (I first saw this phrase somewhere here on LW.)

I have tried to research the economic benefits of cognition enhancement, and they are quite possibly substantial. But I think Roko is right about the wider political ramifications.

One relevant reference may be: H. Rindermann, "Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty", Intelligence, 36(4), 306-322, Jul-Aug 2008, which argues (using cross-lagged data) that education and cognitive ability have bigger positive effects on democracy, rule of law and political liberty than GDP. There ar... (read more)

Nick Bostrom proposed that we should work hard looking for ways to enhance the cognition of research scientists, because even a small increase in the average intelligence of research scientists would increase research output by a large amount, because there are lots of scientists.

I wonder about this. Isn't it the case [translation: I'm sure I read in some general-audience psychology book once] that for just about every human activity, scientific research included, there's a certain level above which differences in intelligence, at least in the sense of ... (read more)

1Arenamontanus15y
There are many traits that would be useful for research and other fields, such as energy, better time management, social ability etc. Intelligence is important for problem-solving in domains where standard rules have not been defined, which might be particularly true in some research. However, it is hard to measure the impact of such ability directly. David Lubinski and Camilla Persson Benbow, Study of Mathematically Precocious Youth After 35 Years, Perspectives on Psychological Science, 1, 316-343, www.vanderbilt.edu/Peabody/SMPY/DoingPsychScience2006.pdf has some intriguing data. They followed up the top one percent of scorers and compared the uppermost and lowest quartile of this already elite group. Unsurprisingly they were on average doing great, and the top group also earned more and had about six times the rate of tenure at top US universities. But that could just be pure competitive ability rather than any individually or socially useful outcome. The interesting result was that the number of doctorates and percent earning patents was about twice in the top quartile. Doctorates and patents are after all a form of measure of actually having achieved something, and presumably a society is better off if bright people produce more patentable ideas. This IMHO strengthens the idea that we would see gains from cognition enhancement even among the brightest. However, I think the biggest economic and social impact will be due to intelligence among the great mass of people - reduction of costs and friction due to stupidity, short term thinking, mistakes and other limitations, increased benefits from better cooperation (smart people do better on iterated prisoner's dilemma games and have longer time horizons) and ability to manage more complex systems. An "emotional intelligence enhancer" might be socially beneficial too - there is no reason to think "pure" cognitive function is the end of things we might rationally want to see others enhance.
-1Roko15y
If you think increased IQ doesn't lead to better research, just ask a few Nobel prize winners what their IQ is. A gold star will be awarded to anyone who finds some data on Nobel prize winners and IQ.
2gjm15y
What I said is not "increased IQ doesn't lead to better research" but that perhaps among people already smart enough to be any good at scientific research increased IQ might make little difference. Even if that's true, though, there might be substantial benefit in increasing IQ among people who would like to do scientific research but aren't quite good enough at it for it to be a sensible career. But that's not what AIUI Nick Bostrom was suggesting.
1Roko15y
I doubt specifically this statement. I think that research outcomes in many areas are probably super linear in IQ, i.e. going from 140 to 150 could make the difference between so-so research and groundbreaking research. Consider whether Bostrom would have founded the FHI if his IQ had been 10 points lower.
0HughRistik15y
The relationship of IQ to scientific achievement might be a step function. I am curious about how this was measured.
1Mike Bishop15y
What sort of mechanism would produce a step function? Sounds highly unlikely to me. Added: I would expect the curve to be smooth.
1Roko15y
The mechanism is likely to be that a smarter researcher sees solutions intuitively, whereas a dumber one has to try lots of things that don't work before getting to the correct solution; this would produce super linear speedup I think, because as you get smarter you avoid more and more wasted effort. There's also the issue of status producing more motivation, which produces more achievement, which produces more status. This adds a significant nonlinearity.
0Mike Bishop15y
Miscommunication. My point was only that I expect the function that describes the relationship to be a smooth curve. I wouldn't be too surprised if the relationship between IQ and research productivity is stronger at the high end than in the middle.
0HughRistik15y
Sounds unlikely to me too, but it could explain the phenomenon underlying gjm's quote (that above a certain threshold intelligence doesn't make much of a difference in "effectiveness"), assuming that the result is valid (and I would want to know how "effectiveness" was measured).
0Mike Bishop15y
Your question about how they measured "effectiveness" is right on. My guess is that the marginal benefits of IQ depend on the task, and the IQ range. For tasks of medium difficulty, the marginal benefits of IQ will probably increase as one goes from low IQ to average, flatten out, and then decrease as one gets to a very high range. But higher IQ allows you to efficiently attempt much more difficult (and arguably important) tasks.
0Roko15y
My guess is that it is superlinear. Look at, e.g. Von Neumann.
2Arenamontanus15y
Human ability generally seems to be power-law distributed - the "80-20" rule often holds in research. I'm just checking out Murray's "Human Accomplishment", and this is the impression I get from his data - whether it is valid data remains to be seen. But this might have many other causes, from Matthew effects where widely cited people become even more cited (and maybe get great research environments) to multiplicator effects where productivity is due to multiplicative effects of more or less random factors - only a few get a lot of them, and the result is a lognormal distribution. IQ, as ability to make rational inferences in new domains, may be just one of these factors. Low IQ certainly precludes much scientific achievement. There are also selection effects where getting into the right schools or professions requires overcoming IQ-loaded hurdles. The real benefits of IQ among geniuses might be smaller than other factors - but having more people with high IQ will certainly not decrease the pool of potential geniuses.
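As a toy illustration of that multiplicative-factors point, here is a sketch; the number of factors and their spread are invented values, not anything fitted to Murray's data.

```python
import random, statistics

random.seed(0)

def productivity(n_factors=8):
    """Productivity as the product of several independent random factors
    (ability, health, environment, luck...). The factor range is assumed."""
    p = 1.0
    for _ in range(n_factors):
        p *= random.uniform(0.5, 1.5)
    return p

sample = sorted(productivity() for _ in range(100_000))
print(f"median productivity: {statistics.median(sample):.2f}")
print(f"share of total output from the top 1%: {sum(sample[-1000:]) / sum(sample):.1%}")
# The product of independent factors is approximately lognormal: most people
# sit near the median, while a thin tail accounts for a disproportionate
# share of the total - the "80-20" flavour of distribution described above.
```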
-1Annoyance15y
Logical fallacy: those Nobel prize winners do not have increased IQ. Presumably they have high IQ. If Nobel prize winners all have very high IQs, that tells us that high IQ is a necessary - but not necessarily sufficient - requirement for winning Nobel prizes. And that itself tells us little about what's needed for quality research, even presuming that all Nobels are awarded for quality research. (I happen to know that they aren't, but that's another story.) What are the Type I and Type II error rates of the Nobel prize award process?
1Mike Bishop15y
IMO, the more important question is whether the overall system of incentives for scientists is effective.
0Mike Bishop15y
Imagine we invented a pill which increased everyone's performance on IQ tests by one standard deviation with no side effects (note, I don't expect to see this soon). Further, imagine that all current scientists began taking it. What benefits would you expect to see? Let me be more specific, assume no funding changes, even though smarter scientists would almost certainly get more funding: how much would Science and Nature have to expand if they did not raise the bar for publication? My estimate: 20% with a 95% confidence interval of [3%, 100%]
-3Annoyance15y
None, actually. I expect the foolish would come up with even cleverer ways of deluding themselves than before, which would make it even harder for them to distinguish truth from their own cherished beliefs. The wise, who already possessed the ability to override their primitive associational thinking, would have a better ability to grasp complex theories and work through their implications. But they would be vastly outnumbered by the fools - thus, no overall improvement and a possible overall harm. Not everyone who is socially recognized as a 'scientist' can actually put the principles of science into practice.
3Mike Bishop15y
Let me get this straight: you believe that a majority of scientists would do worse science if they had a higher IQ? I doubt many scientists would agree. Do you agree that you are among a small minority holding this position? Does this imply that a majority of scientists would do better science if they took a pill which lowered their IQ without side effects? I know correlation does not imply causation, but do you agree that there is a positive correlation between IQ and the quantity and quality of an individual's scientific publications? Edited to fix a typo
0Annoyance15y
No. Considered across all individuals? Only a very weak one. I suggest limiting the question to scientists. In that case, the answer for 'quantity' would be "not strongly at all", and 'quality' is so difficult to define as to be useless for this investigation.
2Mike Bishop15y
You claim higher IQ would hurt most scientists, but a lower IQ would not help. This implies a majority of scientists have the ideal IQ for furthering science. To me this sounds like an impossible coincidence. I might look for research on what predicts a scientist's research productivity. GRE scores may be more common than IQ. Can we make terms for a bet? I claim that, net of all controls, GRE or IQ scores will have a nontrivial positive relationship with research productivity. "Quality" is difficult to measure, but you give up too quickly, e.g. citations, impact factor of journal of publication.
0Annoyance15y
I need to clarify. Quite a lot of 'scientists' are terrible at putting the scientific method into practice. I try to exclude those people from the category whenever possible. I do acknowledge, though, that this will frequently lead to confusion. A lower IQ of scientists overall would make progress slower, but generally wouldn't impede the self-correcting properties of the method. The so-called scientists who don't or can't put the method into practice would have their ability to make clever but specious arguments impaired. Possibly the reduced nonsense-sensing of the scientists would still be more than enough to identify and exclude the reduced levels of nonsense. With reduced IQ across scientists and 'scientists' both, it's entirely possible that there would be more scientific progress for the field as a whole. There are a number of necessary but not sufficient factors involved, and non-lethal but cumulatively-damaging factors as well. It's not obvious to me that the properties measured by IQ are equally distributed across the positive and negative factors; I suspect they lend themselves to the negative more than the positive.
0JGWeissman15y
I agree with most of your comment, but "Do you agree that you (are) among a small minority holding this position?" is social pressure in place of a real argument. The truth is not determined by voting.
1Mike Bishop15y
The truth is not determined by voting, but the truth is often positively correlated with peoples' opinions. It is rational to weigh other people's opinions. If I disagree with someone, I must ask myself why I am more likely to be correct than they are.
1JGWeissman15y
Annoyance had already explained his reasons for his position, and you explained reasons for yours in the rest of your comment. Once we are discussing those reasons directly, there is no need to use majority opinion as a proxy for the relative strength of those reasons.
-1Mike Bishop15y
I disagree. People's opinions are evidence and deserve weight. Smarter, more rational, people's opinions deserve more weight. The opinions of scientists who specialize in a relevant subject deserve still more weight. Why shouldn't we consider this type of evidence?
0JGWeissman15y
First, I should point out that what I initially objected to was an appeal to an assertion of raw majority, with no weighting based on rationality, intelligence, or specialization, and in which uninformed opinions are likely to drown out evidence-informed expert opinions. Second, the reason the opinion of a specialist can be strong evidence is that the specialist is likely to have access to evidence not generally available or known, and have superior ability to process that evidence. So, when someone discovers that a specialist disagrees with them, they should seek to learn the evidence and arguments that informed the specialist's opinions, and then evaluate them on their merits. Ideally, at this point, the specialist's opinion is no longer evidence, as the fundamental evidence it represents is already accounted for. As a practical matter, the specialist's opinion still counts to the extent that a person is uncertain that they have learned of all the specialist's evidence and understood all the arguments. You have not argued that this uncertainty is and will likely remain significant in this case. Rather, it seemed that you were trying to dismiss an idea because it is unpopular.
0Mike Bishop15y
I think our disagreement is relatively small. A few remaining points:

  • People don't have the time or the ability to learn all the relevant evidence and arguments on every issue. Hell, I don't have time to learn all the relevant evidence and arguments on every issue in my discipline, never mind subjects that I know little about.

  • Sometimes we mainly care about what the answer is, not why.

  • I don't always have time to explain all my reasons, so citing the fact that others agree with me is easier, and depending on the context, may be every bit as useful.
0JGWeissman15y
In these cases, where you don't care, or can't be bothered to explain, the reasons for a position, it seems you lack either the time or the interest to seriously debate the issue. This can be a valid point when you have to make policy decisions about complicated issues, but it does not apply to your appeal to the majority that I objected to.
0Mike Bishop15y
I didn't just appeal to the majority, I mentioned scientists' opinions explicitly in the sentence previous to the one you are objecting to. You've argued that appealing to other people's beliefs has few benefits (which I dispute) but unless I'm missing something you haven't named a single cost. I'm sure there are some, but I'll let you name them if you choose. I'm starting to think that you primarily objected to the tone of my language. You don't really want to stop people from discussing what other people believe.
0JGWeissman15y
You mentioned scientists' opinions not about the subjects that they study, but about the impact of intelligence on the quality of their work, which they are not likely to know more about than anyone else. If you had mentioned the opinions of psychologists who had studied the effects of intelligence on scientific productivity, that would be the sort of support you are claiming. Further, you weren't even talking about a survey of scientists' opinions, or other evidence about what they think; you just asserted what you think they think. Now, you could make the argument that the scientists would think that for the same reasons you do, or because you believe it is really true and they would notice, but in this case your beliefs about their opinions are not additional evidence. Well, I suppose I have not explicitly stated it, but the primary cost is that it displaces discussion of the more fundamental evidence about the issue that is supposedly informing the majority or expert opinion. And you yourself argued elsewhere that in the political process of voting that attempts to aggregate opinions, "Voters are often uninformed about how policy affects their lives". Even with expert opinions, it can be hard to understand what the expert thinks. I have seen people go horribly wrong by applying an expert's idea out of context. If you don't understand an expert's reasoning because it is too complicated, you probably don't understand their position well enough to generalize it. What I object to is using a discussion of what other people believe to shut down discussion of an opposing belief.
0Vladimir_Nesov15y
This is a stronger modesty argument, as distinct from simply taking the majority opinion as one of the pieces of evidence for arriving at your own conclusion.
-1Annoyance15y
Logical fallacy: stating a contingent proposition as a universal principle.
0Mike Bishop15y
"sharp people still distinguish themselves by not assuming more than needed to keep the conversation going"
-1Annoyance15y
Sometimes the conversation shouldn't be permitted to continue. Are we looking to facilitate social interaction, or use rational argument to discover truth? The two are often, even usually, incompatible.
0Mike Bishop15y
An interesting claim; please explain why you believe this to be true.
1Annoyance15y
The two are compatible only when the preferred social feedback standards match the standards of rational thought. All other social standards necessarily come into conflict. Thus, all else being equal, a randomly-chosen standard is quite unlikely to be compatible with rationality. In actual groups, the standards aren't chosen randomly. But humans being what they are, they usually involve primate social dynamics and associational reasoning, neither of which lend themselves to the search for truth. Generally they involve social/political 'games' and power struggles.
-1Vladimir_Nesov15y
Consensus is valid evidence (but not the only evidence).
-1Annoyance15y
Valid evidence of what? Evidence is only valid or invalid in terms of the evaluation of related claims. What is being examined? That determines, in part, what is valid evidence to be considered.
-1Roko15y
This argument is vulnerable to the reversal test, for lay people and scientists alike. Evolution designed our brains with in-built self-deception mechanisms; it did not design those mechanisms to continue to operate optimally if the intelligence of the person concerned is artificially increased. Actually, now that I review this comment, I would replace this with "it is reasonable to expect that increasing intelligence will, to some extent, affect our in-built self-deception, but the effect may be either negative or positive"; we should look at evidence to see what actually happens.
2CronoDAS15y
It could also disrupt them in the wrong direction; there's no particular reason to assume that becoming "smarter" won't just make us better self-deceivers. As Michael Shermer writes, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."
3Roko15y
This is plausible in the individual case, but in a large group of people, each with randomly chosen cherished falsehoods, I claim that increasing the average intelligence parameter will increase the degree to which the group as a whole has true beliefs.
-1Annoyance15y
Cherished falsehoods are unlikely to be random. In groups that aren't artificially selected at random from the entirety of humanity, error will tend to be correlated with others'. There are also deep flaws in humanity as a whole, most especially on some issues. Should we decide to believe in ghosts because most human beings share that belief, or should we rely on rational analysis and the accumulation of evidence (data derived directly from the phenomena in question, not other people's opinions)?
0Annoyance15y
No. Your argument is specious. Evolution 'designed' us with all sorts of things 'in mind' that no longer apply. That doesn't mean that any arbitrary aspect of our lives will have an influence if it's changed on any other aspect. If the environmental factors / traits have no relationship with the trait we're interested in, we have no initial reason to think that changing the conditions will affect the trait. Consider the absurdity of taking your argumentative structure seriously: "Nature designed us to have full heads of hair. Nature also gave us a sense of sight, which it did not design to operate optimally in hairless conditions. It is therefore reasonable to expect that shaving the head will, to some extent, disrupt our visual acuity."
0Roko15y
This criticism is valid if we think that the trait we vary is irrelevant to the effect we are considering. But we have already established that intelligence is likely to affect our ability to self-deceive. For example, we could fairly easily establish that inhaling large quantities of soot is likely to affect our lungs in some way, then apply this argument to get the conclusion that pollution is probably slightly harmful (with some small degree of certainty). Essentially this argument says: if you perform a random intervention J that you have reason to believe will affect evolved system S, it will probably reduce the functioning of S, unless J was specifically designed to improve the functioning of S. Stated like this I don't find this style of argument unsound; smoking, pollution, obesity, etc are all cases in point.
-1Annoyance15y
No, the criticism is valid if we have no reason to think that the traits will be causally linked. You're making another logical fallacy - confusing two statements whose logical structure renders them non-equivalent. (thinking trait is ~relevant) != ~(thinking trait is relevant)
0Roko15y
see edited comment above
0pjeby15y
Not if the original function of (verbal) "intelligence" was to improve our ability to deceive... and I strongly suspect this to be the case. After all, it doesn't take a ton of (verbal) intelligence to hunt and gather.
2Roko15y
If we evolved ever more complex ways of lying, then we must also have evolved ever more complex ways of detecting lies. It is highly plausible that increasing intelligence will increase both of these functions.
0pjeby15y
Good point. Of course, that mechanism is for detecting other people's lies, and there is some evidence that it's specific to ideas and/or people you already disagree with or are suspicious of... meaning that increased intelligence doesn't necessarily relate. One of the central themes in the book I'm working on is that brains are much better at convincing themselves they've thought things through, when in actuality no real thinking has taken place at all. Looking for problems with something you already believe is a good example of that: nobody does it until they have a good enough reason to actually think it through, as opposed to assuming they already did it, or not even noticing what it is they believe in the first place.
-6Annoyance15y
-1Mike Bishop15y
Roko was arguing somewhat casually but I don't think he is actually reasoning casually. It's fine to discourage this type of comment with a downvote, but starting your reply with the words "Logical fallacy" is unnecessarily harsh in my opinion.
1thomblake15y
Roko's comment seems to contain a logical fallacy. While there might be a reason to make the distinction between the reasoning going on in Roko's argument and the reasoning going on in Roko's head, I have no access to the latter and so must evaluate the former. I don't see what's wrong with Annoyance pointing that out, and calling a fallacious argument fallacious is hardly 'harsh'; at least, it's no harsher than is called for.
-1Annoyance15y
Even if you had access to the latter, that has no bearing on your evaluation of the former. It's the explicit claims that we're looking at, the ones that are actually communicated, not what the person meant inside their head or what we think they might mean.
-1Mike Bishop15y
I encourage efforts to maintain high standards of reasoning, and fairly explicit reasoning. In evaluating harshness, we need to strike a balance between at least three goals: 1. clarity of thought; 2. creating proper incentives for quality contributions, which requires punishing mistakes and undesirable contributions; and 3. creating a friendly and respectful atmosphere. For the record, calling Annoyance's comment "unnecessarily harsh" was meant as a minor criticism. There are many factors to consider; in this case I would have suggested that Annoyance replace "Logical fallacy" with "Nitpick." Also see my new comment for Annoyance.

Do you think you might be underestimating the capabilities of the statistically average person of 100 IQ?

Now, if the average voter could understand the concept of photosynthetic efficiency, and could understand a simple numerical calculation showing how inefficient corn is at converting solar energy to stored energy in ethanol, this policy choice would have been dead in the water.

There's an obvious point you're overlooking here.

Plants are, indeed, only about 3% efficient at converting the energy in sunlight into chemical energy, and that's before the l...

4PhilGoetz15y
There are many studies - 1 to 2 dozen - showing that producing a gallon of corn ethanol takes more energy than is contained in a gallon of corn ethanol. Usually, the conclusion is that it takes n=1 to 1.4 times as much energy. There are other studies claiming the opposite; they fail to take into account factors such as irrigation and transportation costs. I wrote to Wired magazine and to somewhere else (I forget where) to correct their outrageously-incorrect assertions about corn ethanol. (Wired underreported n by 2 orders of magnitude, which is disturbing because this wasn't like ordinary irresponsible journalism where someone took one "expert's" numbers uncritically. The figure they gave for corn ethanol efficiency was AFAIK much, much higher than those of even the most biased ethanol advocates.) My responses were unpublished. It's even worse journalism when you make an extreme error on a point important to public policy, and then someone points it out to you, and gives you a dozen literature citations, and you don't correct it.
0Roko15y
Also, for the Less Wrong pedant community: Phil meant that producing 1 litre of corn ethanol takes 1 to 1.4 times as much energy input as is contained in 1 litre of corn ethanol, where "energy input" excludes the solar energy the corn plants absorb.
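To make that arithmetic concrete, here is a minimal sketch in Python. The heating value of ethanol is a standard figure; the input ratios are simply the n = 1 to 1.4 range quoted above, so treat the output as an illustration of the claim rather than new data.

```python
# A minimal back-of-the-envelope sketch of the energy-return argument.
# The input ratios are the n = 1 to 1.4 range quoted in the thread.

ETHANOL_ENERGY_MJ_PER_L = 21.2   # lower heating value of ethanol, ~21 MJ per litre

def net_energy_per_litre(input_ratio):
    """Net energy delivered per litre of ethanol, given the ratio of
    process energy input to the energy contained in the fuel."""
    energy_in = input_ratio * ETHANOL_ENERGY_MJ_PER_L
    return ETHANOL_ENERGY_MJ_PER_L - energy_in

for n in (1.0, 1.2, 1.4):
    print(f"input ratio n={n:.1f}: net energy = {net_energy_per_litre(n):+.1f} MJ/L")
# Any n >= 1 means the process consumes at least as much energy as the fuel delivers.
```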
0Roko15y
Can you give us some references?
-6Annoyance15y
1Roko15y
The crux of the argument is that it costs energy to harvest the corn and process it; when you look at the numbers, they just don't add up. Furthermore, if you compare corn ethanol to solar power you see why the low conversion efficiency is so damning, especially when you do the math on how much land you'd have to cover with corn to serve US energy needs. Going off the top of my head, that area is greater than the whole of the US.
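A rough order-of-magnitude check of that land-area claim; every input below is an illustrative assumption on my part (average US power demand, day/night-averaged insolation, and a rough end-to-end sunlight-to-ethanol efficiency), not a figure taken from any study in the thread.

```python
# Back-of-the-envelope land-area estimate (all inputs are rough assumptions).

US_PRIMARY_POWER_W = 3.3e12     # average US primary energy use, ~3.3 TW
AVG_INSOLATION_W_M2 = 200       # day/night-averaged solar flux at the surface
SUN_TO_ETHANOL_EFF = 0.001      # assumed end-to-end efficiency: sunlight -> corn -> ethanol

area_m2 = US_PRIMARY_POWER_W / (AVG_INSOLATION_W_M2 * SUN_TO_ETHANOL_EFF)
area_km2 = area_m2 / 1e6
US_LAND_AREA_KM2 = 9.1e6

print(f"required cropland: ~{area_km2 / 1e6:.1f} million km^2")
print(f"fraction of US land area: ~{area_km2 / US_LAND_AREA_KM2:.1f}x")
```

With these assumptions the required area comes out larger than the entire US land area; a more generous efficiency assumption shrinks it, but not to anything practical.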
1Mike Bishop15y
If people listened to intelligent and careful thinkers they wouldn't need to understand it themselves. Whether this is an easier or harder route is unclear to me.
6CronoDAS15y
The problem is that, in general, there's no good way for a layman to tell the difference between Carl Sagan and Immanuel Velikovsky, except by comparing them to other people who claim to be experts in a field. A book of internally consistent lies, such as Chariots of the Gods?, will seem as plausible as any book written about real history to someone who doesn't already know that it's a book of lies.
2Mike Bishop15y
That sounds like a promising strategy to me. At least it is far better than what people currently do, which is adopt whatever their friends think, or whatever ideas they find appealing for other reasons. No doubt it would be better if more people were capable of evaluating scientific theory and evidence themselves, but imagine how much better things would be if people simply asked themselves: "Which is the relevant community of experts? How are opinions on this issue distributed amongst those experts? How reliable have similar experts been in the past?" (e.g. chemists are generally less wrong about chemistry than psychologists are about psychology). That would be a step in the right direction.
-1Annoyance15y
That's not quite true. There are ways of evaluating an expert - but people don't like them, don't implement them, and don't try to find out what they are. Many, many people who have the social status and authority of experts simply don't know what they're talking about. They can be detected by an earnest and diligent inquiry, combined with a healthy and balanced skepticism. Doctors are a prime example.
1CronoDAS15y
Unfortunately, many of those ways are equivalent to "become an expert yourself". :(
-2SoullessAutomaton15y
But how do you know when you've become an expert? Turtles all the way down!
2komponisto15y
Indeed. Now that I think about it, perhaps the real problem here is that the marginal social status payoff from an increase in IQ is too low (perhaps even negative in some cases); in other words, IQ doesn't buy one enough status. So the question is whether it is easier to fix this than just to raise the IQ baseline.
0Mike Bishop15y
How does increasing "the marginal social status payoff from an increase in IQ" help? I'm not saying it would hurt, but it seems less direct and less important than increasing the marginal social status payoff from having, and acting on, unbiased beliefs about the world, because that is something people can change fairly easily.
3asciilifeform15y
The implication may be that persons with high IQ are often prevented from putting it to a meaningful use due to the way societies are structured: a statement I agree with.
0Mike Bishop15y
Do you mean that organizations aren't very good at selecting the best person for each job? I agree with that statement, but it's about much, much more than IQ. It is a tough nut to crack, but I have given some thought to how we could improve honest signaling of people's skills.
4asciilifeform15y
Actually, no. What I mean is that human society isn't very good at realizing that it would be in its best interest to assign as many high-IQ persons as possible the job of "being themselves" full-time and freely developing their ideas - without having to justify their short-term benefit. Hell, forget "as many as possible", we don't even have a Bell Labs any more.
3komponisto15y
This, I think, is a special case of what I meant. A simple, crude, way to put the general point is that people don't defer enough to those who are smarter. If they did, smart folks would be held in higher esteem by society, and indeed would consequently have greater autonomy.
0Mike Bishop15y
How should society implement this? I repeat my claim that other personal characteristics are as important as IQ.
0asciilifeform15y
I do not know of a working society-wide solution. Establishing research institutes in the tradition of Bell Labs would be a good start, though.
1komponisto15y
That may well be right. I'm willing to accept that the distinction between "I.Q." and other measures of "smartness" is orthogonal to the point I was making.
0CronoDAS15y
You're absolutely right about corn ethanol not being much of a solution - indeed, you can't power the U.S. on corn ethanol alone. Burning corn-derived ethanol does provide a net gain in useful energy; it's just not nearly enough energy to make a difference. Finally, the biggest problem is that there are generally better things to do with grown corn than turn it into fuel for engines, such as turning it into food for humans or other animals...
0Annoyance15y
Plants are also much better at converting sunlight into chemical energy than any system we can build. But the issue isn't how well they store energy; it's how efficiently we can use the energy they store. You can't efficiently fuel an electricity-generating plant with corn - trying to use plant energy to power our civilization is hopeless.
2timtyler15y
That is totally incorrect. Plants are 1-2% efficient. Good panels are around 20%, with experimental ones well beyond that. The difference is largely because most solar energy arrives at wavelengths unsuitable for photosynthesis.
-2Annoyance15y
Good panels are only that good under laboratory conditions, and they require massive expenditures of energy to construct in the first place. Plants are self-replicating. Equally important, they produce chemical energy directly. Without an efficient way to produce and store hydrogen using electrical power, there's no alternative to chemical fuels.
6Alicorn15y
"Plants are self-replicating"? In theory, will corn grow without our help? Sure! In practice? Not if you want it in neat, harvestable rows; not if you don't want it to compete with weeds; not if you want it to have a high per-acre yield; not if you want to control which seeds get to turn into plants next generation; not if you don't want crows to eat it; not if you want it to stick to your property and not take over the neighbor's alfalfa; and not if you take all of the plant's kernels and turn them into car fuel. We don't settle for the replication rate of wild plants, so it's just not the case that they're "free". There's a legitimate question of whether it's costlier (along any given dimension or overall) to produce ethanol than to produce a solar panel which will generate the same amount of power over its useful life, and I don't know the answer, but please let's not extrapolate from the fact that plants sometimes grow unattended to the mistaken conclusion that corn has a negligible input cost.
-4Annoyance15y
I didn't suggest that corn has a negligible input cost. Please do not pester us with non-sequiturs.
2Roko15y
But there are efficient ways to turn electricity into chemical energy, such as a li-ion battery. The best solar panel is at 50.7% efficiency, as far as I know. Plants also require energy to produce. Solar panels harvest more energy over their lifetime than they take to produce, by a factor of about 10, I seem to recall.
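The "factor of about 10" figure can be sanity-checked with a quick calculation. The embodied energy, insolation, panel efficiency, and lifetime below are assumed round numbers of my own, not manufacturer data, so this is only a plausibility sketch.

```python
# A hedged sketch of the lifetime energy-return estimate for a PV panel.
# All inputs are illustrative assumptions.

EMBODIED_ENERGY_KWH_PER_M2 = 1000   # assumed energy to manufacture ~1 m^2 of panel
INSOLATION_KWH_PER_M2_YEAR = 1700   # typical annual insolation in a sunny location
PANEL_EFFICIENCY = 0.18             # commodity panel efficiency
LIFETIME_YEARS = 30

annual_output = INSOLATION_KWH_PER_M2_YEAR * PANEL_EFFICIENCY
lifetime_output = annual_output * LIFETIME_YEARS
payback_years = EMBODIED_ENERGY_KWH_PER_M2 / annual_output

print(f"lifetime output: ~{lifetime_output:.0f} kWh/m^2")
print(f"energy return factor: ~{lifetime_output / EMBODIED_ENERGY_KWH_PER_M2:.0f}x")
print(f"energy payback time: ~{payback_years:.1f} years")
```

Under these assumptions the panel repays its manufacturing energy in a few years and returns roughly ten times that energy over its life, consistent with the factor quoted above.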
1timtyler15y
After getting the facts so totally wrong, you are supposed to remain in embarrassed silence, not argue the toss with still more dubious claims: http://en.wikipedia.org/wiki/Electrolysis_of_water#Efficiency

Interestingly, this may actually be happening. It's fairly clear that people today are taller than they once were...

0Roko15y
Yes, but not quickly enough; we want a SUPER-FLYNN effect.
0Annoyance15y
Increasing intelligence without increasing people's ability to use it properly isn't going to help much. Consider how poorly physicists do when trying to evaluate paranormal claims, as opposed to magicians.

My observations here on LW, thanks to the karma system, lead me to believe there is no threshold effect. People always have great difficulty following the ideas of someone a level above them, regardless of what level they are at. Eliezer's posts are so friggin' long because they are designed to be understood by people a level below him.

I suspect, as I've said repeatedly on LW, that increasing the baseline of intelligence would only lead us to construct a more elaborate society, with more complicated problems, and an even greater chance of catastrophic fa...

4Roko15y
Do you think that current levels of intelligence are precisely optimal for reducing existential risks, where my definition of "existential risk" is the one given by Bostrom? What reason would there be for this remarkable coincidence? If not, then presumably you think we should start deliberately making people have lower IQs?
3loqi15y
It seems that you should also be asking how we can reduce the level of intelligence.

It's interesting speculation but it assumes that people use all of their current intelligence. There is still the problem of akrasia - a lot of people are perfectly capable of becoming 'smarter' if only they cared to think about things at all. Sure, they could still go mad infallibly but it would be better than not even trying.

Are you implying that more IQ may help in overcoming akrasia?

2Roko15y
All other things being equal, increasing IQ will make people better at telling the difference between rational argument and sophistry, and at understanding marginally more complex arguments. Decreasing akrasia for the general population is a different issue; the first thought that comes to mind is that increasing people's IQ with fixed motivation ought to improve things.
2hrishimittal15y
Related post and discussion over at OB - http://www.overcomingbias.com/2009/06/lazy-hurt-less-than-stupid.html
0[anonymous]15y
Not a sure thing. A more intelligent population may get better at sophistry as well.

if we could improve the intelligence of the average voter by 10 IQ points, imagine how much saner the political process would look

It's highly non-obvious that it would have significant effects. The political process is imperfect but very pragmatic - which makes a lot of sense, as there is only so much good an improved political process can do, while breaking it can cause horrible suffering. So the current approach of gradual tweaks is a very safe alternative, even if it offends people's idealistic sensibilities.

3Mike Bishop15y
IQ is positively correlated with sharing economists' views on economic policy. Therefore it seems likely that people would vote differently and that this would translate into policy changes. I would expect other belief changes to be likely as well if IQ were increased.
2Arenamontanus15y
Here is a simple model. Assume you need a certain level of intelligence to understand a crucial, policy-affecting idea (we can make this a fuzzy border and talk about distributions later to make it more realistic). If you are below this level, your policy choices will depend on taking up plausible-sounding arguments from others, but they will be uncorrelated with the truth. Left alone, such a population will describe some form of random walk with amplification, ending up at a random decision. If you are above the critical level, your views will be somewhat correlated with the truth. Since you affect others when you engage in political discourse, whether over the breakfast table or on TV, you will have an impact on other people, increasing their chance of agreeing with you. This biases the random walk of public opinion slightly in favour of truth. In most models of political agreement formation I have seen, even a pretty small minority that is biased in a certain direction can sway a large group that just picks views based on its neighbours. This suggests that increasing the set of people smart enough to get at the truth would substantially increase the likelihood of a correct group decision.
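A toy simulation of this kind of model (my own crude formalisation, not Arenamontanus's exact setup): most agents simply copy a randomly chosen other agent's opinion, while a small informed minority re-derives the answer and gets it right with better-than-chance probability. Because copying is neutral drift and the informed agents keep injecting a bias toward the truth, the share of agents holding the correct view should end up near the informed agents' accuracy, well above the 50/50 starting point.

```python
import random

def simulate(n_agents=1000, informed_frac=0.05, informed_accuracy=0.7,
             steps=100_000, seed=0):
    rng = random.Random(seed)
    # The first informed_frac of agents can partially re-derive the truth.
    informed = [i < int(n_agents * informed_frac) for i in range(n_agents)]
    # True = holds the correct belief; start split roughly 50/50 at random.
    opinions = [rng.random() < 0.5 for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if informed[i]:
            # Informed agents reason it out, correctly with some probability.
            opinions[i] = rng.random() < informed_accuracy
        else:
            # Everyone else copies a randomly chosen agent's opinion.
            opinions[i] = opinions[rng.randrange(n_agents)]
    return sum(opinions) / n_agents

print(f"final share holding the correct view: {simulate():.2f}")
```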
1ChrisHibbert15y
My rebuttal is to point at the work of Tullock and Buchanan on public choice theory. Basically, the takeaway is that politicians and bureaucrats respond to incentives. If the voting public were smarter, politicians' behavior would be different during elections, and the politicians would try to make their behavior in office look different. But they would still have an incentive to look like they were addressing problems rather than an incentive to actually solve them. It takes much more than 10 IQ points to align those outcomes. And bureaucrats and middle managers in government would still face the same incentives to obfuscate results, multiply staffing and budgets, and ensure that projects and bureaucracies have staying power.
1Mike Bishop15y
Public choice theory is important, but I still think there is good reason to believe that increasing average IQ by such a huge amount would help. First, because better-informed voters improve the incentives politicians face. Second, because the relatively bad incentives politicians face are not the only constraint on better government.
-1asciilifeform15y
The effects may well be profound if sufficiently increased intelligence will produce changes in an individual's values and goal system, as I suspect it might. At the risk of "argument from fictional evidence", I would like to bring up Poul Anderson's Brain Wave, an exploration of this idea (among others.)

More intelligence also means more competence at doing potentially world-destroying things, like AI/upload/nano/supervirus research. It does seem to me like the anti-risk effect from intelligence enhancement would somewhat outweigh the pro-risk effect, but I'm not sure.

7AngryParsley15y
If more intelligence is bad, is less good? Do you think current levels of intelligence are optimal? If so, that would be an amazing coincidence.
3Peter_de_Blanc15y
I don't think that current levels of intelligence are optimal, but if they were, it wouldn't be a coincidence. Humans are adaptation-executers, and genes make implicit assumptions about their environment. In particular, certain adaptations might be disrupted by changing the average intelligence.
3AngryParsley15y
If you had the option to increase your intelligence, would you decline because you were worried about certain adaptations being disrupted? The modern world is so different from our EEA that I can't buy your argument. Disrupting adaptations can be a good thing. Birth control helps prevent overpopulation. Courts help settle disputes without violence. Even rational thought involves recognizing and changing (disrupting?) the typical thought patterns of our adapted brains.
4Vladimir_Nesov15y
The fact that the modern world changed our values in a way that ancient people wouldn't appreciate on reflection is a bad thing for the ancient people. For us, it would be bad if we reverted some of those changes, and likewise if we introduced new changes that have negative side effects from the current point of view (on reflection). It's hard to "increase intelligence" without wrecking some of the values; the brain isn't designed for upgrades. It's the same problem as with trying to change emotions.
4Roko15y
I don't think that +1 standard deviation in IQ would have this effect. I am not talking about turning people into superintelligent Jupiter brains, you know...
2Mike Bishop15y
I definitely think that values, however defined, would change significantly with such a 10 point IQ change (btw, I consider this very large). And I think it would probably be a good thing.
0[anonymous]15y
I bet we're already smarter (in the ways relevant to this point) than we were in the ancestral environment, though.
2steven046115y
From the upvotes it seems people think this is some sort of devastating counter-point. Yes, if more intelligence is bad, less is good. No, current levels of intelligence are not optimal. If there's a knockdown argument against the idea that increased average intelligence might cause net increased risk, it should go something like "people will just do the same thing, but slower". I think this works but, again, I'm not sure. Either way, the net decrease in risks from intelligence enhancement is less than would seem to be the case if you considered just the upsides.
-6Annoyance15y
0Roko15y
Yes, of course current levels are optimal, because god created us with just the right amount of intelligence ;-0
2Arenamontanus15y
More intelligence means bigger scope for action, and more ability to get desired outcomes. Whether more intelligence increases risk depends on the distribution of accidentally bad outcomes in the new scope (and how many old bad outcomes can be avoided), and whether people will do malign things. On average very few people seem to be malign, so the main issue is likely more the issue of new risks. Looking at the great deliberate human-made disasters of the past suggests that they were often more of a systemic nature (societies allowing nasty people or social processes to run their course; e.g. democides and genocides) than due to individuals or groups successfully breaking rules (e.g. terrorism). This is actually a reason to support cognitive enhancement if it can produce more resilient societies less prone to systemic risks.
0steven046115y
One possibility I have in mind is if current rationalist ideas need a certain amount of time to slosh around and pervade the population before technology (fed by intelligence) grows enough for them to really start mattering.
1gwern15y
It is a hard subject to argue about. If I wanted to criticize you, I could say that higher IQs make uploads less economically attractive as they will start off with a smaller advantage, and higher IQs likely would make uploads intrinsically more difficult. And if I wanted to criticize my criticism, I could say that by making people in general smarter, we increase the cost of labor, which makes mechanical substitution ever more attractive (and mechanical substitution for reasonably complicated tasks implies development of AI/uploads).
0HughRistik15y
Yes, increasing intelligence would increase the variance of quality of outcomes for humanity. And the hope is that intelligence would also increase the mean quality of outcome, such that the expected value would be higher.

I'm not sure intelligence enhancement alone is sufficient. It'd be better to first do rationality enhancement and then intelligence enhancement. Of course that's also much harder to implement but who said it would be easy?

It sounds like you think intelligence enhancement would result in rationality enhancement. I'm inclined to agree that there is a modest correlation but doubt that it's enough to warrant your conclusion.

0Roko15y
All things considered, it seems that giving rationality training is much less likely to work than just telling people that if they take a pill it will make them smarter (and therefore richer).
0wuwei15y
I suspect you aren't sufficiently taking into account the magnitude of people's irrationality and the non-monotonicity of rationality's rewards. I agree that intelligence enhancement would have greater overall effects than rationality enhancement, but rationality's effects will be more careful and targeted -- and therefore more likely to work as existential risk mitigation.
4Roko15y
I agree that a world where everyone had good critical thinking skills would be much safer. But getting there is super-tough. Learning is something most people HATE. Rationality - especially stuff involving probability theory, logic, statistics and some basic evolutionary science - requires IQ 100 as a basic prerequisite in my estimation. I will discuss the ways we could get to a rational world, but this post is about merely a more intelligent world.
0Mike Bishop15y
Could you elaborate on the shape of the rewards to rationality?
1gwern15y
This was covered in some LW posts a while ago (which I cannot be arsed to look up and link); the paradigmatic example in those posts, I think, was a LWer who used to be a theist and had a theist girlfriend, but reading OB/LW material convinced him of the irrationality of belief in God. Then his girlfriend left his hell-bound hide for greener pastures, and his life is in general poorer than when he started reading OB/LW and striving to be more rational. The suggestion is that rationality/irrationality is like a U: you can be well-off as a bible-thumper, and well-off as a stone-cold Bayesian atheist, but the middle is unhappy.
2Eliezer Yudkowsky15y
I'm not sure this is a fair statement. He did say he wouldn't go back if he had the choice.
0wuwei15y
Increases in rationality can sometimes lead with some regularity to decreasing knowledge or utility (hopefully only temporarily and in limited domains).