All of Arenamontanus's Comments + Replies

We do not assume mirrors. As you say, there are big limits due to conservation of étendue. We are assuming (if I remember right) photovoltaic conversion into electricity and/or microwave beams received by rectennas. Now, all that conversion back and forth induces losses, but they are not orders of magnitude large.

In the years since we wrote that paper I have become much more fond of solar thermal conversion (use the whole spectrum rather than just part of it), and lightweight statite-style foil Dyson swarms rather than heavier collectors. The solar thermal... (read more)

3weverka
The conservation of étendue is merely a particular version of the second law of thermodynamics. Now, you are trying to invoke a multistep photovoltaic/microwave/rectenna method of concentrating energy, but you are still violating the second law of thermodynamics. If one could concentrate the energy as you propose, one could build a perpetual motion machine.

It seems to me that the real issue is rational weighing of reference classes when using multiple models. I want to assign them weights so that they form a good ensemble to build my forecasting distribution from, and these weights should ideally reflect my prior of them being relevant and good, model complexity, and perhaps that their biases are countered by other reference classes. In the computationally best of all possible world I go down the branching rabbit hole and also make probabilistic estimates of the weights. I could also wing it.

The problem is t... (read more)

2Davidmanheim
I don't think that weights are the right answer - not that they aren't better than nothing, but as the Tesla case shows, the actual answer is having a useful model with which to apply reference classes. For example, once you have a model of stock prices as random walks, the useful priors are over the volatility rather than price, or rather, the difference between implied options volatility and post-hoc realized volatility for the stock, and other similar stocks. (And if your model is stochastic volatility with jumps, you want priors over the inputs to that.) At that point, you can usefully use the reference classes, and which one to use isn't nearly as critical. In general, I strongly expect that in "difficult" domains, causal understanding combined with outside view and reference classes will outperform simply using "better" reference classes naively.

I have been baking for a long time, but it took a surprisingly long while to get to this practical "not a ritual" stage. My problem was that I approached it as an academic subject: an expert tells you what you need to know when you ask, and then you try it. But the people around me knew how to bake in a practical, non-theoretical sense. So while my mother would immediately tell me how to fix a too runny batter and the importance of quickly working a pie dough, she could not explain why that worked in terms that I could understand. Much frustratio... (read more)

Awesome find! I really like the paper.

I had been looking at Fisher information myself during the weekend, noting that it might be a way of estimating uncertainty in the estimation using the Cramer-Rao bound (but quickly finding that the algebra got the better of me; it *might* be analytically solvable, but messy work).
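The Cramér-Rao idea works out simply in the easiest case, a single Bernoulli success rate. This is a toy model of my choosing, not the actual prediction-judgment data:

```python
import math

def bernoulli_fisher_info(p: float) -> float:
    """Fisher information of a single Bernoulli(p) observation."""
    return 1.0 / (p * (1.0 - p))

def cramer_rao_bound(p: float, n: int) -> float:
    """Cramer-Rao lower bound on the variance of any unbiased
    estimator of p from n independent trials."""
    return 1.0 / (n * bernoulli_fisher_info(p))

# E.g. 100 predictions judged correct at a true rate of 0.7: no unbiased
# estimator of that rate can beat a standard error of about 0.046.
se_bound = math.sqrt(cramer_rao_bound(0.7, 100))
```

The messy algebra only appears once the model for the judgments is richer than a single rate.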

I tried doing a PCA of the judgments, to see if there was any pattern in how the predictions were judged. However, the variance of the principal components did not decline fast. The first component explains just 14% of the variance, the next ones 11%, 9%, 8%... It is not like there are some very dominant low-dimensional or clustering explanation for the pattern of good or bad predictions.
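The contrast being drawn can be illustrated with synthetic data (toy matrices of my own, not the judgment data): a matrix dominated by a single pattern front-loads its explained variance into the first component, while unstructured data spreads it out roughly evenly, as in the result described above.

```python
import numpy as np

def explained_variance_ratios(X: np.ndarray) -> np.ndarray:
    """Fraction of total variance captured by each principal component."""
    Xc = X - X.mean(axis=0)                  # centre the columns
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values, descending
    return s**2 / np.sum(s**2)

rng = np.random.default_rng(0)
# Near-rank-1 data: every row is a noisy copy of one pattern.
pattern = rng.normal(size=20)
structured = np.outer(rng.normal(size=100), pattern) + 0.1 * rng.normal(size=(100, 20))
# Unstructured data: independent noise, no dominant direction.
noise = rng.normal(size=(100, 20))

r_structured = explained_variance_ratios(structured)  # first ratio near 1
r_noise = explained_variance_ratios(noise)            # ratios decline slowly
```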

No clear patterns when I plotted the predictions in PCA-space: https://www.dropbox.com/s/1jvhzcn6ngsw67a/kurzweilpredict2019.png?dl=0 (In this plot colour denotes mean a... (read more)

7Stuart_Armstrong
Plot visualised:

Another nice example of how this is a known result but not presented in the academic literature:

https://constancecrozier.com/2020/04/16/forecasting-s-curves-is-hard/

The fundamental problem is not even distinguishing exponential from logistic: even if you *know* it is logistic, the parameters that you typically care about (inflection point location and asymptote) are badly behaved until after the inflection point. As pointed out in the related twitter thread, you gain little information about the latter two in the early phase and only information about the ... (read more)
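The point about the asymptote can be made concrete without any curve fitting: two logistic curves whose asymptotes differ a hundredfold can agree to within a few percent over the whole pre-inflection phase (illustrative parameters of my choosing):

```python
import math

def logistic(t: float, K: float, r: float, t0: float) -> float:
    """Logistic curve with asymptote K, growth rate r, inflection at t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Asymptotes differ by a factor of 100; shifting the second inflection
# point by ln(100)/r makes the early exponential phases coincide.
r = 1.0
curve_small = lambda t: logistic(t, 1.0, r, 10.0)
curve_big = lambda t: logistic(t, 100.0, r, 10.0 + math.log(100.0) / r)

# Relative disagreement over the early phase stays under a few percent...
early = [abs(curve_big(t) / curve_small(t) - 1.0) for t in range(0, 7)]
# ...yet the final values differ by a factor of 100.
```

So early-phase data, even noiseless, pins down almost nothing about where the curve saturates.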

4Stuart_Armstrong
Did a minor edit to reflect this.

I think the argument can be reformulated like this: space has very large absolute amounts of some resources - matter, energy, distance (distance is a kind of resource useful for isolation/safety). The average density of these resources is very low (solar in space is within an order of magnitude of solar on Earth) and for matter it is often low-grade (Earth's geophysics has created convenient ores). Hence matter and energy collection will only be profitable if (1) access gets cheap, (2) one can use automated collection with a very low marginal cost - p... (read more)

2Stuart_Armstrong
I agree with that reformulation. Weren't the early British and French colonies in North America driven by geopolitics rather than economics?

Overall, typographic innovations, like all typography, are better the less they stand out while still doing their work. In somewhat academic text with references and notation, subscripting appears to blend right in. I suspect the strength of the proposal is that one can flexibly apply it for readers and tone: sometimes it makes sense to say "I~2020~ thought", sometimes "I thought in 2020".

I am seriously planning to use it for inflation adjustment in my book, and may (publisher and test-readers willing) apply it more broadly in the text.

3gwern
Yes, this relies heavily on the fact that subscripts are small/compact and can borrow meaning from their STEM uses. Doing it as superscripts, for example, probably wouldn't work as well, because we don't use superscripts for this sort of thing & already use superscripts heavily for other things like footnotes, while some entirely new symbol or layout is asking to fail & would make it harder to fall back to natural language. (If you did it as, say, a third column, or used some sort of 2-column layout like in some formal languages.) How are you doing inflation adjustment? I mocked up a bunch of possibilities and I wasn't satisfied with any of them. If you suppress one of the years, you risk confusing the reader given that it's a new convention, but if you provide all the variables, it ensures comprehension but is busy & intrusive.
Answer by Arenamontanus
190

Looking back at our paper, I think the weakest points are (1) we handwave the accelerator a bit too much (I now think laser launching is the way to go), and (2) we also handwave the retro-rockets (it is hard to scale down nuclear rockets; I think a detachable laser retro-rocket is better now). I am less concerned about planetary disassembly and building destination infrastructure: this is standard extrapolation of automation, robotics and APM.

However, our paper mostly deals with sending a civilization's seeds everywhere, it does not deal with near ter... (read more)

I have not seen any papers about it, but did look around a bit while writing the paper.

However, a colleague and I analysed laser acceleration and it looks even better. Especially since one can use non-rigid lens systems to enable longer boosting. We developed the idea a fair bit but have not written it up yet.

I would suspect laser is the way to go.

Another domain may be aviation. In the US, it took 23 years from the Wright brothers in 1903 to the Air Commerce Act of 1926.

Wikipedia: "In the early years of the 20th century aviation in America was not regulated. There were frequent accidents, during the pre-war exhibition era (1910–16) and especially during the barnstorming decade of the 1920s. Many aviation leaders of the time believed that federal regulation was necessary to give the public confidence in the safety of air transportation. Opponents of this view included those who distrusted governme... (read more)

0Lumifer
The world spins faster now. The consumer drones are coming of age right before our eyes and I doubt it will take 20 years for their regulations to stabilize.

S. Jay Olson's work on expanding civilizations is very relevant here, e.g. https://arxiv.org/abs/1608.07522 and https://arxiv.org/abs/1512.01521 That work suggests that even non-hidden civilizations will be fairly close to their light front.

Now, the METI application: if this scenario is true, then sending messages so that the expanding civilization notices us might be risky if they can quieten down and silently englobe or surprise us. (Surprise is likely more effective than englobement, since spamming the sky with quiet relativistic probes is hard to stop)... (read more)

It would be neat to actually make an implementation of this to show sceptics. It seems to be within the reach of an MSc project or so. The hard part is representing 2-5.

5gwern
Since this is a Gridworld model, if you used Reinforce.js, you could demonstrate it in-browser, both with tabular Q-learning but also with some other algorithms like Deep Q-learning. It looks like if you already know JS, it shouldn't be hard at all to implement this problem... (Incidentally, I think the easiest way to 'fix' the surveillance camera is to add a second conditional to the termination condition: simply terminate on line of sight being obstructed or a block being pushed into the hole.)
0Stuart_Armstrong
I would suggest modelling it as "B outputs 'down' -> B goes down iff B active", and similarly for other directions (up, left, and right), "A output 'sleep' -> B inactive", and "A sees block in lower right: output 'sleep'" or something like that.
2Stuart_Armstrong
Why, Anders, thank you for volunteering! ;-)

I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He has some evidence from the study of firms, which is a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, they tend to benefit from playing somewhat nicely with the other entities.

The problem is that while this is a nice argument, would we want to bet... (read more)

I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.

As far as I have noticed, there are few if any voices in the academic/nearby AI safety community that promote slowing AI research as the best (or even a good) option. People talking about relinquishment or slowing seem to be far outside the main discourse, typically people who have only a passing acquaintance with the topic or a broader technology scepticism.

The best antidote is to start thinking about the details of how one would actually go about it: that generally shows why differential development is sensible.

I recently gave a talk at an academic science fiction conference about whether sf is useful for thinking about the ethics of cognitive enhancement. I think some of the conclusions are applicable to point 9 too:

(1) Bioethics can work in a "prophetic" and a "regulatory" mode. The first is big picture, proactive and open-ended, dealing with the overall aims we ought to have, possibilities, and values. It is open for speculation. The regulatory mode is about ethical governance of current or near-term practices. Ethicists formulate guidelin... (read more)

Well, 70 years of 1/37 annual risk still has about a 15% chance of showing zero wars. Could happen. (Since we are talking about smaller ones rather than WWIII, anthropics doesn't distort the probabilities measurably.)
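The arithmetic behind that estimate, assuming an independent 1/37 chance of at least one such war in each year:

```python
# Probability of seeing zero wars in 70 years if each year
# independently carries a 1/37 chance of at least one war.
p_zero_wars = (1 - 1 / 37) ** 70
# Comes out near 0.15: even a real, constant risk is quite likely
# to leave a 70-year clean record.
```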

One could buy a Pinker improvement scenario and yet be concerned about a heavy tail due to nuclear or bio warfare of existential importance. The median cases might decline and the rate of events go down, yet the tail get nastier.

5Stuart_Armstrong
Indeed. This is not a proof of the "long peace", just showing the paper doesn't disprove it.

This is incidentally another way of explaining the effect. Consider the standard diagram of the joint probability density and how it relates to correlation. Now take a bite out of the upper right corner of big X and big Y events: unless the joint density started in a really strange shape this will tend to make the correlation negative.

Sarunas
120

This is known as Berkson's paradox and it is ubiquitous. A lot of people have written about it and its implications, e.g. Yvain (underlying reasons why anti-correlations arise are very similar).

It is pretty cute. I did a few Matlab runs with power-law distributed hazards, and the effect holds up well: http://aleph.se/andart2/uncategorized/anthropic-negatives/

Neat. The minimal example would be if each risk had 50% chance of happening: then the observable correlation coefficient would be -0.5 (not -1, since there is 1/3 chance to get neither risk). If the chance of no disaster happening is N/(N+2), then the correlation will be -1/(N+1).
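That minimal example can be checked by direct enumeration. This is a sketch under the stated setup: two independent Bernoulli risks, with "both disasters occurred" as the no-observers case we condition away:

```python
from itertools import product

def survivor_correlation(p: float) -> float:
    """Correlation between two independent Bernoulli(p) risks,
    conditioned on the survivors' evidence that not both occurred."""
    outcomes = []  # (x, y, probability) over surviving worlds
    for x, y in product((0, 1), repeat=2):
        if x == 1 and y == 1:
            continue  # both disasters: no observers left
        prob = (p if x else 1 - p) * (p if y else 1 - p)
        outcomes.append((x, y, prob))
    total = sum(pr for _, _, pr in outcomes)
    ex = sum(x * pr for x, _, pr in outcomes) / total
    ey = sum(y * pr for _, y, pr in outcomes) / total
    exy = sum(x * y * pr for x, y, pr in outcomes) / total
    varx = ex - ex**2  # Bernoulli: E[X^2] = E[X]
    vary = ey - ey**2
    return (exy - ex * ey) / (varx * vary) ** 0.5

# p = 1/2 reproduces the -0.5 above; p = 1/(N+1) gives -1/(N+1).
```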

It is interesting to note that many insurance copula methods are used to make size-dependent correlations, but these are nearly always of the type of stronger positive correlations in the tail. This suggests - unsurprisingly - that insurance does not encounter much anthropic risk.

4Stuart_Armstrong
When I read this, my first reaction was "I have to show this comment to Anders" ^_^

In some journals there is a text box with up to four take home message sentences summarizing what the paper gives us. It is even easier to skim than the abstract, and typically stated in easy (for the discipline) language. I quite like it, although one should recognize that many papers have official conclusions that are a bit at variance with the actual content (or just a biased glass half-full/half-empty interpretation).

The standard formula you are typically taught in science is IMRaD: Introduction, Methods, Results, and Discussion. This of course mainly works for papers that are experimental, but I have always found it a useful zeroth iteration for structure when writing reviews and more philosophical papers: (1) explain what it is about, why it is important, and what others have done. (2) explain how the problem is or can be studied/solved. (3) explain what this tells us. (4) explain what this new knowledge means in the large, the limitations of what we have done and le... (read more)

1buybuydandavis
The OP wrote: Ugh! The vomitous mass of facts and details. I can't stand articles like that. A little quote starts ringing through my mind "When you talk like this, I can't help but wonder, do you have a point?" This is closer to what I would advise. Start with motivating the reader by identifying a known problem and your contribution to the solution for it. Let him know what's in the pot of gold at the end of the rainbow, so that he might want to get there. Up front, tell him the payoff of reading the paper. Then he might be motivated to continue reading. Then describe the path you'll be taking him, so that he can track the progress to that pot of gold. The path should include a formulation of the problem, a description of current approaches, a description of your own approach, a comparison of the basic approaches of each, a comparison of the performance of each, and a summary of what was found in the pot of gold and how we found it. The history of the problem and its solutions is something you might add in a longer paper. I can't stand articles that leave me wondering where they're going and why. It goes beyond motivating with a payoff to simply being able to follow what is being presented. If I don't know where we're going and why, it's very hard for me to follow and evaluate the paper. If you're not going to give me a map, at least identify a purpose.

Actually, when I did my calculations my appreciation of Szilard increased. He was playing a very clever game.

Basically, in order to make a cobalt bomb you need 50 tons of neutrons absorbed into cobalt. The only way of doing that requires a humongous hydrogen bomb. Note when Szilard did his talk: before the official announcement of the hydrogen bomb. The people who could point out the problem with the design would be revealing quite sensitive nuclear secrets if they said anything - the neutron yield of hydrogen bombs was very closely guarded, and was only e... (read more)

0turchin
Thank you for the clarification of your position. I think you need not move the bomb to the stratosphere. Smith, in Doomsday Men, estimated that a doomsday cobalt bomb would have to weigh about as much as the battleship Missouri, that is 70,000 tons. So you could detonate it in place, and the energy of the explosion would carry the isotopes to the upper atmosphere. Also, if we go into the technical details of global radiological contamination, I think it would be better to use not only cobalt but other isotopes. Gold has been discussed as another one. But the best might be some kind of heavy gas like radon, because (as I think) it does not dissolve in the sea but tends to stay in the lower atmosphere. This is not a fact, just my opinion about making a nuclear doomsday device more effective; and while I think this particular opinion is wrong, someone who really wanted to build such a device could find ways to make it much more effective by choosing different isotopes for the blanket of the bomb.

Given that overconfidence is one of the big causes of bad policy, maybe a world without Hitler would have worse policies if Stuart's guesses at the end were true. It would possibly be overconfident about niceness, negotiations, democracy and supra-national institutions. On the other hand, it might be more cautious about developing nuclear weapons. So maybe it would be more vulnerable to nasty totalitarian surprises, but have slightly better safety against nuclear GCRs.

As a non-historian I don't know how to properly judge historical what-ifs well: not only... (read more)

It seems that the bargaining for mu will be dependent on your priors about what games will be played. That might help fix the initial mu-bargaining.

I think this is very needed. When reviewing singularity models for a paper I wrote, I could not find many readily citable references for certain areas that I know exist as "folklore". I don't like mentioning such ideas because it makes it look (to outsiders) as if I have come up with them, and the insiders would likely think I was trying to steal credit.

There are whole fields like friendly AI theory that need a big review. Both to actually gather what has been understood, and in order to make it accessible to outsiders so that the community thinking ... (read more)

The way to an authoritative paper is not just to have the right co-authors but mainly having very good arguments, cover previous research well and ensure that it is out early in an emerging field. That way it will get cited and used. In fact, one strong reason to write this paper now is that if you don't do it, somebody else (and perhaps much worse) will do it.

Actually, if you do the experiment a number of times and always get suspicious hindrances, then you have good empirical evidence that something anthropic is going on... and that you have likely destroyed yourself in a lot of universes.

-4Zachary_Kurtz
False, actually. If you do the experiment a number of times and always get "suspicious" hindrances, then all you have is a lot of confirmation bias if you assume that the reason is anthropic. Confirmation can't provide definitive empirical proof; only disconfirmation can. This is especially true when your underlying assumption is unobservable, like multiverse theory.

The linked article is problematic. There is a pretty agreed on correlation between IQ and income (the image obscures this). In the case of wealth the article claims that there is a non-linear relationship that makes really smart people have a low wealth level. But this is due to the author fitting a third degree polynomial to the data! I am pretty convinced it is a case of overfitting. See my critique post for more details.

There is one study that demonstrated that among top 1% SAT scorers investigated some years after testing, the upper quartile produces about twice the number of patents as the lower one (and about 6 times the average, if I remember right). That seems to imply that having more really top performers might produce more useful goods even if the vast majority of them never invent anything great.

Even a tiny shift upwards of everybody's IQ has a pretty impressive multiplicative effect at the high end.
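That multiplicative effect at the tail is easy to quantify for a normal distribution (illustrative numbers of my choosing: mean 100, SD 15):

```python
import math

def fraction_above(threshold: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Fraction of a normal population scoring above a threshold."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2.0)))

# A 3-point shift in the mean (a fifth of a standard deviation)...
before = fraction_above(145, mean=100)
after = fraction_above(145, mean=103)
# ...nearly doubles the fraction of the population above IQ 145.
ratio = after / before
```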

Interpersonal skills are more important for job success than IQ... (read more)

0PhilGoetz
IIRC, the SAT doesn't have enough questions to distinguish an upper 1/4 of 1%. At least, the reported scores don't go higher than "99th percentile".
4gwern
This could just reflect winner-take-all dynamics. Only a few people can get into Harvard. Only a few people can become tenured professors, only a few mentored by major figures, only a few access to resources etc. Success builds on success; if you have a patent, it's easier to get another. A small difference at the beginning (your 'upper quartile') can snowball. I would bet that being in the upper quartile is only weakly correlated with being smarter than the rest of that 1%. No organized tests like college admissions uses straight IQ, but they do use SAT scores. That says something, I think.

I would be happy. People at the low end of the intelligence scale have on average pretty bad lives (higher risks of accidents, illness, crime, bad school outcomes, less income and lower life satisfaction), so on purely utilitarian grounds it would be good. But their inefficiency and costs also reduce the overall economy and cost a lot of tax money directly or indirectly. Hence I would be better off with them smarter - it might reduce my competitive advantage a bit, but I think the faster economic growth would balance that. A lot of our market value resides in our unique skills rather than general skills anyway.

The definition of illness is one of the perennials in the philosophy of medicine. Robert Freitas has a nice list in the first chapter of Nanomedicine ( http://www.nanomedicine.com/NMI/1.2.2.htm ) which is by no means exhaustive.

In practice, the typical "down-on-the-surgery-floor" approach is to judge whether a condition impairs "normal functioning". This is relative to everyday life and the kind of life the patient tries to live - and of course contains a lot of subjective judgements. Another good rule of thumb is that illness impairs ... (read more)

Sad news, but a very brave and positive response. If I ever end up in a comparable situation I wish I can handle it with this level of poise.

It is worth noting that people are far more flexible in what constitutes a life worth living than most normals believe. Brickman, Coates and Janoff-Bulman (1978) famously argued that individuals who had become paraplegic or quadriplegic within the previous year reported only slightly lower levels of life satisfaction than healthy individuals (and lottery winners also converged on their setpoint). This is particularl... (read more)

I think my most valuable skill is my ability to build models of problems and systems. Not necessarily great and complete models, but at least something that encapsulates a bit of what seems to be going on and produces output that can be compared with the system. A few iterations of modelling/comparison/correction and I have usually at least learned something useful. It works both for napkin calculations or software simulations. It is a great tool for understanding many systems or checking intuitions.

Others have mentioned the skill of "letting go"... (read more)

When I visited Beijing a few years back, I could not access Wikipedia due to censorship. This made me aware of how often I unconsciously checked things on the site - the annoyance of not getting the page made me note a previously unseen offloading habit.

I expect that many offloading methods work like this. We do not notice that we use them, and that adds to their usefulness. They do not waste our attention or cognition. But it also means that we are less likely to critically examine the activity. Is the information reliable? Are we paying an acceptable pr... (read more)

One bias that I think is common among smart, academically minded people like us is that the value of intelligence is overestimated. I certainly think we have some pretty good objective reasons to believe intelligence is good, but we also add biases because we are a self-selected group with a high "need for cognition" trait, in a social environment that rewards cleverness of a particular kind. In the population at large the desire for more IQ is noticeably lower (and I get far more spam about Viagra than Modafinil!).

If I were on the Hypothetical... (read more)

This is why papers like H. Rindermann, Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty, Intelligence, v36 n4 p306-322 Jul-Aug 2008 are relevant. This one looks at lagged data, trying to infer how much effect schooling, GDP and IQ at time t1 affects schooling, GDP and IQ at time t2.

The bane of this type of study is of course the raw scores - how much cognitive ability is actually measured by school scores, surveys, IQ tests or whatever means are used - and whether average... (read more)

0Roko
Thanks Anders. It occurs to me at this point that having a personal Anders to back you up with relevant references when in a tight spot is a significant cognitive enhancement.

In many debates about cognition enhancement the claim is that it would be bad, because it would produce compounding effects - the rich would use it to get richer, producing a more unequal society. This claim hinges on the assumption that there would be an economic or social threshold to enhancer use, and that it would produce effects that were strongly in favour of just the individual taking the drug.

I think there is good reason to suspect that enhancement has positive externalities - lower costs due to stupidity, individual benefits that produce tax money... (read more)

1JulianMorrison
There's a historical IQ enhancer we can use to look for this effect: food.

The national/regional IQ literature is messy, because there are so many possible (and even likely) feedback loops between wealth, schooling, nutrition, IQ and GDP. Not to mention the rather emotional views of many people on the topic, as well as the lousy quality of some popular datasets. Lots of clever statistical methods have been used, and IQ seems to retain a fair chunk of explanatory weight even after other factors have been taken into account. Some papers have even looked at staggered data to see if IQ works as a predictor of future good effects, whi... (read more)

Charles H. Hillman, Kirk I. Erickson & Arthur F. Kramer, Be smart, exercise your heart: exercise effects on brain and cognition, Nature Reviews Neuroscience 9, 58-65 (January 2008) especially suggest aerobic fitness training as being important.

The Klingberg group in Sweden have done somewhat similar experiments, with positive results in children with or without ADHD. See their publications: http://www.klingberglab.se/pub.html

0SoullessAutomaton
Yes, I found their work while crawling cites as I mentioned earlier. They seemed to deal with improving working memory capacity, as opposed to fluid intelligence. These may be related, but aren't the same thing. Did I overlook any publications that were about intelligence instead?

I found a PowerPoint from Kevin Warwick by googling for "Reading/Swatting -6" that included the data, but only loose references to the studies. I'll email him and ask.

0timtyler
Also, Kevin presents his results here: http://video.google.com/videoplay?docid=-8080230062336457573 1:35:00.

Even small doses of glucose can apparently have significant effects. I have some papers in my library arguing that the memory-enhancing effects of adrenaline (which doesn't cross the blood brain barrier) are mediated by the glucose increase it causes. One of them demonstrated that a glucose-mimetic molecule also acted as an enhancer. Overall, the data seems pretty convincing that getting a suitable dose of glucose is enhancing, but the effect has an inverted-U curve - there is an individual and task dependent optimal level.

Overall, drug responses are very in... (read more)

The quick answer is that most stimulants make animals and people rely on well-learned stimulus-response patterns rather than considering the situation and figuring out an appropriate response, and often make them impulsive - when something partially looks like a situation where you should do "A", the A response is hard to resist. A classic case was the US Air Force friendly-fire incident blamed on dexamphetamine. This is where the improved response inhibition of modafinil comes in. See

Turner DC, Clark L, Dowson J, Robbins TW, Sahakian BJ. Modafinil improves ... (read more)

I think this is a good idea, although SNPs might be overdoing it (for now; soon it will be cheap enough to sequence the whole genome and run whatever tests we like). There is a dearth of data on cognitive enhancers in real settings, and a real need to see what actually works for who and for what.

What I would like to see is volunteers testing themselves on a number of dimensions including IQ, working memory span, big 5 personality, ideally a bunch of biomarkers. In particular it would be good if we could get neurotransmitter levels, but to my knowledge th... (read more)

(this is a rough sketch based on my research, which involves reviewing cognition enhancement literature)

Improving cognitive abilities can be done in a variety of ways, from exercise to drugs to computer games to asking clever people. The core question one should always ask is: what is my bottleneck? Usually there are a few faculties or traits that limit us the most, and these are the ones that ought to be dealt with first. Getting a better memory is useless if the real problem is lack of attention or the wrong priorities.

Training working memory using suit... (read more)

2RHollerith
Will Arenamontanus or someone else please elaborate on the problem with amphetamines not shared with modafinil? To tell me that amphetamines "impair considered choice" is not enough to inform a session with a search engine. I am aware that amphetamines can impair certain kinds of judgement, but was told that that happens only at doses high enough to cause euphoria. Thanks.

As I remarked in another comment, exercise has documented effects. It is rational to do not just for health but for cognition (so why don't I exercise?)

Well, why don't you? And everyone else who complains about their "somehow" not exercising. It's a common complaint, even here on LW, where one might expect people to have already risen above such elementary failures of rationality.

This is not a rhetorical question. I speak as someone who does exercise, as a matter of course, every day, and have done for my entire adult life. (Before then, I wasn'... (read more)

4taw
What would be the mechanism of action of sugar for a healthy individual? Blood glucose levels are kept in a pretty narrow band, so eating sugar generates an insulin spike, and unless you just exercised and have depleted muscle glycogen storage it gets converted straight into fat. Insulin spikes also cause sleepiness. Modafinil works extremely badly on me - it masks lack of sleep well enough, but it makes my mental performance extremely low, and makes me very irritable and unfriendly. Basically I get all the side effects of sleep deprivation except I'm not aware of needing sleep. I have mixed experience with caffeine and amphetamine-like drugs. They seem to be useful for tiredness and focus enhancement to a degree.

Exercise has demonstrated good effects on memory and a bunch of other mental stats; the cause appears to be the release of neural growth factors (and likely better circulation and general health).

0SoullessAutomaton
Do you have a citation with more details on this, or at least recall what kind of exercise? i.e., low-intensity endurance, high-intensity strength building, cardiovascular improvement, &c.?
-1CannibalSmith
Sooo, pumping iron makes me smarter?

Yes, in many places nutrition is a low-hanging fruit. My own favorite example is iodine supplementation, http://www.practicalethicsnews.com/practicalethics/2008/12/the-perfect-cog.html but vitamins, long-chained fatty acids and simply enough nutrients to allow full development are also pretty good. There is some debate of how much of the Flynn effect of increasing IQ scores is due to nutrition (probably not all, but likely a good chunk). It is an achievable way of enhancing people without triggering the normal anti-enhancement opinions.

The main problem is ... (read more)

I have tried to research the economic benefits of cognition enhancement, and they are quite possibly substantial. But I think Roko is right about the wider political ramifications.

One relevant reference may be: H. Rindermann, Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty, Intelligence, v36 n4 p306-322 Jul-Aug 2008, which argues (using cross-lagged data) that education and cognitive ability have bigger positive effects on democracy, rule of law and political liberty than GDP. There ar... (read more)

More intelligence means bigger scope for action, and more ability to get desired outcomes. Whether more intelligence increases risk depends on the distribution of accidentally bad outcomes in the new scope (and how many old bad outcomes can be avoided), and whether people will do malign things. On average very few people seem to be malign, so the main issue is likely more the issue of new risks.

Looking at the great deliberate human-made disasters of the past suggests that they were often more of a systemic nature (societies allowing nasty people or social... (read more)

0steven0461
One possibility I have in mind is if current rationalist ideas need a certain amount of time to slosh around and pervade the population before technology (fed by intelligence) grows enough for them to really start mattering.