Comment author: turchin 08 December 2012 07:56:59AM 0 points [-]

He said: «However, when I calculated the necessary amount of cobalt and from that the necessary yield of the bomb I found that they were definitely in the very, very impractical range (many thousands of tons of metal, at least 960 megatons of yield).» http://www.overcomingbias.com/2009/10/self-assured-destruction.html

But that is only about 10 times more than the Tsar Bomba, and it could and should be built as a stationary device rather than a transportable bomb. A typical nuclear reactor weighs several thousand tonnes. So one cobalt bomb would be about as heavy and complex as a nuclear reactor, and is therefore feasible.

How could it happen that you believe Sandberg more than Szilard's calculations?

Comment author: Arenamontanus 08 December 2012 03:58:43PM 2 points [-]

Actually, when I did my calculations my appreciation of Szilard increased. He was playing a very clever game.

Basically, in order to make a cobalt bomb you need 50 tons of neutrons absorbed into cobalt. The only way of doing that requires a humongous hydrogen bomb. Note when Szilard did his talk: before the official announcement of the hydrogen bomb. The people who could point out the problem with the design would be revealing quite sensitive nuclear secrets if they said anything - the neutron yield of hydrogen bombs was very closely guarded, and was only eventually reverse-engineered by studies of fallout isotopes (to the great annoyance of the US, apparently).
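For concreteness, here is a rough back-of-envelope sketch of that arithmetic in Python. The 50-ton neutron figure is the one above; perfect neutron capture and one 17.6 MeV D-T fusion per neutron are simplifying assumptions of this sketch, and any real capture efficiency below 100% pushes the required yield even higher:

    # Cobalt-bomb arithmetic sketch. The 50 tons of neutrons comes from the
    # comment; perfect capture and D-T fusion neutrons are assumptions.
    NEUTRON_MASS_KG = 1.675e-27
    CO59_NUCLEUS_KG = 59 * 1.661e-27      # mass of one cobalt-59 nucleus
    J_PER_MEV = 1.602e-13
    J_PER_MEGATON = 4.184e15
    MEV_PER_DT_NEUTRON = 17.6             # energy released per fusion neutron

    neutrons = 50_000.0 / NEUTRON_MASS_KG               # "50 tons of neutrons"
    cobalt_tonnes = neutrons * CO59_NUCLEUS_KG / 1e3    # one Co-59 per capture
    yield_mt = neutrons * MEV_PER_DT_NEUTRON * J_PER_MEV / J_PER_MEGATON

    print(f"neutrons to absorb:    {neutrons:.1e}")
    print(f"cobalt-59 required:    ~{cobalt_tonnes:,.0f} tonnes")
    print(f"yield at 100% capture: ~{yield_mt:,.0f} megatons")
    # ~3,000 tonnes of cobalt and a yield in the tens of thousands of megatons
    # - hundreds of Tsar Bombas, comfortably above the "at least 960 megatons"
    # lower bound quoted earlier. Firmly impractical.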

Szilard knew that 1) he was not revealing anything secret, 2) criticising his idea required revealing secrets, and 3) the bomb was impractical, so even if somebody tried they would not get a superweapon thanks to his speech.

I think cobalt bombs can be done; but you need an Orion drive to launch them into the stratosphere. The fallout will not be even, leaving significant gaps. And due to rapid gamma absorption in sea water the oceans will be semi-safe. Just wash your fishing boat so fallout does not build up, and you have a good chance of survival.

Basically, if you want to cause an xrisk by poisoning the biosphere, you need to focus on breaking a key link rather than generic poisoning. Nukes for deliberate nuclear winter or weapons that poison the oxygen-production of the oceans are likely more effective than any fallout-bomb.

Comment author: Arenamontanus 16 January 2012 10:35:53PM 2 points [-]

Given that overconfidence is one of the big causes of bad policy, maybe a world without Hitler would have worse policies if Stuart's guesses at the end were true. It would possibly be overconfident about niceness, negotiations, democracy and supra-national institutions. On the other hand, it might be more cautious about developing nuclear weapons. So maybe it would be more vulnerable to nasty totalitarian surprises, but have slightly better safety against nuclear GCRs.

As a non-historian I don't know how to judge historical what-ifs properly: not only am I uncertain about how to analyse the counterfactual methodology itself, but I am uncertain about what historical data we would need in order to do a proper counterfactual. But looking at how different worldviews depend on particular historical events, and doing at least some estimate of how robust those events were, might indeed tell us a bit about where we might have ended up with contingent worldviews.

In my own field of human enhancement ethics it is pretty clear that some of the halo effect of Nazism and its defeat in WWII led to a very strong negative value association that is relatively arbitrary but affects current policies. Had the Nazis been doing bad sociology instead, we might now be decrying sinister social engineering while happily selecting the genes of our children. If there had been an anti-USSR WWII, the same might have happened.

Comment author: Arenamontanus 27 October 2010 01:20:49PM 0 points [-]

It seems that the bargaining for mu will be dependent on your priors about what games will be played. That might help fix the initial mu-bargaining.

Comment author: Arenamontanus 19 November 2009 06:36:18PM 4 points [-]

I think this is very much needed. When reviewing singularity models for a paper I wrote, I could not find many readily citable references to certain areas that I know exist as "folklore". I don't like mentioning such ideas because it makes it look (to outsiders) as if I had come up with them, and insiders would likely think I was trying to steal credit.

There are whole fields like friendly AI theory that need a big review. Both to actually gather what has been understood, and in order to make it accessible to outsiders so that the community thinking about it can grow and deepen.

Whether this is a crowdsourcable project is another matter, but at the very least crowdsourcing raw input for later expert paper construction sounds like a good idea. I would expect that eventually it would need to boil down to one or two main authors doing most of the job, and a set of co-authors for speciality skills and prestige. But since this community is less driven by publish-or-perish and more by rationality concerns I expect ordering of co-authors may be less important.

Comment author: RobinHanson 18 November 2009 02:25:31PM *  5 points [-]

A problem with this proposal is whether this paper can be seen as authoritative. A critic might worry that if they study and respond to this paper, they will be told it does not represent the best pro-Singularity arguments. So the paper would need to be endorsed enough to gain sufficient status to become worth criticizing.

Comment author: Arenamontanus 19 November 2009 06:25:14PM 6 points [-]

The way to an authoritative paper is not just having the right co-authors but mainly having very good arguments, covering previous research well, and ensuring that it is out early in an emerging field. That way it will get cited and used. In fact, one strong reason to write this paper now is that if you don't do it, somebody else will (and perhaps do it much worse).

Comment author: Zachary_Kurtz 06 November 2009 02:55:05PM 0 points [-]

We discussed this at the last NYC OB/LW meetup. I'm becoming more in love with the "anthropic speculations." Of course, it's impossible to prove empirically until the universe has already been destroyed.

Comment author: Arenamontanus 09 November 2009 05:00:59PM 0 points [-]

Actually, if you do the experiment a number of times and always get suspicious hindrances, then you have good empirical evidence that something anthropic is going on... and that you have likely destroyed yourself in a lot of universes.
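A toy Bayes-factor sketch shows the flavour of that evidence (both probabilities below are purely illustrative assumptions, not measurements of anything):

    # Toy sketch: how quickly repeated "suspicious hindrances" come to favour
    # an anthropic explanation. Both probabilities are made-up illustrations.
    p_hindrance_mundane = 0.1    # chance a run fails from ordinary bad luck
    p_hindrance_anthropic = 1.0  # surviving observers only ever see failed runs
    n_runs = 5

    bayes_factor = (p_hindrance_anthropic / p_hindrance_mundane) ** n_runs
    print(f"Bayes factor after {n_runs} consecutive hindrances: {bayes_factor:,.0f}")
    # Five runs that all happen to be blocked already favour the anthropic
    # story by 100,000 : 1 over mundane bad luck, under these assumed numbers.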

Comment author: AdeleneDawner 22 October 2009 12:42:35AM 4 points [-]

I'm reasonably sure that high IQ (i.e. over 140) is not particularly well correlated with outstanding achievement. I am almost certain that extremely high IQ's are not a prerequisite for extraordinary achievement, though there may be some specific fields where this does not hold true (say, theoretical physics).

I remember reading that the optimal IQ for success in life is actually about 130, but can't find a source for that now. I did find this though, which seems to support your claim.

I think that having the general population's IQ raised would have such wide-ranging effects that looking at society as it is now isn't a very good indicator of what that would be like. Society as it is now isn't set up to support people with very high IQs (or even get the most out of the IQs that people have to begin with), so I'm pretty sure there would be changes to all kinds of things to fix that.

Comment author: Arenamontanus 22 October 2009 01:02:06AM 1 point [-]

The linked article is problematic. There is a pretty well agreed-upon correlation between IQ and income (the image obscures this). In the case of wealth, the article claims that there is a non-linear relationship that makes really smart people end up with low wealth. But this is due to the author fitting a third-degree polynomial to the data! I am pretty convinced it is a case of overfitting. See my critique post for more details: http://www.aleph.se/andart/archives/2007/04/cubic_terms_make_smart_people_bankrupt.html
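To see why a cubic term invites overfitting, here is a small synthetic sketch (my own construction, not the article's data): wealth is generated as a genuinely linear function of IQ plus noise, and the cubic fit's prediction at the sparse high-IQ end swings far more across bootstrap resamples than the linear fit's:

    import numpy as np

    # Synthetic data: a genuinely linear IQ-wealth relation plus noise.
    rng = np.random.default_rng(0)
    iq = rng.normal(100, 15, 300)
    wealth = 2.0 * iq + rng.normal(0, 60, 300)

    hi_iq = 145  # the sparse high end, where the article's cubic dips
    lin_preds, cub_preds = [], []
    for _ in range(200):  # bootstrap resamples
        idx = rng.integers(0, len(iq), len(iq))
        x, y = iq[idx], wealth[idx]
        lin_preds.append(np.polyval(np.polyfit(x, y, 1), hi_iq))
        cub_preds.append(np.polyval(np.polyfit(x, y, 3), hi_iq))

    print("spread of linear fit at IQ 145:", round(float(np.std(lin_preds)), 1))
    print("spread of cubic fit at IQ 145: ", round(float(np.std(cub_preds)), 1))
    # The cubic's predictions at the thinly populated high end vary far more
    # than the linear fit's: the extra terms chase noise - that is the
    # overfitting worry.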

Comment author: Psychohistorian 22 October 2009 12:25:59AM 4 points [-]

I'm reasonably sure that high IQ (i.e. over 140) is not particularly well correlated with outstanding achievement. I am almost certain that extremely high IQ's are not a prerequisite for extraordinary achievement, though there may be some specific fields where this does not hold true (say, theoretical physics).

If someone with an IQ of 180 has a thousand times the chance of making some incredible breakthrough compared to someone with an IQ of 140, then shifting from 1% of people having an IQ over 140 to 25%+ having an IQ over 140 would still probably generate a great many breakthroughs.

Comment author: Arenamontanus 22 October 2009 12:56:55AM 4 points [-]

There is one study showing that among top 1% SAT scorers followed up some years after testing, the upper quartile produced about twice as many patents as the lower quartile (and about 6 times the average, if I remember right). That seems to imply that having more really top performers might produce more useful goods even if the vast majority of them never invent anything great.

Even a tiny shift upwards of everybody's IQ has a pretty impressive multiplicative effect at the high end.
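A quick illustration using the usual normal IQ model (mean 100, SD 15; the shift sizes are arbitrary choices for this sketch):

    from scipy.stats import norm

    mean, sd, threshold = 100.0, 15.0, 140.0
    for shift in (0, 5, 10, 30):   # hypothetical across-the-board IQ gains
        frac = norm.sf(threshold, loc=mean + shift, scale=sd)
        print(f"shift +{shift:>2}: {frac:6.2%} of people above IQ {threshold:.0f}")
    # A +5 point shift multiplies the fraction above 140 by roughly 2.5;
    # a +30 point shift takes it from under 1% to about a quarter of everyone.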

Interpersonal skills are more important for job success than IQ, but I doubt great social skills will produce goods useful across society in the same way an invention does. A high-EQ person probably just makes the local social network better, which has a relatively limited overall effect.

Comment author: PhilGoetz 21 October 2009 10:35:30PM *  4 points [-]

Suppose you are a smart person, and someone developed a drug that makes anyone who takes it 30 IQ points smarter. The FDA rules that this drug can be given only to people with an IQ at least 20 points below yours.

Would you be happy about this development?

Comment author: Arenamontanus 22 October 2009 12:50:28AM 3 points [-]

I would be happy. People at the low end of the intelligence scale have, on average, pretty bad lives (higher risks of accidents, illness, crime, bad school outcomes, lower income and lower life satisfaction), so on purely utilitarian grounds it would be good. But their inefficiency and costs also drag down the overall economy and consume a lot of tax money, directly or indirectly. Hence I would be better off with them smarter - it might reduce my competitive advantage a bit, but I think the faster economic growth would balance that. A lot of our market value resides in our unique skills rather than general skills anyway.

Comment author: Arenamontanus 21 October 2009 01:06:51PM 3 points [-]

The definition of illness is one of the perennials in the philosophy of medicine. Robert Freitas has a nice list in the first chapter of Nanomedicine ( http://www.nanomedicine.com/NMI/1.2.2.htm ) which is by no means exhaustive.

In practice, the typical "down-on-the-surgery-floor" approach is to judge whether a condition impairs "normal functioning". This is relative to everyday life and the kind of life the patient tries to live - and of course contains a lot of subjective judgements. Another good rule of thumb is that illness impairs the flexibility of someone - they have fewer possibilities.

Personally I prefer Freitas's volitional model, where we give strong weight to the desires and goals of the patient. If I want to fly and could somehow be cured of weight, then that should be allowed. However, seeing medical interventions as allowed is not the same as claiming they have to be supported by everybody else (positive and negative rights and all that). There is much truth in saying that illness is what a society thinks we should altruistically pay to treat in others, while health improvements beyond that tend to be up to the individual.

The problem is that the altruism pool is limited (quite possibly for murky evolutionary-psychology reasons - consider Robin's "Showing That You Care" paper) and shared resources are limited, while the space of possible medical interventions is growing and human wants are of course nearly unbounded. Hence there is a constant struggle among stakeholders to bring their conditions into the realm of altruism and obligatory treatment.

A further problem is that we currently also roughly identify the category of illness treatments with allowable treatments (with some exceptions like preventive medicine, cosmetic surgery etc.) and non-illness treatments with disallowed ones (doping, enhancement). This might be a reaction aimed at reining in costs and the illness category, but it also reflects concerns that non-altruistic medicine would be socially bad. I have strong suspicions this is misguided and actually decreases human happiness.

In the end, the goal of medicine should always be human flourishing, not health. Health is instrumental for living a good life, but what kind of health is needed depends very much on individual life projects. I believe that in the future we are going to see much more health pluralism.
