Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Anders_Sandberg 22 September 2008 05:19:00PM 2 points

I did a calculation here:
and concluded that I would start to believe there was something to the universe-destroying scenario after about 30 clear, uncorrelated mishaps (even when taking a certain probability of foul play into account).
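Since the link to the original calculation is missing, here is a hedged sketch of how such an update could run. The prior (one in a million) and the chance of explaining any single mishap innocently (63%, covering foul play and coincidence) are illustrative guesses of mine, not the numbers from the lost calculation; they just show how a figure around 30 can fall out of a simple Bayesian odds update.

```python
def mishaps_to_convince(prior, safe_likelihood, threshold=0.5):
    """Count clear, uncorrelated mishaps needed before the posterior for
    the danger hypothesis exceeds the threshold. Each mishap is certain
    under the danger hypothesis (likelihood 1) but has probability
    safe_likelihood under the safe hypothesis, so every mishap multiplies
    the odds by the Bayes factor 1/safe_likelihood."""
    odds = prior / (1.0 - prior)          # prior odds for danger
    n = 0
    while odds / (1.0 + odds) < threshold:
        odds *= 1.0 / safe_likelihood     # Bayes factor per mishap
        n += 1
    return n

# Illustrative numbers only: a one-in-a-million prior and a 63% chance
# of an innocent explanation per mishap put the tipping point at 30.
print(mishaps_to_convince(prior=1e-6, safe_likelihood=0.63))
```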

Comment author: Anders_Sandberg 22 September 2008 04:05:18PM 3 points

I like Roko's suggestion that we should look at how many doomsayers actually predicted a danger (and how *early*). We should also look at how many dangers occurred with no prediction at all (the Cameroon lake eruptions come to mind).

Overall, the human error rate is pretty high: http://panko.shidler.hawaii.edu/HumanErr/ Getting the error rate under 0.5% per statement/action seems very unlikely, unless one deliberately puts it into a system that *forces* several iterations of checking and correction (Panko's data suggests that error checking typically finds about 80% of the errors). For scientific papers/arguments, one bad paper per thousand is probably a conservative estimate. (My friend Mikael claimed the number of erroneous maths papers is far below this level because of the peculiarities of the field, but I wonder how many orders of magnitude that can buy.)
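The arithmetic behind "forced iterations of checking" is a simple geometric decay: if each pass catches a fixed fraction of the remaining errors, the residual rate shrinks multiplicatively. A minimal sketch, where the 5% raw error rate is an assumed Panko-style starting point and the 80% catch rate comes from the comment above:

```python
def checking_rounds_needed(base_rate, catch_rate, target):
    """Number of independent checking passes needed to push the residual
    error rate below the target, assuming each pass catches a fixed
    fraction of the errors that remain (80% caught leaves 20% behind)."""
    rate, rounds = base_rate, 0
    while rate > target:
        rate *= 1.0 - catch_rate
        rounds += 1
    return rounds

# Illustrative: a 5% raw error rate with passes that catch 80% of
# remaining errors needs two passes to get under 0.5% per statement.
print(checking_rounds_needed(0.05, 0.80, 0.005))
```

The independence assumption is optimistic; correlated blind spots between checkers would slow the decay.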

At least to me this seems to suggest that in the absence of any other evidence, assigning a prior probability much less than 1/1000 to any event we regard as extremely unlikely is overconfident. Of course, as soon as we have a bit of evidence (cosmic rays, knowledge of physics) we can start using smaller priors. But uninformative priors are always going to be odd and silly.

Comment author: Anders_Sandberg 24 June 2008 06:55:00PM 0 points

A new report (Steven B. Giddings and Michelangelo M. Mangano, Astrophysical implications of hypothetical stable TeV-scale black holes, arXiv:0806.3381) does a much better job at dealing with the black hole risk than the old "report" Eliezer rightly slammed. It doesn't rely on Hawking radiation (though it has a pretty nice section showing why Hawking radiation is very likely) but instead calculates how well black holes can be captured by planets, white dwarfs and neutron stars (based on, AFAIK, well-understood physics, aside from the multidimensional gravity one has to assume in order to get the threat in the first place). The derivation does not assume that Eddington luminosity slows accretion, and it does a good job of examining how fast black holes can be slowed; it turns out that white dwarfs and neutron stars are good at slowing them. This is used to show that dangerously fast planetary accretion rates are incompatible with the observed lifetimes of white dwarfs and neutron stars.

The best argument for Hawking radiation IMHO is that particle physics is time-reversible, so if there exist particle collisions producing black holes there ought to exist black holes decaying into particles.

Comment author: Anders_Sandberg 19 October 2007 01:13:16AM 0 points

Unless this is a hoax or she does a Leary, we will have her around for a long time. Maybe one day she will even grow up. But seriously, I think Eli is right. In a way, given that I consider cryonics likely to be worthwhile, she has demonstrated that she might be more mature than I am.

To get back to the topic of this blog, cryonics and cognitive biases is a fine subject. There are a lot of biases to go around here, on all sides.

Comment author: Anders_Sandberg 18 October 2007 09:03:44AM 3 points

"If intelligence is an ability to act in the world, if it refer to some external reality, and if this reality is almost infinitely malleable, then intelligence cannot be purely innate or genetic."

This misses the No Free Lunch theorems, which state that no learning system outperforms any other in general. Yes, full human intelligence, AI superintelligence, earthworms and selecting actions at random are all just as good. The trick is "in general", since that covers an infinity of patternless possible worlds. Worlds with (to us) learnable and understandable patterns are a minuscule minority.

Clearly intelligence needs input from an external world. But it has been shaped by millions of years of evolution within a particular kind of world, and there is quite a bit of information in our genes about how to make a brain that can process this kind of world. Beings born with perfectly general brains will not learn how to deal with the world until it is too late, compared to beings with more specialised brains. This is actually a source of our biases, since the built-in shortcuts that reduce learning time may not be perfectly aligned with the real world, or with the new world we currently inhabit.

Conversely, it should not be strange that there is variation in the genes that enable our brains to form, and that this produces different biases, different levels of adaptivity and different "styles" of brain. Just think of trying to set the optimal learning rate, discount rate and exploration rate of reinforcement learning agents.
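To make the hyperparameter analogy concrete, here is a toy epsilon-greedy agent on a two-armed bandit (a minimal sketch; the payout probabilities, learning rate and exploration rates are arbitrary choices of mine). With no exploration the agent can lock in on the worse arm; with too much it wastes pulls on it. Which setting is "optimal" depends entirely on the world the agent is born into, which is the point about genetic variation.

```python
import random

def run_bandit(epsilon, alpha, steps=1000, seed=0):
    """Epsilon-greedy agent on a 2-armed bandit; returns total reward.
    epsilon = exploration rate, alpha = learning rate."""
    rng = random.Random(seed)
    probs = [0.3, 0.7]                  # true payout probabilities per arm
    q = [0.0, 0.0]                      # estimated arm values
    total = 0
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(2)        # explore: pick a random arm
        else:
            a = max(range(2), key=lambda i: q[i])   # exploit best estimate
        r = 1 if rng.random() < probs[a] else 0
        q[a] += alpha * (r - q[a])      # incremental value update
        total += r
    return total

# epsilon=0.0 never discovers the better arm; epsilon=0.5 over-explores.
for eps in (0.0, 0.1, 0.5):
    print(eps, run_bandit(epsilon=eps, alpha=0.1))
```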

I agree with Watson that it would be very surprising if intelligence-related genes were perfectly equally distributed. At the same time, there are a lot of traits that are surprisingly equally distributed, and the interplay between genetics, environment, schooling, nutrition, rich and complex societies etc. is intricate and accounts for a lot. We honestly do not understand it or its limits at present.

Comment author: Anders_Sandberg 16 October 2007 11:31:00PM 1 point

People have apparently argued for a 300 to 30,000 year storage limit due to free radicals produced by cosmic rays, but the uncertainty is pretty big. Cosmic rays and background radiation are likely not as much of a problem as carbon-14 and potassium-40 atoms anyway, not to mention the freezing damage. http://www.cryonics.org/1chapter2.html has a bit of discussion of this. The quick way of estimating the damage is to assume it is time-compressed, so that the accumulated yearly dose is treated as an acute dose.
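The time-compression estimate is just multiplication; a minimal sketch, where the ~2.4 mSv/yr world-average background dose rate is my assumed figure, and the internal carbon-14 and potassium-40 contribution mentioned above is ignored:

```python
def accumulated_acute_dose_sv(years, yearly_dose_msv=2.4):
    """Time-compression estimate: treat the background dose accumulated
    over the whole storage period as a single acute dose, in sieverts.
    The 2.4 mSv/yr default is an assumed world-average background rate."""
    return years * yearly_dose_msv / 1000.0

# The proposed 300 to 30,000 year limits then correspond to acute
# doses of roughly 0.7 to 70 Sv under these assumptions.
print(accumulated_acute_dose_sv(300))
print(accumulated_acute_dose_sv(30000))
```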

Comment author: Anders_Sandberg 16 October 2007 11:38:44AM 3 points

I think Kaj has a good point. In a current paper I'm discussing the Fermi paradox and the possibility of self-replicating interstellar killing machines. Should I mention Saberhagen's berserkers? In this case my choice was pretty easy, since beyond the basic concept his novels don't contain that much of actual relevance to my paper, so I just credit him with the concept and move on.

The example of _Metamorphosis of Prime Intellect_ seems deeper, since it would be an example of something that can be described entirely theoretically but becomes more vivid and clearly understandable in the light of a fictional example. But I suspect the problem here is the vividness: it would produce a bias towards increased risk estimates for that particular problem as a side effect of making the problem itself clearer. Sometimes that might be worth it, especially if the analysis is strong enough to rein in wild risk estimates, but quite often it might be counterproductive.

There is also a variant of absurdity bias in referring to sf: many people tend to regard the whole argument as sf if there is an sf reference in it. I noticed that some listeners to my talk on berserkers indeed did not take the issue of whether there are civilization-killers out there very seriously, while they might be concerned about other "normal" existential risks (and of course, many existential risks are regarded as sf in the first place).

Maybe a rule of thumb is to limit fiction references to cases where 1) they say something directly relevant, 2) there is a valid reason for crediting them, and 3) the biasing effects do not reduce the ability to think rationally about the argument too much.

Comment author: Anders_Sandberg 16 October 2007 09:41:16AM 5 points

Another reason people overvalue science fiction is the availability bias from the authors who got things right. Jules Verne had a fairly accurate travel time from the Earth to the Moon, Clarke predicted/invented geostationary satellites, and John Brunner predicted computer worms. But of course this leaves out all the space pirates using slide rules for astrogation (while their robots serve rum), rays from unknown parts of the electromagnetic spectrum, and gravity-shielding cavorite. There is a vast number of quite erroneous predictions.

I have collected a list of sf stories involving cognition enhancement. They are all over the place in terms of plausibility, and I was honestly surprised by how few useful ideas about the impact of enhancement they contained. Maybe it is easier to figure out the impact of spaceflight. I think the list might be useful as a list of things we might want to invent and of common tropes surrounding enhancement, rather than as a start for an analysis of what might actually happen.

Still, sf might be useful in the same sense that ordinary novels are: creating scenarios and showing more or less possible actions or ways of relating to events. There are a few studies showing that reading ordinary novels improves empathy, and perhaps sf might improve "future empathy", our ability to consider situations far away from our here-and-now situation.

Comment author: Anders_Sandberg 15 October 2007 11:09:52AM 1 point

I think the "death gives meaning to life" meme is a great example of "standard wisdom". It is apparently paradoxical (the right form to be "deep"), and it provides a comfortable consolation for a nasty situation. But I have seldom seen any deep defense of it in the bioethical literature. Even people who strongly support it, and who ought to work very hard to demonstrate to fellow philosophers that it is a true statement, seem content to just rattle it off as self-evident (or to imply that people not feeling it in their guts are simply superficial).

Being a hopeless empiricist, I would like to check whether people today feel life is less meaningful than a century ago, and whether people in countries with short life expectancy feel more meaning than people in countries with long life expectancy. I'm pretty certain the latter is not true, and the former looks iffy (hard to check, with lots of confounders like changed social and cultural values). I did some statistics on the current state, http://www.aleph.se/andart/archives/2006/12/a_long_and_happy_life.html and found no link between longer life and ennui, at least on a national level.

Comment author: Anders_Sandberg 14 October 2007 08:18:27PM 28 points

I have played with the idea of writing a "wisdom generator" program for a long time. A lot of "wise" statements seem to follow a small set of formulaic rules, and it would not be too hard to make a program that randomly generated wise sayings. A typical rule is to create a paradox ("Seek freedom and become captive of your desires. Seek discipline and find your liberty") or just use a nice chiasm or reversal ("The heart of a fool is in his mouth, but the mouth of the wise man is in his heart"). This seems to fit in with your theory: the structure given by the form is enough to trigger recognition that a wise saying will now arrive. If the conclusion is weird or unfamiliar, so much the better.
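A minimal sketch of such a generator, built from just the two rules named above (the paradox and the chiasm); the vocabulary and the exact templates are my own placeholders, not any existing program:

```python
import random

# Small abstract-noun vocabulary; any suitably weighty words will do.
NOUNS = ["freedom", "discipline", "silence", "desire", "wisdom", "folly"]

def paradox(rng):
    # Rule 1: a command that undercuts itself ("Seek X ... captive of Y").
    a, b = rng.sample(NOUNS, 2)
    return f"Seek {a} and become captive of your {b}."

def chiasm(rng):
    # Rule 2: a mirror-image reversal ("the X of the fool is in his Y...").
    a, b = rng.sample(NOUNS, 2)
    return (f"The {a} of the fool lies in his {b}, "
            f"but the {b} of the wise lies in his {a}.")

def wise_saying(seed=None):
    """Pick a rule at random and fill it with random nouns."""
    rng = random.Random(seed)
    return rng.choice([paradox, chiasm])(rng)

print(wise_saying(seed=42))
```

The structure alone does most of the work: any output has the cadence of a proverb, whether or not it means anything.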

Currently reading Raymond Smullyan's _The Tao is Silent_, and I'm struck by how much less wise taoism seems when it is clearly explained.
