Comment author: Christian_Szegedy 22 August 2009 02:40:56AM *  2 points [-]

I admit that your analysis is quite convincing, but will play the devil's advocate just for fun:

1) We see a lot of cataclysmic events in our universe whose sources are at least uncertain. It is definitely a possibility that some of them originate from super-advanced civilizations going up in flames (maybe due to accidents or deliberate effort).

2) Maybe the minority that does not approve of trickling down the narrow branch is even less inclined to witness the spectacular death of the elite and live on in a resource-exhausted corner of the universe, and therefore decides to play along.

3) Even if a small risk-averse minority of the civilization is left behind, once it reaches a certain size again, a large part of it will again decide to go down the narrow path, so the remnant won't grow significantly over time.

4) If the minority becomes extremely conservative and risk-averse (due to selection over some iterations of 3), then it has necessarily also lost its ambition to colonize the galaxy; it will just stagnate across a few star systems and try to hide from other civilizations to avoid any possible conflicts, so we would have difficulty detecting it.

Comment author: Simon_Jester 22 August 2009 03:21:26AM 1 point [-]

Good points. However: (1) Most of the cataclysms we see are either fairly explicable (supernovae) or seem to occur only at remote points in spacetime, early in the evolution of the universe, when the emergence of intelligent life would have been very unlikely. Quasars and gamma ray bursts cannot plausibly be industrial accidents in my opinion, and supernovae need not be industrial accidents.

(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that "99.9999% death plus 0.0001% superman" is inferior to "continued mortal existence."

(3) Again possible, but there will be a selection effect over time. Eventually, the remaining people (who, you will notice, live in a universe where people who try to ascend to godhood always die) will no longer think ascending to godhood is a good idea. Maybe the ancients were right and there really is a small chance that the ascent process works and doesn't kill you, but you have never seen it work, and you have seen your civilization nearly exterminated by the power-hungry fools who tried it the last ten times.

At what point do you decide that it's more likely that the ancients did the math wrong and the procedure just flat out does not work?

(4) The minority might have no problems with risks that do not have a track record of killing everybody. However, you have a point: a rational civilization that expects the galaxy to be heavily populated might be well advised to hide.

Comment author: Christian_Szegedy 21 August 2009 11:17:45PM *  3 points [-]

Another possible resolution of the Fermi paradox, based on the many-worlds interpretation of QM:

Let us assume that advanced civilizations find overwhelming evidence for the many-worlds hypothesis as the true, infallible theory of physics. Additionally, assume that there is a quantum mechanical process with a huge payoff at a very small probability: the equivalent of a cosmic lottery, where the chance of obliteration is close to 1, the chance of winning is close to zero, but the payoff is HUGE. It is like going into a room where you win a billion dollars with p = 1/1,000,000 and die a sudden, painless death with p = 999,999/1,000,000. Still, since the many-worlds hypothesis is true, you will experience winning for sure.
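The subjective-probability argument here is just conditional arithmetic; a minimal sketch with the example's own (purely illustrative) numbers:

```python
# Numbers from the cosmic-lottery example above (illustrative only).
p_win = 1 / 1_000_000            # chance of the huge payoff
payoff = 1_000_000_000           # $1B stands in for the "HUGE" prize
p_die = 1 - p_win                # chance of sudden, painless death

# Ordinary expected value: a terrible bet by any classical standard.
expected_value = p_win * payoff  # $1,000 in exchange for near-certain death

# Quantum-suicide accounting: condition on there being a surviving observer.
# Every branch with a survivor is a winning branch, so the subjective win
# probability is 1 regardless of how small p_win is.
subjective_p_win = p_win / p_win
print(expected_value)     # 1000.0
print(subjective_p_win)   # 1.0
```

The entire seduction of the lottery lives in the gap between those two numbers: classically you get $1,000 of expected value, subjectively you get the billion every time.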

Now imagine that at some point in its existence, every very advanced civilization faces the decision to make this leap of faith in the many-worlds interpretation: start the machine that obliterates them in almost every branch of the Everett multiverse, while letting them live on in a few branches with a hugely increased amount of resources (energy/computronium/whatever). Since they know that their only subjective experience will be of getting the payoff at a negligible risk, they will choose the path of trickling down into some of the much narrower Everett branches.

However, to any outside civilization this would mean that they simply vanish from its branch of the universe with very high probability. Since every advanced civilization would face the above extremely seductive way of gaining cheap resources, the probability that two of them will share the same universe becomes infinitesimally small.

Comment author: Simon_Jester 22 August 2009 01:33:25AM 2 points [-]

From our perspective, this is case (2): all advanced civilizations die off in massive industrial accidents; God alone knows what they thought they were trying to accomplish.

Also, wouldn't there still be people who chose to stay behind? Unless we're talking about something that blows up entire solar systems, it would remain possible for members of the advanced civilization to opt out of this very tempting choice. And I feel confident that for at least some civilizations, there will be people who refuse to bite and say "OK, you guys go inhabit a tiny subset of all universes as gods; we will stay behind and occupy all remaining universes as mortals."

If this process keeps going for a while, you end up with a residual civilization composed overwhelmingly of people who harbor strong memes against taking extremely low-probability, high-payoff risks, even when the probability arithmetic favors doing so.

For your proposal to work, it has to be an all-or-nothing thing that affects every member of the species, or affects a broad enough area that the people who aren't interested have no choice but to play along because there's no escape from the blast radius of the "might make you God, probably kills you" machine. The former is unlikely because it requires technomagic; the latter strikes me as possible only if it triggers events we could detect at long range.

Comment author: Alicorn 21 August 2009 09:04:50PM 2 points [-]

6) Faster than light travel is not physically possible, the other civilizations all originated far away, and the other civilizations are all composed of people who don't like to live in generational spaceships their entire lives.

Comment author: Simon_Jester 22 August 2009 01:26:19AM 0 points [-]

This is my hypothesis (3c), with an implicit overlay of (3a).

Comment author: Jonathan_Graehl 21 August 2009 11:19:42PM 1 point [-]

It's sticky sweet candy for the mind. Why not share it?

Comment author: Simon_Jester 22 August 2009 01:24:31AM 3 points [-]

Here goes:

Alternate explanations for rarity of intelligence:

3a) Interstellar travel is prohibitively difficult. The fact that the galaxy isn't obviously awash in intelligence is a sign that FTL travel is impossible or extremely infeasible.

Barring technology indistinguishable from magic, building any kind of STL colonizer would involve a great investment of resources for a questionable return; intelligent beings might just look at the numbers and decide not to bother. At most, the typical modern civilization might send probes out to the nearest stellar neighbors. If the cost of sending a ton of cargo to Alpha Centauri is, say, 0.0001% of your civilization's annual GDP, you're not likely to see anyone sending million-ton colony ships there. In which case intelligent life might be relatively common in the galaxy without any of it coming here; even the more ambitious cultures that actually did bother to make the trip to the nearest stars would tend to peter out over time rather than going through exponential expansion.
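As a sanity check, the implied price tag from those (assumed) figures works out like this:

```python
# Back-of-envelope using the figures assumed in the comment above.
cost_per_ton = 0.0001 / 100      # 0.0001% of annual GDP, as a fraction
colony_ship_tons = 1_000_000     # a million-ton colony ship

ship_cost = cost_per_ton * colony_ship_tons
# ~1.0: the cargo alone costs roughly a full year of the civilization's
# entire GDP, which is why small probes might go but colony ships don't.
print(ship_cost)
```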


3b) Interstellar colonization is prohibitively difficult. If sending an STL colony expedition to another star is hard, sending one with a large enough logistics base to terraform a planet will be exponentially harder.

There are something on the order of 1000 stars within 50 to 60 light years of us. Assuming more or less uniform stellar densities, if the probability of a habitable planet appearing around any given star is much less than 0.1%, it's likely that such planets will remain permanently out of reach for a sublight colony ship. In that case, spreading one's civilization throughout the galaxy depends on being able to terraform planets across interstellar distances before setting up a large population on those worlds. Even if travel across short (~10 ly) interstellar distances is not prohibitively difficult, there might still be little or no incentive to colonize the available worlds beyond one's own star system. After all, if you're going to live in a climate-controlled bunker on an uninhabitable rock where you can't step outside without being freeze-dried or boiled alive, you might as well do it somewhere closer to home.
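A quick sketch of that estimate; the per-star probabilities below are assumed placeholders for "much less than 0.1%":

```python
def p_any_habitable(p_per_star: float, n_stars: int = 1000) -> float:
    """Chance that at least one of n_stars hosts a habitable planet,
    assuming independence between stars."""
    return 1 - (1 - p_per_star) ** n_stars

# At exactly 0.1% per star, the odds within ~50-60 light years are decent...
print(p_any_habitable(1e-3))   # ~0.63
# ...but "much less than 0.1%" (say 0.001%) leaves only about a 1% chance.
print(p_any_habitable(1e-5))   # ~0.01
```

So the argument is sensitive to exactly how far below 0.1% the true figure falls: an order of magnitude either way moves "habitable planet in range" from likely to nearly hopeless.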

NOTE: This amounts to "super-difficult life," but it does not require that there are few intelligent species in the galaxy. If the emergence of life is (for lack of a better term) super-duper-difficult, or if most planets are inhospitable enough to make it impossible, then we could have many thousands of intelligent species in the galaxy without any of them being likely to reach each other.


3c) Interstellar colonization might be "psychologically" difficult. For instance, what if the next logical step in the evolution of modern civilization is an AI singularity, possibly coupled with some kind of uploading of consciousness into machines? Either way, our descendants of 200 years from now might well be, to our eyes, a civilization of robots. To a society of strong AIs, interstellar colonization is liable to look a little different. Traveling to even the nearest stars, you will be cut off from the rest of your civilization by a transmission gap on the order of 10^20 cycles just because of the lightspeed limit.*

That might sound like an even worse idea to them than spending a long lifetime in cryogenic storage and having a twenty year round trip communication cycle with Earth does to us. In which case they're likely to stay at home and come up with elaborate social activities or simulations to spend their time, because interstellar colonization is just too unpleasant to bear considering.

*Assuming roughly 1 THz computing, for relatively near stellar neighbors. This estimate is probably too low, but I need some numbers and I am nowhere near an expert on artificial intelligence or the probable limits of computer technology.
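The footnote's order of magnitude checks out; here is the estimate with assumed figures (a ~1 THz clock rate and Alpha Centauri as the "relatively near" neighbor):

```python
# Rough check of the footnoted 10^20-cycle estimate (assumed figures).
SECONDS_PER_YEAR = 3.156e7
clock_hz = 1e12            # assumed ~1 THz machine-civilization clock rate
distance_ly = 4.37         # Alpha Centauri

round_trip_seconds = 2 * distance_ly * SECONDS_PER_YEAR  # lightspeed limit
gap_cycles = round_trip_seconds * clock_hz
print(f"{gap_cycles:.1e}")  # ~2.8e20 subjective cycles per message exchange
```

Any faster clock rate only widens the subjective gap, so the footnote's "probably too low" caveat cuts in the direction of making colonization look worse, not better.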

Comment author: Furcas 21 August 2009 09:05:06PM *  1 point [-]

I've upvoted this comment, but I disagree.

What should make this an effective horror story, as you put it, is that it's based on the very real possibility that there are people whose brains are wired in such a way that they can't be happy and rational at the same time. In order to more effectively 'scare' the reader, the author attempts to convince us that this is more than a possibility by making an argument by fictional example, the example being the main character.

My beef with the story is that this example is way too unlikely to be convincing as an argument (and therefore scary as a horror story). If there are people who can't possibly be rational and happy, I'm pretty sure it's not because they're incapable of keeping their tongues under control in order to start a relationship on the right foot.

Comment author: Simon_Jester 22 August 2009 01:18:48AM 0 points [-]

I dunno. I mean, a lot of horror stories that are famous for being good talk about stuff that can never be and should never be, but that nonetheless (in-story) is. I think it's that sense of a comforting belief about the world being violated that makes a good horror story, even if the prior probability of that belief being wrong is low.

Comment author: Furcas 19 August 2009 05:27:09AM *  9 points [-]

I have to say I'm surprised by the amount of praise this story is getting.

The main character seems convinced that the difficulty she experiences in interacting pleasantly with members of the opposite sex and possibly starting a relationship with someone less rational than she is, is due to her inability to delude herself, or even to compartmentalize.

But it's not. It's due to her inability to shut up once in a while. Instead of working on changing her entire psyche, couldn't she have simply made an effort to, you know, control the way she behaves?

Epistemic rationality has nothing to do with extreme honesty towards other individuals, or with showing contempt for irrationalists, or even with feeling contempt for them. The greatest epistemic rationalist on Earth could have a happy relationship with a Young Earth Creationist; all s/he'd have to do is either refrain from criticism, or be very polite and gentle about it.

Also, I wasn't very impressed with the classification of Richard Dawkins (and those like him) as a "Type-1-and-higher retard". What he is is a good Type-1-and-higher thinker who cares about the truth, and for whom avoiding self-deception is therefore advantageous.

Comment author: Simon_Jester 21 August 2009 08:15:25PM 15 points [-]

I think you're misreading the story. It's not an argument in favor of irrationality, it's a horror story. The catch is that it's a good horror story, directed at the rationalist community. Like most good horror stories, it plays off a specific fear of its audience.

You may be immune to the lingering dread created by looking at all those foolish happy people around you and wondering if maybe you are the one doing something wrong. Or the fear that even if you act as rationally as you can, you could still box yourself into a trap you won't be able to think your way back out of. But quite a few of your peers are not so immune. I know I'm not, and that story managed to scare me pretty effectively.

The protagonist isn't an ideal rationalist, and the story isn't trying to assert that this is what the ideal rationalist does. Instead, the protagonist is an adolescent proto-rationalist, of a type many of us are familiar with, with her social instincts sucking her into a trap that a lot of us can understand well enough to dread.

And so there's a reason she thinks and acts like a Hollywood stereotype of an intelligent person: many intelligent people do, especially when they're just barely at the age of being able to really think at all. Where do you think Hollywood got the idea for the stereotype in the first place?

I submit that the reason so many average people think intelligent people act that way is that they lose social contact with the geniuses in high school, which is exactly when those geniuses do think and act like that.

For a lot of the smartest people, being socially functional is a learned skill that comes late and not easily.

Comment author: taw 21 August 2009 05:13:52PM 0 points [-]

By the law of conservation of expected evidence, if detecting an alien civilization makes them more likely, then not detecting them after sustained effort makes them less likely, right?

Counterevidence for 2 - there are extremely few sustained reversals of either life or civilization. The Toba bottleneck seems like the most likely near-reversal, and it happened before modern civilization. You would need to postulate an extremely high likelihood of collapse to explain emergence being very frequent while civilizations still aren't around. If only 90% of civilizations collapse (which seems a vastly higher proportion than we have any reason to believe), then if civilizations are likely, they should still be plentiful. Hypothesis 2 would only work if emergence is very likely and fast extinction is then nearly inevitable. Once a civilization starts spreading widely across star systems, extinction seems extremely unlikely.
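The arithmetic behind that point, with purely illustrative numbers:

```python
# Illustrative only: suppose emergence is frequent and "only" 90% collapse.
n_emerged = 10_000    # hypothetical civilizations that ever emerged
p_collapse = 0.90     # a collapse rate far above anything we can justify

survivors = round(n_emerged * (1 - p_collapse))
print(survivors)   # 1000: survivors remain plentiful, so collapse alone
                   # can't reconcile frequent emergence with a silent sky
```

To make hypothesis 2 carry the full weight of the silence, `p_collapse` would have to sit implausibly close to 1.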

Counterevidence for 3 - some models suggest that advanced civilizations would have spread extremely quickly across the galaxy on geological timescales. That leaves us with:

  • Advanced civilizations are numerous but were all created extremely recently, within the last 0.1% of the galaxy's lifetime or so (extremely unlikely, to the point that we can ignore it)
  • We are so bad at detection that we cannot even detect a galaxy-wide civilization (seems unlikely; do you postulate that?)
  • These models are really bad, and advanced civilizations tend to stay contained or spread extremely slowly (more plausible; these models have no empirical support)
  • 3 is false and there are few or no other advanced civilizations in the galaxy (which I find most likely), either because they don't arise in the first place or through extinction.

My ranking of probabilities is 1 >> 3 >> 2. And yes, I'm aware that belief in existential risks is widespread here - I don't share this belief at all.

Comment author: Simon_Jester 21 August 2009 08:03:37PM *  1 point [-]

Countercounterevidence for 3: what are the assumptions made by those models of interstellar colonization?

Do they assume fusion power? We don't know if industrial fusion power works economically enough to power starships. Likewise for nanotech-type von Neumann machines and other tools of space colonization.

The adjustable parameters in any model for interstellar colonization are defined by the limits of capability for a technological civilization. And we don't actually know the limits, because we haven't gotten close enough to those limits to probe them yet. If the future looks like the more optimistic hard science fiction authors suggest, then the galaxy should be full of intelligence and we should be able to spot the drive flares of Orion-powered ships flitting around, or the construction of Dyson spheres by the more ambitious species. We should be able to see something, at any rate.

But if the future doesn't look like that, if there's no way to build cost-effective fusion reactors and the only really worthwhile sustainable power source is solar, if there are hard limits on what nanotech is capable of that limit its industrial applications, and so on... the barrier to entry for a planetary civilization hoping to go galactic may be so high that even with thousands of intelligent species to make the attempt, none of them make it.

This ties back into the hypotheses I left out of my post for the sake of brevity; I'm now considering throwing them in to explain my reasoning a little better. But I'm still not sure I should do it without invitation, because they are on the long side.

Comment author: Simon_Jester 21 August 2009 04:56:49PM *  1 point [-]

One thing that caught my eye is the presentation of "Universe is not filled with technical civilizations..." as data against the hypothesis of modern civilizations being probable.

It occurs to me that this could mean any of three things, only one of which indicates that modern civilizations are improbable.

1) Modern civilizations are in fact as rare as they appear to be because they are unlikely to emerge. This is the interpretation used by this article.

2) Modern civilizations collapse quickly back to a premodern state, either by fighting a very destructive war, by high-probability natural disasters, by running out of critical resources, or by a cataclysmic industrial accident such as major climate change or a Gray Goo event.

This would undermine an attempt to judge the odds of modern civilizations emerging based on a small sample size. If (2) is true, the fact that we haven't seen a modern civilization doesn't mean it doesn't exist; it's more likely to mean that it didn't last long enough to appear on our metaphorical radar. All we know with high confidence is that there haven't been any modern civilizations on Earth before us, which places an upper bound on the likely range of probabilities for it to happen; Earth may be a late bloomer, but it's unlikely to be such a late bloomer that three or four civilizations would have had time to emerge before we got here.

3) The apparent rarity of modern civilizations could just be a sign that we are bad at detecting them. We know that alien civilizations haven't visited us in the historic past, that they haven't colonized Earth before we got here, and that they haven't beamed detectable transmissions at us, but those facts could quite plausibly be explained by other factors. Some hypotheses come to mind, but I removed them for the sake of brevity; they are available if anyone's interested.


Anyway, where I was going with all this: I can see a lot of alternate interpretations to explain the fact that we haven't detected evidence of modern civilizations in our galaxy, some of which would make it hard to infer anything about the likelihood of civilizations emerging from the history of our own planet. That doesn't mean I think that considering the problem isn't worthwhile, though.
