All of Simon_Jester's Comments + Replies

Re: Nanotech That's exactly my point: if nanotech performs as advertised by its starriest-eyed advocates, then interstellar colonization can be done with small payloads and energy is cheap enough that they can be launched easily. That is a very big "if," and not one we can shrug off or assume in advance as the underlying principle of all our models.

What if nanotech turns out to have many of the same limits as its closest natural analogue, biological cells? Biotech is great for doing chemistry, but not so great for assembling industrial machinery ... (read more)

The way you put it does seem to disparage biologists, yes. The biologists are doing work that is qualitatively different from what physicists do, and that produces results the physicists never will (without the aforementioned thousand tons of computronium, at least). In a very real sense, biologists are exploring an entirely different ideaspace from the one the physicists live in. No amount of investigation into physics in isolation would have given us the theory of evolution, for instance.

And weirdly, I'm not a biologist; I'm an apprentice physicist. I still recognize that they're doing something I'm not, rather than something that I might get around to by just doing enough physics to make their results obvious.

This is profoundly misleading. Physicists already have a good handle on how the things biological systems are made of work, but it's a moot point because trying to explain the details of how living things operate in terms of subatomic particles is a waste of time. Unless you've got a thousand tons of computronium tucked away in your back pocket, you're never going to be able to produce useful results in biology purely by using the results of physics.

Therefore, the actual study of biology is largely separate from physics, except for the very indirect route ... (read more)

1Gavin
The ultimate goal of physics is to break things down until we discover the simplest, most basic rules that govern the universe. The goals of biology do not lead down what you call the "indirect route." As you state, biology abstracts away the low-level physics and tries to understand the extremely complicated interactions that take place at a higher level. Biology attempts to classify and understand all of the species, their systems, their subsystems, their biochemistry, and their interspecies and environmental interactions. The possible sum total of biological knowledge is an essentially limitless dataset, what I might call the "Almanac of Life." I'm not quite sure where you think we disagree. I don't see anything in our two posts that's contradictory--unless you find the use of the word "Almanac" disparaging to biologists? I hope it's clear that it wasn't a literal use -- biology clearly isn't a yearly book of tabular data, so perhaps the simile is inapt.

I wouldn't have assigned much of a prior probability to either of those common sociobiological beliefs, myself. It would hardly surprise me if they were both complete nonsense.

So what do you mean when you say that these beliefs are "standard" or "widely held?" Obviously, I am not a representative sample of the population, so I may have no opinion on a widely held belief. But I'm not aware of strong evidence that these beliefs are widely held, or at any rate are more widely held than the evidence would warrant.

Or, with tongue firmly in cheek, I claim that I'm presenting counterevidence for the common belief that [insert proposition here] is a common belief...

The catch is that complex models are also usually very wrong. Most possible models of reality are wrong, because there is an infinite legion of models and only one reality. And if you try too hard to create a perfectly nuanced and detailed model, because you fear your bias in favor of simple mathematical models, there's a risk. You can fall prey to the opposing bias: the temptation to add an epicycle to your model instead of rethinking your premises. As one of the wiser teachers of one of my wiser teachers said, you can always come up with a function tha... (read more)

From a social psych standpoint, it's very interesting: why do people come up with something, then fail to use it in ways that we would consider obvious and beneficial?

I think a lot of it is hidden infrastructure we don't see, both mental and physical. People need tools to build things, and tools to come up with new ideas: the rules of logic and mathematics may describe the universe, but they are themselves mental tools. Go back to Hellenic civilization and you find a lot of the raw materials for the Industrial Revolution, what was missing? There are a lot ... (read more)

1PhilGoetz
Well, coal was missing... slaves may have been a big factor; it's probably not coincidental that industrialization started in England and the northeast US and, AFAIK, didn't spread to the US south until after the civil war - but somebody should fact check this. (BTW, I'd love to see an alternate history in which slavery is gotten rid of by economic incentives and government subsidization of the development of mechanized agriculture. Well, I say I'd love to, but it would probably be as exciting as an Ayn Rand novel.) ... but yes, ways of thinking were probably what was lacking. One important way of thinking was that, for a very long time before the 18th century, change was seen as bad. The word "innovator" was usually preceded by the word "rash". There was a great chain of being with peasants at the bottom, God at the top, and the King up near the top; and anybody who wanted to change things was a dangerous revolutionary. The very idea that things could improve here on Earth was vaguely heretical. The idea that economies could grow was not fully in place. I think it's also not coincidental that the industrial revolution didn't start until Adam Smith's ideas replaced mercantilist thought. Pre-Smith, people assumed that the total amount of wealth on Earth was fixed.

I know of no confirmed historical evidence of wheelbarrows being used until around the time of the Peloponnesian War in Greece, and as I understand it they subsequently vanished in the Greco-Roman world for roughly 1600 years until being reintroduced in the Middle Ages. Likewise, wheelbarrows are not evident in Chinese history until the first or second century AD.

So wheelbarrows are an application of wheels, but they're a much later application of the technology, one that did not arise historically for two to four millennia after the invention of the two o... (read more)

1thomblake
Excellent points, but I think: and: are not inconsistent, and are both true.

To make this calculation in a MWI multiverse, you still have to place a zero (or extremely small negative) value on all the branches where you die and take most or all of your species with you. You don't experience them, so they don't matter, right? That's a specialized form of a general question which amounts to "does the universe go away when I'm not looking at it?"

If one can make rational decisions about a universe that doesn't contain oneself in it (and life insurance policies, high-level decorations for valor, and the like suggest this is p... (read more)

A machine-phase civilization might still find (3a) or (3b) an issue depending on whether nanotech pans out. We think it will, but we don't really know, and a lot of technologies turn out to be profoundly less capable than the optimists expect them to be in their infancy. Science fiction authors in the '40s and '50s were predicting that atomic power sources would be strongly miniaturized (amusingly, more so than computing devices); that never happened and it looks like the minimum size for a reasonably safe nuclear reactor really is a large piece of industr... (read more)

1randallsquared
Sorry for taking so long on this; I forgot to check back using a browser that can see red envelopes (I usually read lesswrong with elinks). I think if nanotech does what its greatest enthusiasts expect, the minimum size of the industrial base will be in the 1-10 ton range. However, if we're assuming that level of nanotech, anyone who wants to will be able to launch their own expedition, personally, without any particular help other than downloading GNU/Spaceship. If nanotech works as advertised, it turns construction into a programming project. Also, if we limit ourselves to predictions made in the 50s with no assumptions of new science, I think we'll find that the predictions are reasonable, technically, and the main reason we don't have nuclear cars and basement reactors now involves politics. Molecular manufacturing probably cannot be contained this way, since it doesn't require a limited resource that's easy to detect from a distance. Others have defined singleton, so I assume you're happy with that. :)
-1gwern
'singleton' as I've seen it used seems to be one possible Singularity in which a single AI absorbs everyone and everything into itself in a single colossal entity. We'd probably consider it a Bad Ending.

Your aliens are assigning zero weight to their own death, as opposed to a negative weight. While this may be logical, I can certainly imagine a broadly rational intelligent species that doesn't do it.

Consider the problems with doing so. Suppose that Omega offers to give a friend of yours a wonderful life if you let him zap you out of existence. A wonderful life for a friend of yours clearly has a positive weight, but I'd expect you to say "no," because you are assigning a negative weight to death. If you assign a zero weight to an outcome involvi... (read more)
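The distinction between a zero and a negative weight on one's own death can be put in toy expected-utility terms. A minimal sketch, assuming entirely made-up utility numbers (`u_friend`, `u_death` are illustrative placeholders, not anything from the thread):

```python
# Toy expected-utility comparison for Omega's offer.
# Hypothetical utilities: +10 for the friend's wonderful life,
# and two candidate weights for one's own death (0 vs. -100).

def expected_utility(outcomes):
    """Sum p * u over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

u_friend = 10.0    # assumed utility of the friend's wonderful life
refuse = expected_utility([(1.0, 0.0)])  # status quo as the zero baseline

# If death carries weight 0, accepting dominates refusing:
accept_zero_weight = expected_utility([(1.0, u_friend + 0.0)])
print(accept_zero_weight > refuse)   # True: zero weight says "take the deal"

# With a sufficiently negative weight on death, the ordering flips:
u_death = -100.0   # assumed strongly negative weight on death
accept_neg_weight = expected_utility([(1.0, u_friend + u_death)])
print(accept_neg_weight > refuse)    # False: refusal is now preferred
```

The point of the sketch is only that the "obvious" answer of refusing Omega already presupposes a negative, not zero, term for one's own death in the utility sum.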

0Christian_Szegedy
Hmmm, it seems that most of your arguments are in plain probability-theoretic terms: what is the expected utility, assuming certain probabilities of certain outcomes. During the arguments you compute expected values. The whole point of my example was that, assuming a many-worlds view of the universe (i.e. a multiverse), using the above decision procedures is questionable at best in some situations. In the classical probability-theoretic view, you won't experience your payoff at all if you don't win. In an MWI framework, you will experience it for sure. (Of course the rest of the world sees a high chance of your losing, but why should that bother you?) I definitely would not gamble my life on 1:1000000 chances, but if Omega convinced me that MWI is definitely correct and the game is set up so that I will experience my payoff for sure in some branches of the multiverse, then it would be quite different from a simple gamble. I think it is quite an interesting case where human intuition and MWI clash, simply because it contradicts our everyday beliefs about our physical reality. I don't say that the above would be an easy decision for me, but I don't think you can just compute expected value to make the choice. The choice is really more about subjective values: which is more important to you, your subjective experience or saturating the multiverse branches with your copies? "Finally, your additional motivation raises a question in its own right: why haven't we encountered an Omega Civilization yet?" That one is easy: I purposefully made the assumption that going omega is a "high risk" process (a misleading word, but maybe the closest), meaning that even if some civilizations went omega, the outsiders (i.e. us) will see them simply wiped out in an overwhelming number of Everett branches, i.e. with very high probability for us. Therefore we have to wait for a huge number of civilizations to go omega before we experience one having attained Omega status. Still, if w

Good points. However: (1) Most of the cataclysms we see are either fairly explicable (supernovae) or seem to occur only at remote points in spacetime, early in the evolution of the universe, when the emergence of intelligent life would have been very unlikely. Quasars and gamma ray bursts cannot plausibly be industrial accidents in my opinion, and supernovae need not be industrial accidents.

(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that "99.9999% death plus 0.0001% superman" is infe... (read more)

0Christian_Szegedy
(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that "99.9999% death plus 0.0001% superman" is inferior to "continued mortal existence." You have to keep in mind that subjective experience will be 100% superman. The whole idea is that MWI is true and has been completely convincingly demonstrated by other means as well. It is as if someone told you: you enter this room, and all you will experience is leaving the room with one billion dollars. I think that is a seductive prospect. Yet another analogue: assume that you have the choice between the following two scenarios: 1) You get replicated a million times, and all the copies lead an existence in hopeless poverty. 2) You continue your current existence as a single copy, but in luxury. The absolute reference frame may be different, but the relative difference between the two outcomes is very similar to that of the above alternative. Possible additional motivation could come from knowing that if you don't do it and wait a very, very long time, the cumulative risk that you experience some other civilization going superman and obliterating you will rise above a certain threshold. For a single civilization the chance of experiencing this would be negligible, but in a universe filled with aspiring civilizations, the chance of experiencing at least one of them going omega could become a significant risk after a while.

From our perspective, this is from (2): all advanced civilizations die off in massive industrial accidents; God alone knows what they thought they were trying to accomplish.

Also, wouldn't there still be people who chose to stay behind? Unless we're talking about something that blows up entire solar systems, it would remain possible for members of the advanced civilization to opt out of this very tempting choice. And I feel confident that for at least some civilizations, there will be people who refuse to bite and say "OK, you guys go inhabit a tiny subs... (read more)

2Christian_Szegedy
I admit that your analysis is quite convincing, but I will play the devil's advocate just for fun: 1) We see a lot of cataclysmic events in our universe whose sources are at least uncertain. It is definitely a possibility that some of them could originate from super-advanced civilizations going up in flames (maybe due to accidents or deliberate effort). 2) Maybe the minority that does not approve of trickling down the narrow branch is even less inclined to witness the spectacular death of the elite and live on in a resource-exhausted section of the universe, and therefore decides to play along. 3) Even if a small risk-averse minority of the civilization is left behind, when it reaches a certain size again, a large part of it will again decide to go down the narrow path, so it won't grow significantly over time. 4) If the minority becomes extremely conservative and risk-averse (due to selection after some iterations of 3), then that necessarily means it has also lost its ambitions to colonize the galaxy; it will just stagnate along a few star systems and try to hide from other civilizations to avoid any possible conflicts, so we would have difficulty detecting them.

This is my hypothesis (3c), with an implicit overlay of (3a).

Here goes:

Alternate explanations for rarity of intelligence:

3a) Interstellar travel is prohibitively difficult. The fact that the galaxy isn't obviously awash in intelligence is a sign that FTL travel is impossible or utterly infeasible.

Barring technology indistinguishable from magic, building any kind of STL colonizer would involve a great investment of resources for a questionable return; intelligent beings might just look at the numbers and decide not to bother. At most, the typical modern civilization might send probes out to the nearest stellar neig... (read more)

0randallsquared
For a machine-phase civilization, the only one of these that seems plausible is 3c, but I can't think of any reason why no one in a given civilization would want to leave, and assuming growth of any kind, resource pressure alone will eventually drive expansion. If the need for civilization is so psychologically strong, copies can be shipped and revived only after specialized systems have built enough infrastructure to support them. It seems far more likely to me, given the emergence of multiple civilizations in a galaxy, that some technical advance inevitably destroys them. Nanomedicine malfunction or singleton seem like the best bets to me just now, which would suggest that the best defenses are spreading out and heterogeneity of technical systems.

I dunno. I mean, a lot of horror stories that are famous for being good talk about stuff that can never be and should never be, but that nonetheless (in-story) is. I think it's that sense of a comforting belief about the world being violated that makes a good horror story, even if the prior probability of that belief being wrong is low.

I think you're misreading the story. It's not an argument in favor of irrationality, it's a horror story. The catch is that it's a good horror story, directed at the rationalist community. Like most good horror stories, it plays off a specific fear of its audience.

You may be immune to the lingering dread created by looking at all those foolish happy people around you and wondering if maybe you are the one doing something wrong. Or the fear that even if you act as rationally as you can, you could still box yourself into a trap you won't be able to think you... (read more)

1Furcas
I've upvoted this comment, but I disagree. What should make this an effective horror story, as you put it, is that it's based on the very real possibility that there are people whose brains are wired in such a way that they can't be happy and rational at the same time. In order to more effectively 'scare' the reader, the author attempts to convince us that this is more than a possibility by making an argument by fictional example, the example being the main character. My beef with the story is that this example is way too unlikely to be convincing as an argument (and therefore scary as a horror story). If there are people who can't possibly be rational and happy, I'm pretty sure it's not because they're incapable of keeping their tongues under control in order to start a relationship on the right foot.
6Nominull
Agreed. I went in expecting a parable against rationality, and about halfway through I realized I was reading existential horror (the best kind). The writing isn't great and the points are made hamhandedly, but there is the core of a good story here.

Countercounterevidence for 3: what are the assumptions made by those models of interstellar colonization?

Do they assume fusion power? We don't know if industrial fusion power works economically enough to power starships. Likewise for nanotech-type von Neumann machines and other tools of space colonization.

The adjustable parameters in any model for interstellar colonization are defined by the limits of capability for a technological civilization. And we don't actually know the limits, because we haven't gotten close enough to those limits to probe them yet... (read more)

1Jonathan_Graehl
It's sticky sweet candy for the mind. Why not share it?

One thing that caught my eye is the presentation of "Universe is not filled with technical civilizations..." as data against the hypothesis of modern civilizations being probable.

It occurs to me that this could mean any of three things, only one of which indicates that modern civilizations are improbable.

1) Modern civilizations are in fact as rare as they appear to be because they are unlikely to emerge. This is the interpretation used by this article.

2) Modern civilizations collapse quickly back to a premodern state, either by fighting a v... (read more)

1Neil
There's also some assumption here that civilisations either collapse or conquer the galaxy, but that ignores another possibility - that civilisations might quickly reach a plateau technologically and in terms of size. The reason this could be the case is that civilisations must always solve their problems of growth and sustainability long before they have the technology to move beyond their home planet, and once they have done so, there ceases to be any imperative toward off-world expansion, and without ever-increasing economies of scale, technological developments taper off.
3Christian_Szegedy
Another possible resolution of the Fermi paradox, based on the many-worlds interpretation of QM: Let us assume that advanced civilizations find overwhelming evidence for the many-worlds hypothesis as the true, infallible theory of physics. Additionally, assume that there is a quantum mechanical process with a huge payoff at a very small probability: the equivalent of a cosmic lottery, where the chance of obliteration is close to 1, the chance of winning is close to zero, but the payoff is HUGE. It is like going into a room where you win a billion dollars with p = 1:1000000 and die a sudden, painless death with p = 999999:1000000. Still, since the many-worlds hypothesis is true, you will experience the winning for sure. Now imagine that at some point in its existence every very advanced civilization faces the decision to make this leap of faith in the many-worlds interpretation: start the machine that obliterates them in almost every branch of the Everett multiverse, while letting them live on in a few branches with a hugely increased amount of resources (energy/computronium/whatever). Since they know that their only subjective experience will be of getting the payoff at negligible risk, they will choose the path of trickling down into some of the much narrower Everett branches. However, to any outside civilization this means that they simply vanish from its branch of the universe with very high probability. Since every advanced civilization would face the above extremely seductive way of gaining cheap resources, the probability that two of them will share the same universe becomes infinitesimally small.
-4Christian_Szegedy
Here is another variant: If civilizations achieve a certain sophistication, they necessarily decipher the purpose of the universe and once they understand its true meaning and that they are just a superfluous side-effect, they simply commit suicide. Here is a blog entry of mine elaborating on this hypothesis: http://arachnism.blogspot.com/2009/05/spiritual-explanation-to-fermi-paradox.html
8Scott Alexander
4) There is a very easy and unavoidable way to destroy the universe (or make it inhospitable) using technology, and any technological civilization will inevitably do so at a certain pretty early point in its history. Therefore, only one technological civilization per universe ever exists, and we should not be surprised to find ourselves to be the first. 5) The Dark Lords of the Matrix are only interested in running one civilization in our particular sim.
0taw
By the law of conservation of evidence, if detecting an alien civilization would make them more likely, then not detecting them after sustained effort makes them less likely, right? Counterevidence for 2 - there are extremely few sustained reversals of either life or civilization. The Toba bottleneck seems like the most likely near-reversal, and it happened before modern civilization. You would need to postulate an extremely high likelihood of collapse if you suggest that emergence is very frequent and still civilizations aren't around. If only 90% of civilizations collapse (which seems a vastly higher proportion than we have any reason to believe), then if civilizations are likely, they should still be plentiful. Hypothesis 2 would only work if emergence is very likely and fast extinction is then nearly inevitable. After a civilization starts spreading widely across star systems, extinction seems extremely unlikely. Counterevidence for 3 - some models suggest that advanced civilizations would have spread extremely quickly across the galaxy by geological timescales. That leaves us with: * Advanced civilizations are numerous but were all created extremely recently, within the last 0.1% of the galaxy's lifetime or so (extremely unlikely, to the point that we can ignore it) * We suck at detection so much that we cannot even detect a galaxy-wide civilization (seems unlikely - do you postulate that?) * These models are really bad, and advanced civilizations tend to be constrained to spread extremely slowly (more plausible; these models have no empirical support) * 3 is false and there are few or no other advanced civilizations in the galaxy (what I find most likely), either by their not arising in the first place or by extinction. My rating of probabilities is 1 >> 3 >> 2. And yes, I'm aware existential risks are widely believed here - I don't share this belief at all.
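The "conservation of evidence" point can be checked numerically: under Bayes' rule, the prior must equal the expectation of the posterior over the possible observations, so if a detection would push the probability up, silence must push it down. A small sketch, using purely illustrative placeholder probabilities (none of these numbers come from the thread or from any actual SETI estimate):

```python
# Bayesian check of conservation of expected evidence.
# All probabilities below are hypothetical placeholders.

p_civs = 0.5                # prior: advanced civilizations are common
p_detect_given_civs = 0.3   # assumed chance a sustained search finds them if they exist
p_detect_given_none = 0.01  # assumed false-positive rate of the search

# Total probability of a detection:
p_detect = p_detect_given_civs * p_civs + p_detect_given_none * (1 - p_civs)

# Posteriors by Bayes' rule for each possible observation:
post_if_detect = p_detect_given_civs * p_civs / p_detect
post_if_silent = (1 - p_detect_given_civs) * p_civs / (1 - p_detect)

# A detection raises the probability; sustained silence lowers it:
print(post_if_detect > p_civs > post_if_silent)   # True

# Conservation: the expected posterior equals the prior exactly.
expected_posterior = post_if_detect * p_detect + post_if_silent * (1 - p_detect)
print(abs(expected_posterior - p_civs) < 1e-12)   # True
```

The identity holds for any choice of the three input probabilities, which is the formal content of "you can't expect evidence to move you in only one direction."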
1Alicorn
"Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us."