In order to greatly reduce X-risk, design self-replicating spacecraft without AGI
tl;dr: If we built a working self-replicating spacecraft, that would prove we're past the Great Filter. Therefore, certainty that we can do so would eliminate much existential risk. It is also a potentially highly visible project that would publicize reasons not to include AGI. Therefore, serious design work on a self-replicating spacecraft should have a high priority.
I'm assuming you've read Stuart_Armstrong's excellent recent article on the Great Filter. In the discussion thread for that, RussellThor observed:
if we make a simple replicator and have it successfully reach another solar system (with possibly habitable planets) then that would seem to demonstrate that the filter is behind us.
If that is obvious to you, skip to the next subheading.
The evolution from intelligent spacefaring species to producer of self-replicating spacecraft (henceforth SRS, used in the plural) is inevitable, if SRS are possible. This is simply because the matter and negentropy available in the wider universe are a staggeringly vast resource of immense value. Even species that are unlikely to ever visit and colonize other stars in the form that evolution gave them (this includes us) can make use of these resources. For example, if we could build on (or out of) empty planets supercomputers that receive computation tasks by laser beam and output results the same way, we would be economically compelled to do so, simply because those supercomputers could handle computational tasks that no computer on Earth could complete in less than the time it takes that laser beam to travel there and back. Such a supercomputer would not need to run even a weak AI to be worth more than the cost of sending the probe that builds it.
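To make the latency argument concrete, here is a back-of-the-envelope sketch. All runtimes are made-up illustrative numbers, and `offload_pays_off` is a hypothetical helper, not anything from the literature:

```python
# Back-of-the-envelope check of the latency argument: offloading a task to a
# remote supercomputer pays off only if the remote runtime plus the two-way
# light delay still beats the best achievable local runtime.

YEAR_S = 365.25 * 24 * 3600  # seconds per year; light covers 1 light-year per year

def offload_pays_off(distance_ly, local_runtime_s, remote_runtime_s):
    """True if beaming the task to a computer distance_ly away beats computing locally."""
    round_trip_s = 2 * distance_ly * YEAR_S  # laser beam there and back
    return remote_runtime_s + round_trip_s < local_runtime_s

# Proxima Centauri is ~4.2 ly away, so the round trip alone costs ~8.4 years.
# Only tasks that would occupy local machines for much longer are worth sending.
print(offload_pays_off(4.2, local_runtime_s=20 * YEAR_S, remote_runtime_s=1 * YEAR_S))  # True
print(offload_pays_off(4.2, local_runtime_s=5 * YEAR_S, remote_runtime_s=1 * YEAR_S))   # False
```

The point of the sketch is only that the break-even threshold is set by light-speed latency, not by the cost of the remote hardware.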
Without a doubt there are countless more possible uses for these, shall we say, exoresources. If Dyson bubbles or mind uploads or multistellar hypertelescopes or terraforming are possible, each of these alone creates another huge incentive to build SRS. Even mere self-replicating refineries that break up planets into more readily accessible resources for future generations to draw from would be an excellent investment. But the obvious existence of the supercomputer incentive alone is already reason enough to do it.
All the Great Filter debate boils down to the question of how improbable our existence really is. If we're probable, many intelligent species capable of very basic space travel should exist. If we're not, they shouldn't. We know that within a large fraction of our light cone, no species appears to be so capable of space travel that it has sent out SRS. So the only way we could be probable is if there's a Great Filter ahead of us, stopping us (and everyone else capable of basic space travel) from becoming the kind of species that sends out SRS. If we became such a species, we'd know we're past the Filter. We still wouldn't know which of the conditions that allowed for our existence is the improbable one, but we'd know that, when put together, they multiply into some very small probability of our existence, and a very small probability of any comparable species existing in a large section of our light cone.
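The "multiply into some very small probability" step can be illustrated with a toy calculation. Every number below is an invented placeholder, not a claim about any actual filter step:

```python
# Toy Great Filter arithmetic: several individually unremarkable conditions
# multiply into a tiny joint probability. All step probabilities are made up.
filter_steps = {
    "abiogenesis": 1e-3,
    "complex cells": 1e-2,
    "multicellular life": 1e-1,
    "intelligence": 1e-2,
    "technological civilization": 1e-1,
}

p_joint = 1.0
for step, p in filter_steps.items():
    p_joint *= p  # independence assumed for illustration

print(f"joint probability per candidate planet: {p_joint:.1e}")  # ~1.0e-09
```

Knowing we're past the Filter wouldn't tell us which factor is the small one, only that the product is small.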
LW users generally seem to think SRS are doable and that means we're quite improbable, i.e. the Filter is behind us. But lots of people are less sure, and even more people haven't thought about it. The original formulation of the Drake equation included a lifespan of civilizations partly to account for the intuition that a Great Filter type event could be coming in the future. We could be more sure than we are now, and make a lot of people much more sure than they are now, about our position in reference to that Filter. And that'd have some interesting consequences.
How knowing we're past the Great Filter reduces X-risk
The single largest X-risk we've successfully eliminated is the impact of an asteroid large enough to destroy us entirely. And we didn't do that by moving any asteroids; we simply mapped all of the big ones. We now know there's no asteroid that is both large enough to kill us off and coming soon enough that we can't do anything about it. Hindsight bias tells us this was never a big threat - but look ten years back and you'll find The Big Asteroid on every list of global catastrophic risks, usually near the top. We eliminated that risk simply by observation and deduction, by finding out it did not exist rather than removing it.
Obviously a working SRS that gives humanity outposts in other solar systems would reduce most types of X-risk. But even just knowing we could build one should decrease confidence in the ability of X-risks to take us out entirely. After all, if, as Bostrom argues, the possibility that the Filter is ahead of us increases the probability of any X-risk, then the knowledge that it is not ahead of us has to be evidence against all of them, except those that could kill a Type 3 civilization. And if, as Bostrom says in that same paper, finding life elsewhere that is closer to our stage of development is worse news than finding life further from it, then increasing the distance between us and either type of life decreases the badness of the existence of either.
Of course, we'd only be certain if we had actually built and sent such a spacecraft. But in order to gain confidence that we're past the filter, and to gain a greater lead over life possibly discovered elsewhere, a design that is agreed to be workable would go most of the way. If it is clear enough that someone with enough capital could claim incredible gains by building it, we can be sure enough that someone eventually will (e.g. Elon Musk after SpaceX's IPO around 2035), giving high confidence we've passed the filter.
I'm not sure what would happen if we could say (with more confidence than currently) that we're probably the species that's furthest ahead at least in this galaxy. But if that's true, I don't just want to believe it, I want everyone else to believe it too, because it seems like a fairly important fact. And an SRS design would help do that.
We'd be more sure we're becoming a Type 3 civilization, so we should then begin to think about what type of risk could kill such a civilization, and UFAI would probably feature more prominently on that list than it does on the current geocentric ones.
What if we find out SRS are impossible at our pre-AGI level of technology? We still wouldn't know whether an AI could do it. But even knowing our own inability would be very useful information, especially about the dangerousness of various types of X-risk.
How easily this X-risk reducing knowledge can be attained
Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design of 1980. But that paper, while impressively detailed and a great read, glosses over the exact computing abilities such a system would need, does not mention hardening against interstellar radiation, assumes fusion drives, and probably has a bunch of other problems that I'm not qualified to discover. I haven't looked at all the papers that cite it (yet), but the ones I've seen seem to agree that self-replicating spacecraft are plausible. Sandberg has some good research questions that I agree need to be answered, but he never seems to waver from his assumption that SRS are basically possible, although he's aware of the gaps in knowledge that preclude such an assumption from being safe.
There are certainly some questions that I'm not sure we can answer. For example:
- Can we build fission-powered spacecraft (let alone more speculative designs) that will survive the interstellar environment for decades or centuries?
- How can we be certain to avoid mutations that grow outside of our control, and eventually devour Earth?
- Can communication between SRS and colonies, especially software updates, be made secure enough?
- Can a finite number of probe designs (to be included on any of them) provide a vehicle for every type of journey we'd want the SRS network to make?
- Can a finite number of colony designs provide a blueprint for every source of matter and negentropy we'd want to develop?
- What is the ethical way to treat any life the SRS network might encounter?
But all of these except for the last one, and Sandberg's questions, are engineering questions and those tend to be answerable. If not, remember, we don't need to have a functioning SRS to manage X-risk, any reduction of uncertainty around their feasibility already helps. And again, the only design I could find that gives any detail at all is from a single guy writing in 1980. If we merely do better than he did (find or rule out a few of the remaining obstacles), we already help ascertain our level of X-risk. Compare the asteroid detection analogy: We couldn't be certain that we wouldn't be hit by an asteroid until we looked at all of them, but getting started with part of the search space was a very valuable thing to do anyway.
Freitas and others used to assume SRS would need to be run by some type of AGI. Sandberg says SRS without AGI, with what he calls "lower order intelligence", "might be adequate". I disagree with both assessments, and with Sandberg's giving this question less priority than, say, the study of mass drivers. Given the issues of AGI safety, a probe that works without AGI should be distinctly preferable. And (unlike an intelligent one) its computational components can be designed right now, down to the decision tree it should follow. While at it, and in order to use the publicity such a project might generate, give an argument for this design choice that highlights the AGI safety issues. A scenario where a self-replicating computer planet out there decides for itself should serve to highlight the dangers of AGI far more viscerally than conventional "self-aware desktop box" scenarios.
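As a sketch of what "a fixed decision tree instead of an AGI" could mean in practice — all state names, the replication cap, and the rules below are hypothetical placeholders, not a real probe design:

```python
# Minimal sketch of a non-AGI probe controller: a fixed, auditable priority
# list with no learning, no open-ended goals, and a hard stop. Every field
# and rule here is an invented placeholder for illustration only.

def next_action(state):
    """Return the probe's next action from a fixed priority ordering."""
    if state["copies_in_system"] >= state["replication_cap"]:
        # Replication is capped per system to limit runaway growth.
        return "launch_daughters_to_next_system"
    if state["resources_found"]:
        return "replicate"
    if state["survey_complete"]:
        # Nothing usable here: report and halt rather than improvise.
        return "report_home_and_shut_down"
    return "survey_system"

print(next_action({"copies_in_system": 0, "replication_cap": 2,
                   "resources_found": False, "survey_complete": False}))
# -> survey_system
```

The design point is that every branch can be enumerated and audited on Earth before launch, which is exactly what an AGI-driven probe would not allow.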
If we're not looking for an optimal design, but for the bare minimum necessary to know we're past the filter, that gives us somewhat relaxed design constraints. This probe wouldn't necessarily need to travel at a significant fraction of light speed, and its first generation wouldn't need to be capable of journeys beyond, say, five parsecs. It does have to be capable of interstellar travel, and of progressing to intergalactic travel at some point, say when it finds all nearby star systems to contain copies of itself. A non-interstellar probe fit to begin the self-replication process on a planet like Jupiter, refining resources and building launch facilities there, would be a necessary first step.
Baseline of my opinion on LW topics
To avoid repeatedly saying the same things, I'd like to state my opinion on a few topics I expect to be relevant to my future posts here.
You can take it as a baseline or reference for these topics. I do not plan to go into any detail here. I will not state all my reasons or sources. You may ask for separate posts if you are interested. This is really only to provide a context for my comments and posts elsewhere.
If you google me you may find some of my old (but not that far off the mark) posts about these positions, e.g. here:
http://grault.net/adjunct/index.cgi?GunnarZarncke/MyWorldView
Now my position on LW topics.
The Simulation Argument and The Great Filter
On The Simulation Argument I definitely go for
"(1) the human species is very likely to go extinct before reaching a “posthuman” stage"
Correspondingly on The Great Filter I go for failure to reach
"9. Colonization explosion".
This is not because I think that humanity is going to self-annihilate soon (though this is a possibility). Instead I hope that humanity will sooner or later come to terms with its planet. My utopia could be like that of the Pacifists (a short story in Analog 5).
Why? Because of essential complexity limits.
This falls into the same range as "It is too expensive to spread physically throughout the galaxy". I know that negative proofs about engineering are notoriously wrong - but that is currently my best guess. Put simply, one could say that the low-hanging fruit has been taken. I have empirical evidence on multiple levels to support this view.
Correspondingly there is no singularity because progress is not limited by raw thinking speed but by effective aggregate thinking speed and physical feedback.
What could prove me wrong?
If a serious discussion tore my well-prepared arguments and evidence to shreds (quite possible).
At the very high end a singularity might be possible if a way could be found to simulate physics faster than physics itself.
AI
Basically I don't have the least problem with artificial intelligence or artificial emotion being possible. Philosophical note: I don't care on what substrate my consciousness runs. Maybe I am simulated.
I think strong AI is quite possible and maybe not that far away.
But I also don't think that this will bring the singularity, because of the complexity limits mentioned above. Strong AI will speed up some cognitive tasks with compound interest - but only until the physical feedback level is reached. Or until a social feedback level is reached, if AI should be designed that way.
One temporary dystopia that I see is that cognitive tasks are out-sourced to AI and a new round of unemployment drives humans into depression.
Two approaches to AI I have tried myself:
- A simplified layered model of the brain; deep learning applied to free inputs (I cancelled this when it became clear that it was too simple and low-level, and thus computationally inefficient)
- A nested semantic graph approach with propagation of symbol patterns representing thought (only concept; not realized)
I'd really like to try a 'synthesis' of these where microstructure-of-cognition like activation patterns of multiple deep learning networks are combined with a specialized language and pragmatics structure acquisition model a la Unsupervised learning of natural languages. See my opinion on cognition below for more in this line.
What could prove me wrong?
On the low success end if it takes longer than I think it would take me given unlimited funding.
On the high end if I'm wrong with the complexity limits mentioned above.
Conquering space
Humanity might succeed at leaving the planet but at high costs.
By leaving the planet I mean becoming permanently independent of Earth, but not necessarily leaving the solar system any time soon (speculating on that is beyond my confidence interval).
I think it more likely that life, rather than humanity as such, leaves the planet - that could be
- artificial intelligence with a robotic body - think of a Curiosity rover 2.0 (most likely).
- intelligent life-forms bred for life in space - think of magpies: they are already smart, small, reproduce fast and have 3D navigation.
- actual humans in suitable protective environments with small autonomous biospheres, harvesting asteroids or Mars.
- 'cyborgs' - humans altered or bred to better deal with certain problems in space like radiation and missing gravity.
- other - including misc ideas from science fiction (least likely or latest).
For most of these (esp. those depending on breeding) I'd estimate a time-range of a few thousand years.
What could prove me wrong?
If I'm wrong on the singularity aspect too.
If I'm wrong on the timeline, I will likely be long dead in any case, except for the first option, which I expect to see in my lifetime.
Cognitive Basis of Rationality, Vagueness, Foundations of Math
How can we as humans create meaning out of noise?
How can we know truth? How does it come about that we know that 'snow is white' when snow is white?
Cognitive neuroscience and artificial learning seem to point toward two aspects:
Fuzzy learning aspect
Correlated patterns of internal and external perception are recognized (detected) via multiple specialized layered neural nets (basically). This yields qualia like 'spoon', 'fear', 'running', 'hot', 'near', 'I'. These are basically symbols, but they are vague with respect to meaning because they result from a recognition process that optimizes for matching not correctness or uniqueness.
Semantic learning aspect
The semantic part builds upon the qualia: it takes the qualia and, instead of acting directly on them (as is the normal effect in animals), finds patterns in their activation which are not related to immediate perception or action, but at most to memory. These may form new qualia/symbols.
The use of these patterns is that they allow us to capture concepts which are detached from reality (detached insofar as they do not need a stimulus connected in any way to perception).
Concepts like ('cry-sound' 'fear') or ('digitalis' 'time-forward' 'heartache') or ('snow' 'white') or - and this is probably the domain of humans: (('one' 'successor') 'two') or (('I' 'happy') ('I' 'think')).
Concepts
The interesting thing is that learning works on these concepts just as it does on the normal neural nets. Thus concepts that are reinforced by positive feedback will stabilize, and together with them the qualia they derive from (if any) will also stabilize.
For certain pure concepts the usability of the concept hinges not on any external factor (like "how does this help me survive") but on social feedback about structure and the process of the formation of the concepts themselves.
And this is where we arrive at such concepts as 'truth' or 'proposition'.
These are no longer vague - but not because they are represented differently in the brain than other concepts but because they stabilize toward maximized validity (that is stability due to absence of external factors possibly with a speed-up due to social pressure to stabilize). I have written elsewhere that everything that derives its utility not from some external use but from internal consistency could be called math.
And that is why math is so hard for some: if you never gained a sufficient core of self-consistent stabilized concepts, and/or the usefulness derives not from internal consistency but from external ("teacher's password") usefulness, then it will just not scale to more concepts. (And the reason why science works at all is that science values internal consistency so highly; there is little more dangerous to science than allowing other incentives.)
I really hope that this all makes sense. I haven't summarized this for quite some time.
A few random links that may provide some context:
http://www.blutner.de/NeuralNets/ (this is about the AI context we are talking about)
http://www.blutner.de/NeuralNets/Texts/mod_comp_by_dyn_bin_synf.pdf (research applicable to the above in particular)
http://c2.com/cgi/wiki?LeibnizianDefinitionOfConsciousness (funny description of levels of consciousness)
http://c2.com/cgi/wiki?FuzzyAndSymbolicLearning (old post by me)
http://grault.net/adjunct/index.cgi?VaguesDependingOnVagues (ditto)
Note: Details about the modelling of the semantic part are mostly in my head.
What could prove me wrong?
Well, 'wrong' is too strong a word here. This is just my model and it is not really that concrete. Probably a longer discussion with someone more experienced with AI than I am (and there should be many here) might suffice to rip this apart (provided that I'd find time to prepare my model suitably).
God and Religion
I wasn't indoctrinated as a child. My truly loving mother is a baptised Christian who lives her faith without being sanctimonious. She always hoped that I would receive my epiphany. My father has a scientifically influenced personal Christian belief.
I can imagine a God consistent with science on the one hand and on the other hand with free will, soul, afterlife, trinity and the bible (understood as a mix of non-literal word of God and history tale).
I mean, it is not that hard if you can imagine a timeless (simulation of the) universe. If you are God and have whatever plan for Earth but empathize with your creations, then it is not hard to add a few more constraints to certain aggregates called existences or 'person lives'. Constraints that realize free will in the sense of 'not subject to the whole universe plan satisfaction algorithm'.
Surely not more difficult than consistent time-travel.
And souls and afterlife should be easy to envision for any science fiction reader familiar with super intelligences.
But why? Occams razor applies.
There could be a God. And his promise could be real. And it could be a story seeded by an empathizing God - but also a 'human' God with his own inconsistencies and moods.
But it also could be that this is all a fairy tale run amok in human brains searching for explanations where there are none. A mass delusion. A fixated meme.
Which is right? It is difficult to put probabilities to stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.
I can't say that I wait for my epiphany. I know too well that my brain will happily find patterns when I let it. But I have encouraged others to pray for me.
My epiphanies - the aha feelings of clarity that I did experience - have all been about deeply connected patterns building on other such patterns building on reliable facts mostly scientific in nature.
But I haven't lost my morality. It has deepened and widened. I have become even more tolerant (I hope).
So if God does against all odds exist, I hope he will understand my doubts, weigh my good deeds and forgive me. You could tag me a godless Christian.
What could prove me wrong?
On the atheist side I could be moved a bit further by more evidence of religion being a human artifact.
On the theist side there are two possible avenues:
- If I had an unsearched-for epiphany - a real one where I can't say I was hallucinating, but e.g. a major consistent insight or a proof of God.
- If I'd be convinced that the singularity is possible. This is because I'd need to update toward being in a simulation as per Simulation argument option 3. That's because then the next likely explanation for all this god business is actually some imperfect being running the simulation.
Thus I'd like to close with this corollary to the simulation argument:
Arguments for the singularity are also (weak) arguments for theism.
Asteroids and spaceships are kinetic bombs and how to prevent catastrophe
A reality of physics, and one that doesn't get much play in science fiction, is that as soon as humanity gains space travel, anyone in the asteroid mining or space travel business will have city-busting capabilities at their fingertips.
It's there in classic sci-fi, but not so much recently.
This discussion was started in the comments to:
http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/
In the "Ethically Concerned Scientists" post, Izeinwinter commented:
I have given some thought to this specific problem - not just asteroids, but the fact that any spaceship is potentially a weapon, and as working conditions go, extended isolation does not have the best of records on the mental stability front.
Likely solutions: Full automation and one-time-pad locked command and control - This renders it a weapon as well controlled as nuclear arsenals, except with longer lead times on any strike, so even safer from a MAD perspective. (... and no fully private actor ever gets to run them. ) Or if full automation is not workable, a good deal of effort expended on maintaining crew sanity - Psyc/political officers - called something nice, fluffy, and utterly anodyne to make people forget just how much authority they have, backed up with a remote controlled self destruct. Again, one time pad com lock. It's not going to be a libertarian free for-all as industries go, more a case of "Extremely well paid, to make up for the conditions and the sword that will take your head if you crack under the pressure" Good story potential in that, though.
A great start to a discussion here.
You've considered people going loony and some general security, but it would then become a hacker war over who could break the security and gain control of the spaceships.
It doesn't address the problem of the leaders using the ships as threat weapons, since they have legitimate control, but can still make terrorist decisions.
And I'm terrified of your idea of turning spaceflight, which I see as the ultimate freedom, along the lines of Niven's Belters, into a state-controlled affair like the Soviet navy with political officers.
Now, one thing I think is a useful safety control that doesn't lead to worse problems is the destruct option. All major rockets have them right now, since an out-of-control rocket is a huge hazard for a great distance. And although I don't like the idea of all personal spaceships being under a safety officer's thumb, it might be better than the alternative of terrorist groups gaining control of asteroid mines and holding the world hostage.
You're right about great story potential though, in any of these scenarios.
[Link] Colonisation of Venus
I was wondering what people thought of this paper by Geoffrey Landis on colonising Venus. In it he suggests that cloud-top Venus is one of the most benign environments in the Solar System. Temperature and gravity are similar to Earth, there's some radiation shielding and useful resources, and aerostats filled only with breathable air would float at that height. I'm no expert so can't speak to how accurate it is, but it's certainly very thought-provoking for such a short paper.
Implications of an infinite versus a finite universe
Hi gang,
for the last several months, I've intermittently been wondering about a curious fact I learned.
You see, I was under the impression that the universe (as opposed to just the currently observable universe or our Hubble volume) must be finite in its spatial dimensions. I figured that starting with a finite volume of space, expanding at a finite (even if accelerating) rate, could only yield a finitely sized volume of space (from any reference frame), a fraction of which constitutes our little Hubble bubble.
Turns out - honi soit qui mal y pense - my layman's understanding was wrong: "This [WMAP data] suggests that the Universe is infinite in extent (...)"
Now, most (non-computer-scientist) people I've bothered with that question answered along the lines of "well, it's really big, alright? (geez)".
However, going from any finite amount of matter/energy to an actual infinite amount (even when looking at just e.g. baryonic matter from the infinite amount of galaxies) still seems like a game-changer for all sorts of contemplations:
For example, any event with any non-zero probability of happening, no matter how large the negative exponent, would be assured of actually happening an infinite number of times somewhere in our very own universe (given infinitely many independent regions, each with the same non-zero chance).
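A standard way to make this step precise is the second Borel–Cantelli lemma, under the assumption that the events in distinct Hubble volumes behave like independent trials:

```latex
% Second Borel–Cantelli lemma, applied to A_n = "the event happens in the
% n-th Hubble volume". If the A_n are independent and their probabilities
% sum to infinity -- in particular if P(A_n) >= p > 0 for every n -- then
% with probability 1 infinitely many of the A_n occur:
\[
  \sum_{n=1}^{\infty} P(A_n) = \infty
  \quad\Longrightarrow\quad
  P\!\left(\limsup_{n\to\infty} A_n\right) = 1 .
\]
```

The independence assumption is doing real work here; correlated regions would need a different argument.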
Such as a planet turning into a giant petunia for a moment, before turning back.
The universe being infinite doesn't make that event any more likely in our observable universe, of course, but would the knowledge that given our laws of physics, there is an infinite amount of Hubble spaces governed by any sorts of "weird" occurrences - e.g. ruled by your evil twin brother - trouble you? Do we need to qualify "there probably is no Christian-type/FSM god" with "... in our Hubble volume. Elsewhere, yes."?
The difference, if you allow me a final rephrase, would be in going from a MWI-style "there may be another version - if the MWI interpretation is correct - that I cannot causally interact with" to a "in our own universe, just separated by space, there is an infinite amount of actual planets turning into actual petuniae (albeit all of which I also cannot interact with)".
Neil Armstrong died before we could defeat death
The sad news broke tonight: Neil Armstrong, the first human ever to walk on another world, died today. We lost him forever. He died before we could defeat death.
Once again the horror of death strikes. This time, in addition to wiping from us forever a hero of humanity, it wiped from us forever a memory that will never exist again. Never again will a human being be able to experience being the first to walk on another world. That beautiful experience is lost forever too, along with all the memories, dreams, desires and wishes that made Neil Armstrong.
But thanks to him, humanity made a giant leap. We'll fill the stars and conquer death. The spark of intelligence and sentience will not extinguish. That's the best we can do to honour him.
Source: http://www.reuters.com/article/2012/08/25/us-usa-neilarmstrong-idUSBRE87O0B020120825
Space-worthiness
A recent post about the Fermi paradox left me wondering about relative difficulties of getting into space, though I don't think it affects those specific arguments.
People establishing a presence in space is difficult but at least plausible-- I'm talking about biological people as we are now, and being able to live and reproduce indefinitely without returning to Earth.
It would be easier if we were less massive, or our planet were less massive, or if we were more radiation-resistant. It would be harder if these qualities were reversed, or if we needed a much denser atmosphere. There might come a point where it just isn't feasible for a species to get itself off its planet.
Is there any reasonable speculation about where we are likely to be on the ease-of-getting-into-space spectrum?
[Link] Space Stasis: What the strange persistence of rockets can teach us about innovation
http://www.slate.com/id/2283469/pagenum/all/
It's a long article, but the most relevant stuff is at the end, about how we're pretty much locked into the existing rocket technologies:
That is not, however, the most important way that rockets generate lock-in. In order to understand this, it's necessary to know a few things about (1) the physical environment of rocket launches, (2) the economics of the industry, and (3) the way it is regulated, or, to be more precise, the way it interacts with government.
1. The designer of a rocket payload, such as a communications satellite, has much more to worry about than merely limiting the payload to a given size, shape, and weight. The payload must be designed to survive the launch and the transition through various atmospheric regimes into outer space. As we all know from watching astronauts on movies and TV, there will be acceleration forces, relatively modest at the beginning, but building to much higher values as fuel is burned and the rocket becomes lighter relative to its thrust. At some moments, during stage separation, the acceleration may even reverse direction for a few moments as one set of engines stops supplying thrust and atmospheric resistance slows the vehicle down. Rockets produce intense vibration over a wide range of frequencies; at the upper end of that range we would identify this as noise (noise loud enough to cause physical destruction of delicate objects), at the lower range, violent shaking. Explosive bolts send violent shocks through the vehicle's structure. During the passage through the ionosphere, the air itself becomes conductive and can short out electrical gear. Enclosed spaces must be vented so that pressure doesn't build up in them as the vehicle passes into vacuum. Once the satellite has reached orbit, sharp and intense variations in temperature as it passes in and out of the earth's shadow can cause problems if not anticipated in the engineering design. Some of these hazards are common to all things that go into space, but many are unique to rockets.
2. If satellites and launches were cheap, a more easygoing attitude toward their design and construction might prevail. But in general they are, pound for pound, among the most expensive objects ever made even before millions of dollars are spent launching them into orbit. Relatively mass-produced satellites, such as those in the Iridium and Orbcomm constellations, cost on the order of $10,000/lb. The communications birds in geostationary orbit—the ones used for satellite television, e.g.—are two to five times as expensive, and ambitious scientific/defense payloads are often $100,000 per pound. Comsats can only be packed so close together in orbit, which means that there is a limited number of available slots—this makes their owners want to pack as much capability as possible into each bird, helping jack up the cost. Once they are up in orbit, comsats generate huge amounts of cash for their owners, which means that any delays in launching them are terribly expensive. Rockets of the old school aren't perfect—they have their share of failures—but they have enough of a track record that it's possible to buy launch insurance. The importance of this fact cannot be overestimated. Every space entrepreneur who dreams of constructing a better mousetrap sooner or later crunches into the sickening realization that, even if the new invention achieved perfect technical success, it would fail as a business proposition simply because the customers wouldn't be able to purchase launch insurance.
3. Rockets—at least, the kinds that are destined for orbit, which is what we are talking about here—don't go straight up into the air. They mostly go horizontally, since their purpose is to generate horizontal velocities so high that centrifugal force counteracts gravity. The initial launch is vertical because the thing needs to get off the pad and out of the dense lower atmosphere, but shortly afterwards it bends its trajectory sharply downrange and begins to accelerate nearly horizontally. Consequently, all rockets destined for orbit will pass over large swathes of the earth's surface during the 10 minutes or so that their engines are burning. This produces regulatory and legal complications that go deep into the realm of the absurd. Existing rockets, and the launch pads around which they have been designed, have been grandfathered in. Space entrepreneurs must either find a way to negotiate the legal minefield from scratch or else pay high fees to use the existing facilities. While some of these regulatory complications can be reduced by going outside of the developed world, this introduces a whole new set of complications since space technology is regulated as armaments, and this imposes strict limits on the ways in which American rocket scientists can collaborate with foreigners. Moreover, the rocket industry's status as a colossal government-funded program with seemingly eternal lifespan has led to a situation in which its myriad contractors and suppliers are distributed over the largest possible number of congressional districts. Anyone who has witnessed Congress in action can well imagine the consequences of giving it control over a difficult scientific and technological program.
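The "mostly horizontal" point can be made concrete with a back-of-the-envelope calculation (the constants and the 200 km altitude are standard textbook values, not figures from the essay): for a circular orbit, gravity must exactly supply the centripetal acceleration, which fixes the horizontal speed at v = sqrt(GM/r).

```python
import math

# Standard gravitational parameter of Earth (GM), in m^3/s^2,
# and mean Earth radius in meters (standard reference values).
MU_EARTH = 3.986004418e14
EARTH_RADIUS_M = 6.371e6

def circular_orbital_velocity(altitude_m: float) -> float:
    """Horizontal speed for a circular orbit at the given altitude.

    Setting gravitational acceleration GM/r^2 equal to the
    centripetal acceleration v^2/r gives v = sqrt(GM / r).
    """
    r = EARTH_RADIUS_M + altitude_m
    return math.sqrt(MU_EARTH / r)

# Illustrative low Earth orbit at 200 km altitude.
v_leo = circular_orbital_velocity(200e3)
print(f"~{v_leo:.0f} m/s of horizontal speed required")  # roughly 7.8 km/s
```

Compare that roughly 7.8 km/s of sideways speed with the few hundred meters per second of vertical climb needed to clear the dense atmosphere, and it is clear why the trajectory bends downrange so quickly.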
Dr. Jordin Kare, a physicist and space launch expert to whom I am indebted for some of the details mentioned above, visualizes the result as a triangular feedback loop joining big expensive launch systems; complex, expensive, long-life satellites; and few launch opportunities. To this could be added any number of cultural factors (the engineers populating the aerospace industry are heavily invested in the current way of doing things); the insurance and regulatory factors mentioned above; market inelasticity (cutting launch cost in half wouldn't make much of a difference); and even accounting practices (how do you amortize the nonrecoverable expenses of an innovative program over a sufficiently large number of future launches?).
To employ a commonly used metaphor, our current proficiency in rocket-building is the result of a hill-climbing approach; we started at one place on the technological landscape—which must be considered a random pick, given that it was chosen for dubious reasons by a maniac—and climbed the hill from there, looking for small steps that could be taken to increase the size and efficiency of the device. Sixty years and a couple of trillion dollars later, we have reached a place that is infinitesimally close to the top of that hill. Rockets are as close to perfect as they're ever going to get. For a few more billion dollars we might be able to achieve a microscopic improvement in efficiency or reliability, but to make any game-changing improvements is not merely expensive; it's a physical impossibility.
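The hill-climbing metaphor comes from optimization, and a toy implementation (purely illustrative; the landscape function is invented and models nothing about rocketry) shows its defining weakness: greedy local search converges to the nearest peak and stops there, even when a far higher peak exists elsewhere on the landscape.

```python
import math

def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy local search: keep taking small steps uphill until no
    neighboring step improves f. Converges to a *local* maximum."""
    for _ in range(max_iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            break
        x = best
    return x

# A two-peaked landscape: a modest hill near x=1, a much taller one near x=6.
f = lambda x: 2 * math.exp(-(x - 1) ** 2) + 5 * math.exp(-(x - 6) ** 2)

# Starting near x=0 (the "random pick"), the climber finds the small
# peak near x=1 and never discovers the higher peak near x=6.
local_peak = hill_climb(f, 0.0)
```

Crossing the valley to the higher peak requires accepting steps that temporarily make things worse, which is exactly the kind of expensive, short-term-irrational move the essay argues our launch industry cannot bring itself to make.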
There is no shortage of proposals for radically innovative space launch schemes that, if they worked, would get us across the valley to other hilltops considerably higher than the one we are standing on now—high enough to bring the cost and risk of space launch down to the point where fundamentally new things could begin happening in outer space. But we are not making any serious effort as a society to cross those valleys. It is not clear why.
A temptingly simple explanation is that we are decadent and tired. But none of the bright young up-and-coming economies seem to be interested in anything besides aping what the United States and the USSR did years ago. We may, in other words, need to look beyond strictly U.S.-centric explanations for such failures of imagination and initiative. It might simply be that there is something in the nature of modern global capitalism that is holding us back. Which might be a good thing, if it's an alternative to the crazy schemes of vicious dictators. Admittedly, there are many who feel a deep antipathy for expenditure of money and brainpower on space travel when, as they never tire of reminding us, there are so many problems to be solved on earth. So if space launch were the only area in which this phenomenon was observable, it would be of concern only to space enthusiasts.

But the endless BP oil spill of 2010 highlighted any number of ways in which the phenomena of path dependency and lock-in have trapped our energy industry on a hilltop from which we can gaze longingly across not-so-deep valleys to much higher and sunnier peaks in the not-so-great distance. Those are places we need to go if we are not to end up as the Ottoman Empire of the 21st century, and yet in spite of all of the lip service that is paid to innovation in such areas, it frequently seems as though we are trapped in a collective stasis.

As described above, regulation is only one culprit; at least equal blame may be placed on engineering and management culture, insurance, Congress, and even accounting practices. But those who do concern themselves with the formal regulation of "technology" might wish to worry less about possible negative effects of innovation and more about the damage being done to our environment and our prosperity by the mid-20th-century technologies that no sane and responsible person would propose today, but in which we remain trapped by mysterious and ineffable forces.