I hardly think many here would object to love, joy, and laughter being not outlived by the stars themselves: as you say, the critics are not dishonest. As steven points out, any disagreement would seem to stem from differing assessments of the probabilities of stagnation risk and existential risk. If the future is going to be dominated by a hard takeoff Singularity, then it is incredibly important to make sure to get that first AGI exactly, perfectly right at the expense of all else. If the future is to be one of "traditional" space colonization and catastrophic risk from AI, MNT, &c. is negligible, then it's incredibly important to develop technologies as quickly as possible. While the future does depend on what "we" decide to do now (bearing in mind that there is no unitary we), this is largely an empirical issue: how does the tech tree actually look? What does it take to colonize the stars? Is hard takeoff possible, and what would that take? &c. I think that these are the sorts of questions we need to be asking and trying to answer, rather than pledging ourselves to the "pro-safety" or "pro-technology" side. Since we all want more-or-less the same thing, it's in all of our best interests to try to reach the most accurate conclusions possible.
Technological advance is strongly dependent on "mass production for the planetary civilization", because otherwise most people are busy doing agriculture and don't have time to become PhDs.
That's only because the power of technology wasn't recognized until industry was already well under development. Roughly speaking, you can always tax everyone 10% and have 10% of the population do science.
Resources don't become scarce overnight. It happens a little more slowly: the price of the scarce resource rises, people find ways to use it more efficiently, and they find or invent substitutes.
Nor would unprecedented levels of resource scarcity be likely to lead to a war between major powers. Our political systems may be imperfect, but the logic of mutually assured destruction would be clear and compelling even to the general public.
So basically you're saying that when Leo Szilard wanted to hide the true neutron cross section of purified graphite and Enrico Fermi wanted to publish it, you'd have published it.
Would you have hidden it?
I hope so. It was the right decision in hindsight, since the Nazi nuclear weapons program shut down when the Allies, at the cost of some civilian lives, destroyed their source of deuterium. If they'd known they could've used purified graphite... well, they probably still wouldn't have gotten nuclear weapons in this Everett branch, but they might have somewhere else.
Before 2001 I would probably have been on Fermi's side, but that's when I still believed deep down that no true harm could come to someone who was only faithfully trying to do science. (I.e. supervised universe thinking.)
This is not how truly fundamental breakthroughs are made.
Hmm---now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough---that there are some things that can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things that are big collective endeavors (think Human Genome Project). I would guess furthermore that in many ways AGI is more like the latter than the former, see below.
Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?
Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system. In either case, it takes more than an unorthodox idea.
Huh. Looking back, it actually seems that I already wrote up the complete reply to this post and it is "Raised in Technophilia" (Sep 17 '08).
I don't agree with your conclusion or the connection to AI research. But the segment about civilizations collapsing for unknown reasons is brilliant and well written, and really stands on its own.
I wish people were more scared of the dangers that can't yet be measured, like the chance that a large gamma-ray burst could hit Earth for a short time and then be aimed somewhere else. How do we know major extinctions in the past weren't related to unknown behaviors of spacetime from outside where we measure? Or maybe the "constants" in the wave equations of physics sometimes vary. Is it really a good deal to let individual businesses hold the pieces of this knowledge to themselves instead of putting all our knowledge together to figure out what's possible?
Actually, there is a science fiction story very similar to your opening section. I'm putting author and title in rot13 because the story got much of its effect because I read it under normal science fiction protocols. Surely humanity would eventually get into space-- but it doesn't and dies out. Well, then aliens will manage-- but there aren't any.
On the other hand, that's just one story in a large field, and I think it's only been reprinted once.
Zhecul'f Unyy ol Cbhy Naqrefba.
I agree that groups/societies get stuck for fairly long periods of time and that independence and competition between groups/societies is often beneficial. But I think stagnation is unlikely unless we end up with a totalitarian world government. See Bryan Caplan's essay.
But the one reason above all others is that the window of opportunity we are currently given may be the last step in the Great Filter, that we cannot know when it will close, or, if it does, whether it will ever open again.
Rather than running harder toward this window of yours, we should take special care to check what floor it's on.
This seems to apply more to the space program than to any field where progress is impeded by safety concerns, or where such impediments are advocated here. Yes, we should have Mars colonies by now. That's not a shocking revelation; it's a near-universal belief among nerdy types, including LWers. But, since we don't, we need to minimize existential risks to life on Earth until we do, and we will always need to minimize existential risks capable of crossing interplanetary distances (e.g. uFAI, maybe nanotech or memetic plagues, although we don't know enough to even know if those are a danger).
June 14, 3009
Twilight still hung in the sky, yet the Pole Star was visible above the trees, for it was a perfect cloudless evening.
"We can stop here for a few minutes," remarked the librarian as he fumbled to light the lamp. "There's a stream just ahead."
The driver grunted assent as he pulled the cart to a halt and unhitched the thirsty horse to drink its fill.
It was said that in the Age of Legends, there had been horseless carriages that drank the black blood of the earth, long since drained dry. But then, it was said that in the Age of Legends, men had flown to the moon on a pillar of fire. Who took such stories seriously?
The librarian did. In his visit to the University archive, he had studied the crumbling pages of a rare book in Old English, itself a copy a mere few centuries old, of a text from the Age of Legends itself; a book that laid out a generation's hopes and dreams, of building cities in the sky, of setting sail for the very stars. Something had gone wrong - but what? That civilization's capabilities had been so far beyond those of his own people. Its destruction should have taken a global apocalypse of the kind that would leave unmistakable record both historical and archaeological, and yet there was no trace. Nobody had anything better than mutually contradictory guesses as to what had happened. The librarian intended to discover the truth.
Forty years later he died in bed, his question still unanswered.
The earth continued to circle its parent star, whose increasing energy output could no longer be compensated by falling atmospheric carbon dioxide concentration. Glaciers advanced, then retreated for the last time; as life struggled to adapt to changing conditions, the ecosystems of yesteryear were replaced by others new and strange - and impoverished. All the while, the environment drifted further from that which had given rise to Homo sapiens, and in due course one more species joined the billions-long roll of the dead. For what was by some standards a little while, eyes still looked up at the lifeless stars, but there were no more minds to wonder what might have been.
Were I to submit the above to a science fiction magazine, it would be instantly rejected. It lacks a satisfying climax in which the hero strikes down the villain, for it has neither hero nor villain. Yet I ask your indulgence for a short time, for it may yet possess one important virtue: realism.
The reason we relate to stories with villains is easy enough to understand. In our ancestral environment, if a leopard or an enemy tribesman escaped your attention, failure to pass on your genes was likely. Violence may or may not have been the primary cause of death, depending on time and place; but it was the primary cause that you could do something about. You might die of malaria, you might die of old age, but there was little and nothing respectively that you could do to avoid these fates, so there was no selection pressure to be sensitive to them. There was certainly no selection pressure to be good at explaining the distant past or predicting the distant future.
Looked at that way, it's a miracle we possess as much general intelligence as we do; and certainly our minds have achieved a great deal, and promise more. Yet the question lurks in the background: are there phenomena, not in distant galaxies or inside the atomic nucleus beyond the reach of our eyes but in our world at the same scale we inhabit, nonetheless invisible to us because our minds are not so constructed as to perceive them?
In search of an answer to that question, we may ask another one: why is this document written in English instead of Chinese?
As late as the 15th century, this was by no means predictable. The great civilizations of Europe and China were roughly on par, the former having almost caught up over the previous few centuries; yet Chinese oceangoing ships were arguably still better than anything Europe could build. Fleets under Admiral Zheng He reached as far as East Africa. Perhaps China might have reached the Americas before Europeans did, and the shape of the world might have been very different.
The centuries had brought a share of disasters to both continents. War had ravaged the lands, laying waste whole cities. Plague had struck, killing millions; men, women and children were buried in mass graves. Shifts of global air currents brought the specter of famine. Civilization had endured; more, it had flourished.
The force that put an end to the Chinese arc of progress was deadlier by far than all of these together, yet seemingly as intangible as metaphysics. By the 16th century, the fleets had vanished, the proximate cause political; to this day there is no consensus on the underlying factors. It seems what saved Europe was its political disunity. Why was that lost in China? Some writers have blamed flat terrain, which others have disputed; some have blamed rice agriculture and its need for irrigation systems. Likely there were factors nobody has yet understood; perhaps we never will.
An entire future that might have been was snuffed out by some terrible force compared to which war, plague and famine were mere pinpricks - and yet even with the benefit of hindsight, we still don't truly understand what it was.
Nor is this an isolated case. From the collapse of classical Mediterranean civilization to the divergent fates of the US and Argentina, whose prospects looked so similar as recently as the early 20th century, we find that more terrible than any war or ordinary disaster are forces which operate unseen in plain sight and are only dimly understood even after the fact.
The saving grace has always been the outside: when one nation, one civilization, faltered, another picked up the torch and carried on; but with the march of globalization, there may soon be no more outside.
Unless of course we create a new one. Within this century, if we continue to make progress as quickly as possible, we may develop the technology to break our confinement, to colonize first the solar system and then the galaxy. And then our kind may truly be immortal, beyond the longest reach of the Grim Reaper, and love and joy and laughter be not outlived by the stars themselves.
If we continue to make progress as quickly as possible.
Yet at every turn, when risks are discussed, ten voices cry loudly about the violence that may be done with new technology for every one voice that quietly observes that we cannot afford to be without it, and we may not have as much time as we think we have. It is not that anyone is being intentionally selfish or dishonest. The critics believe what they are saying. It is that to the human mind, the dangers of progress are vivid even when imaginary; the dangers of its lack are scarcely perceptible even when real.
There are many reasons why we need more advanced technology, and we need it as soon as possible. Every year, more than fifty million people die for its lack, most in appalling suffering. But the one reason above all others is that the window of opportunity we are currently given may be the last step in the Great Filter, that we cannot know when it will close, or, if it does, whether it will ever open again.
Less Wrong is about bias, and the errors to which it leads us. I present, then, what may be the most lethal of all our biases: that we react instantly to the lesser death that comes in blood and fire, but the greater death that comes in the dust of time is to our minds invisible.
And I ask that you remember it, the next time you contemplate the alleged dangers of technology.