You know, his scenario of humanity being erased as a byproduct of an optimization process indifferent to human values amounts to the unfriendly AI scenarios we discuss, just without the requirement that the optimization process be sentient.
I wonder if the following is a valid generalization of the specific problem that motivates the MIRI folks:
Our ability to scale up and speed up achievement of goals has outpaced or will soon outpace our ability to find goals that we won't regret.
I was browsing my RSS feed, as one does, and came across a New York Times article, "A Village With the Numbers, Not the Image, of the Poorest Place", about the Satmar Hasidic Jews of Kiryas Joel (NY).
What makes them interesting is their extraordinarily high birthrate and population growth, and their poverty; the two are connected. From the article:
From Wikipedia:
In "If Uploads Come First: The Crack of a Future Dawn", Robin Hanson argues that uploaded/emulated minds will establish a new Malthusian/Darwinian equilibrium, one in comparison to which our own economy will look like a delusive "dreamtime" of impossibly unfit and libertine behavior. The demographic transition will not last forever. But despite our own distaste for countless lives lived at near-subsistence rather than at our extreme per-capita wealth (see the Repugnant Conclusion), those many lives will be happy ones (even amid disaster).
So. Are the inhabitants of Kiryas Joel unhappy?