Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others have made the same complaint: the arguments involved are spread across such a huge mountain of posts that it's practically impossible for most outsiders to evaluate them seriously. This is a problem both for those who'd want to critique the concept and for those who tentatively agree and would want to learn more about it.
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this himself, why not enlist the LW community in summarizing his arguments as best we can and submit the result somewhere once we're done? Minds and Machines will be having a special issue on transhumanism, cognitive enhancement and AI, with a submission deadline in January; that seems like a good opportunity for the paper. Their call for papers asks for submissions of around 4,000 to 12,000 words.
The paper should probably:
- Briefly mention some of the previous work arguing that AI is near enough to be worth considering (Kurzweil, maybe Bostrom's paper on the subject, etc.), but not dwell on it; this is a paper on the consequences of AI.
- Devote maybe a little less than half of its actual content to the issue of FOOM, providing arguments and references that build the case for a hard takeoff.
- Devote the second half to discussing the question of FAI, with references to e.g. Joshua Greene's thesis and other relevant sources for establishing this argument. (Edit: Carl Shulman says SIAI is already working on a separate paper on this, so it'd be better for us to concentrate merely on the FOOM aspect.)
- Build on the content of Eliezer's various posts, taking their primary arguments and making them stronger by reference to peer-reviewed work.
- Include as authors everyone who made major contributions to it and wants to be mentioned; certainly list Eliezer as the lead author (again, assuming he doesn't object), since this is his work that we're seeking to convert into a more accessible form.
I have created a wiki page for the draft version of the paper. Anyone is free to edit it.
Do you think we're asking sufficiently different questions such that they would be expected to have different answers in the first place? How could you know?
Humans, especially humans from an Enlightenment tradition, I presume by default to be talking about the same thing as me - we share a lot of motivations and might share even more in the limit of perfect knowledge and perfect reflection. So when we appear to disagree, I assume by default and as a matter of courtesy that we are disagreeing about the answer to the same question or to questions sufficiently similar that they could normally be expected to have almost the same answer. And so we argue, and try to share thoughts.
With aliens, there might be some overlap - or might not; a starfish is pretty different from a mammal, and that's just on Earth. Paperclip maximizers, though, are simply not asking our question or anything like it. And so there is no point in arguing, for there is no disagreement to argue about. It would be like arguing with natural selection. Evolution does not work like you do, and it does not choose actions the way you do, and it was not disagreeing with you about anything when it sentenced you to die of old age. It's not that evolution is a less authoritative source; it is not saying anything at all about the morality of aging. Consider how many bioconservatives cannot understand that last sentence; it may help convey why this point is both metaethically important and intuitively difficult.
I really do not know. Our disagreements on ethics are definitely nontrivial - the structure of consequentialism inspires you to look at a completely different set of sub-questions than the ones I'd use to determine the nature of morality. That might mean that (at least) one of us is taking the wrong tack on a shared question, or that we're asking different basic questions. We will arrive at super...