Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others make the same complaint: the arguments involved are scattered across such a huge mountain of posts that it's impossible for most outsiders to seriously evaluate them. This is a problem both for those who'd want to critique the concept and for those who tentatively agree and want to learn more about it.
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments as best we can and submitting them somewhere once we're done? Minds and Machines will be having a special issue on transhumanism, cognitive enhancement and AI, with a deadline for submission in January; that seems like a good opportunity for the paper. Their call for papers asks for submissions of around 4,000 to 12,000 words.
The paper should probably
- Briefly mention some of the previous work about AI being near enough to be worth consideration (Kurzweil, maybe Bostrom's paper on the subject, etc.), but not dwell on it; this is a paper on the consequences of AI.
- Devote maybe a little less than half of its actual content to the issue of FOOM, providing arguments and references for building the case for a hard takeoff.
- Devote the second half to discussing the question of FAI, with references to e.g. Joshua Greene's thesis and other relevant sources for establishing this argument. (Carl Shulman says SIAI is already working on a separate paper on this, so it'd be better for us to concentrate merely on the FOOM aspect.)
- Build on the content of Eliezer's various posts, taking their primary arguments and making them stronger by reference to various peer-reviewed work.
- Include as authors everyone who made major contributions to it and wants to be mentioned; certainly list Eliezer as the lead author (again, assuming he doesn't object), since this is his work we're seeking to convert into a more accessible form.
I have created a wiki page for the draft version of the paper. Anyone's free to edit.
The word "ought" means a particular thing, refers to a particular function, and once you realize that, ought-statements have truth-values. There's just nothing which says that other minds necessarily care about them. It is also possible that different humans care about different things, but there's enough overlap that it makes sense (I believe, Greene does not) to use words like "ought" in daily communication.
What would the universe look like if there were such a thing as an "objective standard"? If you can't tell me what the universe looks like in this case, then the statement "there is an objective morality" is not false - it's not that there's a closet which is supposed to contain an objective morality, and we looked inside it, and the closet is empty - but rather the statement fails to have a truth-condition. Sort of like opening a suitcase that actually does contain a million dollars, and you say "But I want an objective million dollars", and you can't say what the universe would look like if the million dollars were objective or not.
I should write a post at some point about how we should learn to be content with happiness instead of "true happiness", truth instead of "ultimate truth", purpose instead of "transcendental purpose", and morality instead of "objective morality". It's not that we can't obtain these other things and so must be satisfied with what we have, but rather that tacking on an impressive adjective results in an impressive phrase that fails to mean anything. It is not that there is no ultimate truth, but rather, that there is no closet which might contain or fail to contain "ultimate truth", it's just the word "truth" with the sonorous-sounding adjective "ultimate" tacked on in front. Truth is all there is or coherently could be.
Just a minor thought: there is a great deal of overlap on human "ought"s, but not so much on formal philosophical ...