I hope such a document addresses the Mañana response: yeah, sure, figuring out how to control the AIs and make sure they're friendly is important, but there's no time pressure. It's not like powerful AI is going to be here anytime soon. In fact it's probably impossible to figure out how to control the AI, since we still have no idea how it will work.
I expect this kind of response is common among AI researchers, who do believe in the possibility of AI, but, having an up-close view of the sorry state of the field, have trouble getting excited about prophecies of doom.
It's as though no one here has ever heard of the bystander effect. The deadline is January 15th. Setting up a wiki page and saying "Anyone's free to edit" is equivalent to killing this thing.
Also, this is a philosophy, psychology, and technology journal, which means that, despite the list of references for Singularity research, you will also need to link this with the philosophical and/or public policy issues that the journal wants you to address (take a look at the two guest editors).
Another worry of mine is that in all the back issues of this journal I looked over, the papers were almost always single-authored (and barring that, had two authors). I suspect that having many authors might kill the chances for this paper.
Devote the second half to discussing the question of FAI, with references to e.g. Joshua Greene's thesis and other relevant sources for establishing this argument.
(Now that this is struck out it might not matter, but) I wonder if, in addition to possibly overrating Greene's significance as an exponent of moral irrealism, we don't overrate the significance of moral realism as an obstacle to understanding FAI ("shiny pitchforks"). I would expect the academic target audience of this paper, especially the more technical subset, to be metaethically...
The arguments that I found most compelling for a hard takeoff are found in LOGI part 3 and the Wiki interview with Eliezer from 2003 or so, for anyone who needs help on references or argument ideas from outside of the sequences.
This is a great idea Kaj, thanks for taking the initiative.
As noted by others, one issue with the AI Risks chapter is that it attempts to cover so much ground. I would suggest starting with just hard take-off, or local take-off, and presenting a focused case for that, without also getting into the FAI questions. This could also cut back on some duplication of effort, as SIAI folk were already planning to submit a paper (refined from some work done for a recent conference) for that issue on "machine ethics for superintelligence", which will be dis...
I would be surprised if Eliezer would cite Joshua Greene's moral anti-realist view with approval.
Technically, you would need to include a caveat in all of those like, "unless to do so would advance paperclip production" but I assume that's what you meant.
The word "ought" means a particular thing, refers to a particular function, and once you realize that, ought-statements have truth-values. There's just nothing which says that other minds necessarily care about them. It is also possible that different humans care about different things, but there's enough overlap that it makes sense (I believe, Greene does not) to use words like "ought" in daily communication.
What would the universe look like if there were such a thing as an "objective standard"? If you can't tell me what the universe looks like in this case, then the statement "there is an objective morality" is not false - it's not that there's a closet which is supposed to contain an objective morality, and we looked inside it, and the closet is empty - but rather the statement fails to have a truth-condition. Sort of like opening a suitcase that actually does contain a million dollars, and you say "But I want an objective million dollars", and you can't say what the universe would look like if the million dollars were objective or not.
I should write a post at some point about how we should learn to be content with happiness ...
This is a problem for both those who'd want to critique the concept, and for those who are more open-minded and would want to learn more about it.
Nit: This implies that people who disagree are closed minded.
I think this is very much needed. When reviewing singularity models for a paper I wrote, I could not find many readily citable references for certain areas that I know exist as "folklore". I don't like mentioning such ideas, because it makes it look (to outsiders) as if I had come up with them myself, while insiders would likely think I was trying to steal credit.
There are whole fields, like Friendly AI theory, that need a big review, both to actually gather what has been understood and to make it accessible to outsiders so that the community thinking ...
A problem with this proposal is whether the paper can be seen as authoritative. A critic might worry that if they study and respond to it, they will be told it does not represent the best pro-Singularity arguments. So the paper would need to be endorsed widely enough to gain the status that makes it worth criticizing.
This is a problem for both those who'd want to critique the concept, and for those who are more open-minded and would want to learn more about it.
Anyone who is sufficiently technically minded undoubtedly finds it frustrating to read books that offer broad-brush counterfactuals about decision making and explanation without delving into the details of the underlying processes. I am thinking of books like Freakonomics, Paradox of Choice, Outliers, Nudge, etc.
These books are very accessible but lack the in-depth analysis that is expected to be thoroughly cri...
Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others, too, make the same complaint: the arguments involved are spread across such a huge mountain of posts that it's impossible for most outsiders to seriously evaluate them.
Did everyone forget about "Artificial Intelligence as a Positive and Negative Factor in Global Risk"?
As a Foom skeptic, what would convince me to take the concept seriously is an argument that intelligence/power is a quantity we can reason about in the same way we reason about the number of neutrons in a nuclear reactor or bomb. Power seems like a slippery, ephemeral concept; optimisation power appears able to evaporate at the drop of a hat (if someone comes to know an opponent's source code and can emulate them entirely).
Any thoughts on what the impact of the IBM Watson DeepQA project (http://www.research.ibm.com/deepqa/) would be on a Foom timescale, if it is successful (in the sense of approximate parity with human competitors)? My impression was that classical AI failed primarily because of brittle closed-world approximations, and this project looks like it (if successful) would largely overcome those obstacles. For instance, it seems like one could integrate a DeepQA engine with planning and optimization engines in a fairly straightforward way. To put it another way, in the form of an idea futures propo...
Hmmm... Maybe you could base some of it off of Eliezer's Future Salon talk (http://singinst.org/upload/futuresalon.pdf)? That's only about 11K words (sans references), while his book chapters are ~40K words and his OB/LW posts are hundreds of thousands of words.
Kaj, great idea.
in a standard style and submitted them for publication
This will be one of the greater challenges; we know the argument and how to write well, but each academic discipline has rigid rules for style in publications. This particular journal, with its wide scope, may be a bit more tolerant, but in general learning the style is important if one wants to influence academia.
I imagine that one will have to go beyond the crowdsourcing approach in achieving this.
If you are coordinating, let me know if and how I can help.
More recent criticism comes from Mike Treder, managing director of the Institute for Ethics and Emerging Technologies, in his article "Fearing the Wrong Monsters": http://ieet.org/index.php/IEET/more/treder20091031/
Aubrey argues for "the singularity" here:
"The singularity and the Methuselarity: similarities and differences" - by Aubrey de Grey
http://www.sens.org/files/sens/FHTI07-deGrey.pdf
He uses the argument from personal incredulity, though, which is one of the weakest forms of argument known.
He says:
"But wait – who’s to say that progress will remain “only” exponential? Might not progress exceed this rate, following an inverse polynomial curve (like gravity) or even an inverse exponential curve? I, for one, don’t see why it shouldn’t. If we conside...
Very constructive proposal, Kaj. But...
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done?
If Eliezer does not find it a worthwhile investment of his time, why should we?
==Re comments on "Singularity Paper"==
Re comments, I had been given to understand that the point of the page was to summarize and cite Eliezer's arguments for the audience of ''Minds and Machines''. Do you think this was just a bad idea from the start? (That's a serious question; it might very well be.) Or do you think the endeavor is a good one, but the writing on the page is just lame? --User:Zack M. Davis 20:19, 21 November 2009 (UTC)
(This is about my opinion of the writing on the wiki page.)
No, just use his writing as much as possible - direct...
Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others, too, make the same complaint: the arguments involved are spread across such a huge mountain of posts that it's impossible for most outsiders to seriously evaluate them. This is a problem both for those who'd want to critique the concept and for those who tentatively agree and would want to learn more about it.
Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done? Minds and Machines will be having a special issue on transhumanism, cognitive enhancement and AI, with a deadline for submission in January; that seems like a good opportunity for the paper. Their call for papers asks for submissions of around 4,000 to 12,000 words.
The paper should probably:

Devote the second half to discussing the question of FAI, with references to e.g. Joshua Greene's thesis and other relevant sources for establishing this argument. Carl Shulman says SIAI is already working on a separate paper on this, so it'd be better for us to concentrate merely on the FOOM aspect.

I have created a wiki page for the draft version of the paper. Anyone's free to edit.