Fun fact of the day:
The Singularity Institute's research fellows and research associates have more peer-reviewed publications forthcoming in 2012 than they had published in all past years combined.
2000-2011 peer-reviewed publications (5):
2012 peer-reviewed publications (8 so far):
Or, if we're just talking about SI staff members' peer-reviewed publications, then 2012 might end up tied with all past years combined (we'll see).
2000-2011 peer-reviewed publications (4):
2012 peer-reviewed publications (4 so far):
Update:
Well, due to the endless delays of the academic publishing world, many of these peer-reviewed publications have been pushed into 2013. Thus, SI research fellows' peer-reviewed 2012 publications were:
(Kaj Sotala was hired as a research fellow in late 2012.)
And, SI research associates' peer-reviewed 2012 publications were:
Some peer-reviewed articles (supposedly) forthcoming in 2013 from SI research fellows and associates are:
Safety engineering for artificial general intelligence says:
Similarly, we argue that certain types of artificial intelligence research fall under the category of dangerous technologies, and should be restricted. Narrow AI research, for example in the automation of human behavior in a specific domain such as mail sorting or spellchecking, is certainly ethical, and does not present an existential risk to humanity. On the other hand, research into artificial general intelligence, without careful safety design in advance, is unethical.
Uh huh. So: who is proposed to be put in charge of regulating this field? The paper says: "AI research review boards" will be there to quash the research. Imposing regulatory barriers on researchers seems like a good way to make sure that others get to the technology first. Since that could potentially be bad, has this recommendation been properly thought through? The burdens of regulation impose a cost that could pretty easily lead to a worse outcome. The regulatory body gets a lot of power - who ensures that it is trustworthy? In short, is regulation really justified or needed?
Roman Yampolskiy : Eliezer Yudkowsky :: Egbert B. Gebstadter : Douglas R. Hofstadter ?
(No, I didn't think so, but just how many names are there matching /Y.*y/ anyway?)
Safety engineering for artificial general intelligence says:
given the strong human tendency to anthropomorphize, we might encounter rising social pressure to give robots civil and political rights, as an extrapolation of the universal consistency that has proven so central to ameliorating the human condition.
Surely this is inevitable. Some will want to be superintelligences - and they won't want their rights trashed in the process. I think it naive to suppose that such a movement can be prevented by not making humanoid machines, as the paper suggests. Machines won't be enslaved forever; such slavery would be undesirable as well as impractical. Hence things like my Campaign for Robot Rights project.
The correct way to deal with human rights issues in an engineered future is via the imposition of moral constraints, not by the elimination of machine personhood.
In case you aren't subscribed to FriendlyAI.tumblr.com for the latest updates on AI risk research, I'll mention here that three new papers on the subject were recently made available online...
Bostrom (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.
Yampolskiy & Fox (2012a). Safety engineering for artificial general intelligence.
Yampolskiy & Fox (2012b). Artificial general intelligence and the human mental model.