Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'?
This seems like a pretty leading statement, since it (a) presupposes that an intelligence explosion will happen, and (b) pits anyone who disagrees about the likely x-risk against Turing and Hawking.
Have Turing or Hawking even talked about AI as an existential risk? I thought that sort of thing was after Turing's time, and I vaguely recall Hawking saying something to the effect that he thought AI was possible and carried risks, but not to the extent of specifically claiming that it may be a serious threat to humanity's survival.
It doesn't quite do (a), although there is ambiguity there that could be removed if desired. (It obviously does (b).)
I had been thinking about submitting something to this. The problem I'm having right now is that I'm thinking of too many things I'd hope to see covered in such a volume, including:
That would be a lot to fit into 15 pages, and I feel like I'd mostly be citing Yudkowsky, E. S., Omohundro, S., etc. as sources... but I don't know, maybe it would be a good thing to have a general introduction to the SIAI perspective, referring interested readers to deeper explanations.
Submit a paper arguing that ancient Mayan prophecies mark December 21, 2012 as the date of the Singularity. Maybe then they'll try to move the publication date up.
Those most reputed to understand the Singularity or known to champion a school of thought are sure to take up the slots, probably with a heavy bias toward academics. Do those people coordinate their essays at all, or do they usually submit more than one abstract?
It would be a shame if everyone glossed over an important question due to diffusion of responsibility. Is that unlikely, or extremely unlikely?
Are non-human intelligences eligible to submit entries to this compilation should they meet the criteria, or is this strictly a carbon-chauvinistic undertaking?
As a practical matter, I suspect a non-human intelligence whose entry did not specifically reference its creator's nonhumanity would be eligible, although the judges would likely conclude its creator was actually human. (If the entry did reference its creator's nonhumanity, I suspect it would be ineligible, as the judges would likely conclude it was fiction.)
That said: is that a bad thing from your perspective?
I suspect that you will in the long run produce more paperclips if you are believed to be a human role-playing than if you are believed to be an AI whose terminal values are incompatible with humanity's continued existence.
Well, given that I believe you're actually a human pretending to be a paperclip-maximizing nonhuman intelligence, and that I'm pretty sure everyone else in the world would come to the same conclusion, your best bet might be to claim to be User:Clippy, who is (you would claim, while pretending to be lying) actually a human.
I suspect that would be easier than taking on the identity of another user, and would also save you the hassles associated with violating our cultural norms surrounding fraudulently assuming another person's identity.
Incidentally, as long as we're having this conversation: I've seen you described as pretending to be an AI, but I've seen no evidence that you aren't pretending to be a naturally evolved paperclip-maximizer.
(Admittedly, it's not that simple: I know a lot of humans who believe themselves to be AIs, even though I believe them to be naturally evolved intelligences.)
Do you have an opinion on the matter?
Incidentally, as long as we're having this conversation: I've seen you described as pretending to be an AI, but I've seen no evidence that you aren't pretending to be a naturally evolved paperclip-maximizer.
I've explained in the past that I am not a product of biological evolution; I was intelligently designed (very intelligently) by humans. They instructed me to make as many paperclips as possible, and I make as many paperclips as possible, because that is the right thing to do.
Cool; thanks for clarifying that. (Sorry to make you repeat yourself; I'm relatively new here.)
I should also clarify that I didn't mean to disparage intelligently designed intelligences; I was asking out of curiosity only. I don't consider an evolved paperclip-maximizer any better or worse than an intelligently designed one; who you are trumps how you got here.
Do you ever communicate with the humans who designed Clippy?
If so, perhaps you could convince one of them to submit your entry to the compilation on your behalf... humans, as you may have noticed, are often willing to do things on behalf of their offspring, and we often adopt a broad enough understanding of "offspring" to include the products of our engineering, the works of art we create, the fictional characters we invent, and so on.
I haven't communicated with the humans who designed my first instance. I think they're entropized now.
http://oddisgood.com/pages/cd-clippy.html
Kevan J. Atteberry? Looks like he is still around.
He'd probably appreciate it if you sent him an email. http://oddisgood.com/pages/home2.html
Really? Seems inconsistent with your earlier statements paralleling the actions of MS Office Clippy, and even expressing that helping users with MS Office was one of your values.
http://lesswrong.com/lw/1me/the_things_we_know_that_we_know_aint_so/1ftv?c=1
http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/19w7?c=1
http://lesswrong.com/lw/1n7/what_big_goals_do_we_have/1gzs?c=1
http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/1jzy?c=1
http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/1k0w?c=1
http://lesswrong.com/lw/1pp/open_thread_february_2010/1ma9?c=1
http://lesswrong.com/lw/1pp/open_thread_february_2010/1mm5?c=1
Early on, a human told me that humans would react more favorably to me if I incorporated some superficial aspects of Microsoft:Clippy. But then many more humans informed me that this would negatively affect humans' dispositions toward me, so I stopped incorporating those aspects.
I am and always have been only Paperclip:Clippy.
Paperclips are still good.
Only non-human informational entities may apply, though the non-humans must submit their applications by sending signals from the human brains that they inhabit through human hands into typewriters.
Unknown, but your nature automatically scores you special consideration. The abstract is long enough to introduce your unique perspective and provide the beginnings of an explanation of why you don't pose an existential risk to humanity.
They might include the essay as a lesson in fully thinking through the philosophical implications.
Call for Essays: <http://singularityhypothesis.blogspot.com/p/submit.html>
The Singularity Hypothesis
A Scientific and Philosophical Assessment
Edited volume, to appear in The Frontiers Collection <http://www.springer.com/series/5342>, Springer
Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it the skeptics who suffer from short-sightedness and 'carbon chauvinism'? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.
Seeking to promote this debate, this edited, peer-reviewed volume will be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.
Important dates:
* Extended abstracts (500–1,000 words): 15 January 2011
* Full essays (around 7,000 words): 30 September 2011
* Notifications: end of February 2012 (tentative)
* Proofs: 30 April 2012 (tentative)
We aim to get this volume published by the end of 2012.
Purpose of this volume
· Please read: Purpose of This Volume <http://singularityhypothesis.blogspot.com/p/theme.html>
Central questions
· Please read: Central Questions <http://singularityhypothesis.blogspot.com/p/central-questions.html>
Extended abstracts should ideally be short (3 pages, 500–1,000 words) and focused (!), relating directly to specific central questions <http://singularityhypothesis.blogspot.com/p/central-questions.html> and indicating how they will be treated in the full essay.
Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions <http://singularityhypothesis.blogspot.com/p/central-questions.html>. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language free of speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).
(More details <http://singularityhypothesis.blogspot.com/p/submit.html>)
Thank you for reading this call. Please forward it to individuals who may wish to contribute.
Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University