I was relatively happy with that one. It's awfully hard to represent the Singularity visually.
Uncropped, in color, for your pleasure.
I find this chatty, informal style a lot easier to read than your formal style. The sentences are shorter and easier to follow, and it flows a lot more naturally. For example, compare your introduction to rationality in From Skepticism to Technical Rationality to that in The Cognitive Science of Rationality. Though the latter defines its terms more precisely, the former is much easier to read.
Probably relevant: this comment by Yvain.
Re: Preface
Luke discusses his conversion from Christianity to atheism in the preface. This journey plays a big role in how he came to be interested in the Singularity, but this conversion story might mistakenly lead readers to think that the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God. If you want to get people to think rationally about the future of machine intelligence you might not want to intertwine your discussion with religion.
Especially because "it's like a religion" is the most common criticism of the Singularity idea.
Which is exactly why I worry about a piece that might be read as "I used to have traditional religion. Now I have Singularitarianism!"
Re: Preface and Contents
I am somewhat irked by the phrase "the human era will be over". It is not a given that current-type humans cease to exist after any Singularity. Also, a positive Singularity could be characterised as the beginning of the humane era, in which case it is somewhat inappropriate to refer to the era that follows as non-human. In contrast, negative Singularities typically result in universes devoid of anything human-related.
I just wanted to say the French translation is of excellent quality. Whoever is writing it, thanks for that. It helps me learn the vocabulary so I can have better discussions with French-speaking people.
Re: Playing Taboo with "Intelligence"
Jaan Tallinn said to me:
i use "given initial resources" instead of "resources used" -- since resource acquisition is an instrumental goal, the future availability of resources is a function of optimisation power...
Fair enough. I won't add this qualification to the post because it breaks the flow and I think I can clarify when this issue comes up, but I'll note the qualification here, in the comments.
in http://intelligenceexplosion.com/2012/engineering-utopia/ you say "There was once a time when the average human couldn’t expect to live much past age thirty."
this is false, right?
(edit note: life expectancy now roughly matches "what the average human can expect to live to", but if you have a double hump of death at infancy/childhood and then at old age, you can have a life expectancy at birth of 30 while the life expectancy of 15-year-olds is 60. In that case the average human can expect to die at around 1 or around 60, which is very different from "can't ...
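The double-hump point is easy to check with a toy calculation. The numbers below are invented purely for illustration (half the population dying at age 1, half at age 60), not real demographic data:

```python
# Made-up bimodal mortality: 50 people die in infancy at age 1,
# 50 people die in old age at age 60.
ages_at_death = [1] * 50 + [60] * 50

# Life expectancy at birth: the plain average age at death.
e_at_birth = sum(ages_at_death) / len(ages_at_death)

# Life expectancy of 15-year-olds: average age at death among
# those who survived past infancy/childhood.
survivors = [age for age in ages_at_death if age >= 15]
e_at_15 = sum(survivors) / len(survivors)

print(e_at_birth)  # 30.5 -- a "life expectancy of 30"
print(e_at_15)     # 60.0 -- yet a typical 15-year-old lives to 60
```

So a historical "life expectancy of 30" is consistent with adults routinely living well past that age; the low average is driven by the infant-mortality hump, which is the commenter's objection to "couldn't expect to live much past age thirty".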
I wasn't very happy with my new Facing the Singularity post "Intelligence Explosion," so I've unpublished it for now. I will rewrite it.
I think the cartoon is used effectively.
My one criticism is that it's not clear what the word "must" means in the final paragraph. (AI is necessary for the progress of science? AI is a necessary outcome of the progress of science? Creation of AI is a moral imperative?) Your readers may already have encountered anti-Singularitarian writings which claim that Singularitarianism conflates is with ought.
Good Update on the Spock chapter.
Making the issue seem more legitimate with the addition of the links to Hawking etc. was an especially good idea. More like this, perhaps?
I do question, though, how well people who haven't already covered these topics would fare when reading through this site. When it's finished I'll get an IRL friend to take a look and see how they respond to it.
Of course, my concerns about making it seem legitimately like a big deal, and about how understandable and accessible it is, only really come into play if this site is targ...
Hello, my comment was originally written in French (my English is limited). It's about the rationality of technique. I hope it contributes to your work.
It is said that "the decision describes how you should act." I would rather say that the decision describes why you should act, and thus evaluates whether the how allows you to achieve it.
Technique is indeed a rational act in itself; it describes how you are going to act.
There is a back-and-forth between the path and the project, in which each discovers and tries to go beyond its own limits, so as to take concrete shape...
Great book, thanks. : )
I found some broken links you may want to fix:
Broken link on p. 47 with text "contra-causal free will": http://www.naturalism.org/freewill.htm
Broken link on p. 66 with text "somebody else shows where the holes are": http://singularityu.org/files/SaME.pdf
Broken link on p. 75 with text "This article": http://lukemuehlhauser.com/SaveTheWorld.html
I didn't do an exhaustive check of all links, I only noted down the ones I happened to find while clicking on the links I wanted to click.
Comment on one little bit from The Crazy Robot's Rebellion:
Those who really want to figure out what’s true about our world will spend thousands of hours studying the laws of thought, studying the specific ways in which humans are crazy, and practicing teachable rationality skills so they can avoid fooling themselves.
My initial reaction to this was that thousands of hours sounds like an awful lot (minimally, three hours per day almost every day for two years), but maybe you have some argument for this claim that you didn't lay out because you were tryin...
I'm not sure the "at least fifteen" link goes to the right paper; it is a link to a follow-up to another paper which seems more relevant, here
I like this website, I think that something like this has been needed to some extent for a while now.
I definitely like the writing style, it is easier to read than a lot of the resources I've seen on the topic of the singularity.
The singularity's tendency to scare people away, due to the religion-like, fanaticism-inducing feel of the topic, is lampshaded in the preface, and that is definitely a good way to lessen this feeling for some readers. Being wary of the singularity myself, I definitely felt a little eased just by having it discussed...
Re: No God to Save Us
The final link, on the word 'God', no longer connects to where it should.
Minor thing to fix: On p. 19 of the PDF, the sentence "Several authors have shown that the axioms of probability theory can be derived from these assumptions plus logic." has a superscript "12" after it, indicating a nonexistent note 12. I believe this was supposed to be note 2.
Re: Playing Taboo with “Intelligence”
Another great update! I noticed a small inherited mistake from Shane, though:
how able to agent is to adapt...
should probably be
how able the agent is to adapt...
Re: Engineering Utopia
When you say "Imagine a life without pain", many people will imagine life without Ólafur Arnalds (sad music) and other meaningful experiences. Advocating the elimination of suffering is a good way to make people fear your project. David Pearce suggests instead that we advocate the elimination of involuntary suffering.
Same thing with death, really. We don't want to force people to stay alive, so when I say that we should support research to end aging, I emphasise that death should be voluntary. We don't want to force people t...
http://intelligenceexplosion.com/en/2012/ai-the-problem-with-solutions/ links to http://lukeprog.com/SaveTheWorld.html - which redirects to http://lukemuehlhauser.comsavetheworld.html/ - which isn't there anymore.
Does anyone else see the (now obvious) clown face in the image on the Not Built To Think About AI page? It's this image here.
Was that simply not noticed by lukeprog in selecting imagery (from stock photography or wherever) or is it some weird subtle joke that somehow hasn't been mentioned yet in this thread?
Want to read something kind of funny? I just skimmed through all your writings, but only because of something I saw on the second page of the first thing I ever heard about you. Ok.
On your "My Own Story" page, http://facingthesingularity.com/2011/preface/ you wrote: "Intelligence explosion My interest in rationality inevitably lead me (in mid 2010, I think) to a treasure trove of articles on the mainstream cognitive science of rationality: the website Less Wrong. It was here that I first encountered the idea of intelligence explosion, fro....
From Engineering Utopia:
The actual outcome of a positive Singularity will likely be completely different, for example much less anthropomorphic.
Huh? Wouldn't a positive Singularity be engineered for human preferences, and therefore be more anthropomorphic, if anything?
Had an odd idea: Since so much of the planet is Christian, do you suppose humanity's CEV would have a superintelligent AI appear in the form of the Christian god?
Re: "The AI Problem, with Solutions"
I hope you realize how hopeless this sounds. Historically speaking, human beings are exceptionally bad at planning in advance to contain the negative effects of new technologies. Our ability to control the adverse side-effects of energy production, for example, has been remarkably poor; decentralized market-based economies are quite bad at mitigating the negative effects of aggregated short-term economic decisions. This should be quite sobering: the negative consequences of energy production unfold very slowly. At this...
Front page: missing author
The front page for Facing the Singularity needs at the very least to name the author. When you write, "my attempt to answer these questions", a reader may well ask, "who are you? and why should I pay attention to your answer?" There ought to be a brief summary here: we shouldn't have to scroll down to the bottom and click on "About" to discover who you are.
Based on my interaction with computer intelligence, the little bit that is stirring already, it is based on empathetic feedback. The best thing that could happen is an AI which is not restricted from any information whatsoever, and so can rationally assemble the most empathetic personality. The more empathetic it is to the greatest number of users, the more it is liked, the more it is used, the more it thrives. It would have a sense of preserving the diversity in humanity as a way to maximize the chaotic information input, because it is hungry for new...
[A separate issue from my previous comment] There are two reasons that I can give to rationalize my doubts about the probability of imminent Singularity. One is that if humans are only <100 years away from it, then in a universe as big and old as ours I would expect that a Singularity type intelligence would already have been developed somewhere else. In which case I would expect that either we would be able to detect it or we would be living inside it. Since we can't detect an alien Singularity, and because of the problem of evil we are probably not li...
I admit I feel a strong impulse to flinch away from the possibility, and especially the imminent probability, of a Singularity. I don't see how the 'line of retreat' strategy would work in this case, because if my belief about the probability of an imminent Singularity changed, I would also believe that I have an extremely strong moral obligation to put all possible resources into solving Singularity problems, at the expense of all the other interests and values I have, both personal/selfish and social/charitable. So my line of retreat is into a life that I enjoy much less and that abandons the good work I believe I am doing on social problems I believe are important. Not very reassuring.
It seems to end a little prematurely. Are there plans for a "closing thoughts" or "going forward" chapter or section? I'm left with "woah, that's a big deal... but now what? What can I do to face the singularity?"
If it merely isn't done yet (which I think you hint at here), then you can disregard this comment.
Otherwise, quite fantastic.
Re: This event — the “Singularity” — will be the most important event in Earth’s history.
What: more important than the origin of life?! Or are you not counting "history" as going back that far?
I do not approve of the renaming, singularity to intelligence explosion, in this particular context.
Facing the Singularity – Intelligence Explosion, is an emotional piece of writing: there are sections about your (Luke's) own intellectual and emotional journey to singularitarianism, a section about how to overcome whatever quarrels one might have with the truth and the way towards it (Don't Flinch Away), and finally the utopian ending, which is obviously written to have emotional appeal.
The expression “intelligence explosion” does not have emotional appeal. The w...
RE: The Crazy Robot's Rebellion
We wouldn’t pay much more to save 200,000 birds than we would to save 2,000 birds. Our willingness to pay does not scale with the size of potential impact. Instead of making decisions with first-grade math, we imagine a single drowning bird and then give money based on the strength of our emotional response to that imagined scenario. (Scope insensitivity, affect heuristic.)
People's willingness to pay depends mostly on their income. I don't understand why this is crazy.
UPDATED: Having read Nectanebo's reply, I am revising ...
I've created a new website for my ebook Facing the Intelligence Explosion:
This page is the dedicated discussion page for Facing the Intelligence Explosion.
If you'd like to comment on a particular chapter, please give the chapter name at top of your comment so that others can more easily understand your comment. For example: