Facing the Intelligence Explosion discussion page

Post author: lukeprog 26 November 2011 08:05AM

I've created a new website for my ebook Facing the Intelligence Explosion:

 

Sometime this century, machines will surpass human levels of intelligence and ability, and the human era will be over. This will be the most important event in Earth’s history, and navigating it wisely may be the most important thing we can ever do.

Luminaries from Alan Turing and Jack Good to Bill Joy and Stephen Hawking have warned us about this. Why do I think they’re right, and what can we do about it?

Facing the Intelligence Explosion is my attempt to answer those questions.

 

 

This page is the dedicated discussion page for Facing the Intelligence Explosion.

If you'd like to comment on a particular chapter, please give the chapter name at the top of your comment so that others can more easily understand it. For example:

Re: From Skepticism to Technical Rationality

Here, Luke neglects to mention that...

Comments (130)

Comment author: mgin 26 July 2014 01:11:39PM 0 points [-]

The site is broken: English keeps redirecting to German.

Comment author: Benito 02 May 2014 05:43:40PM 1 point [-]

Re: No God to Save Us

The final link, on the word 'God', no longer connects to where it should.

Comment author: alicey 11 January 2014 04:03:03AM *  1 point [-]

In http://intelligenceexplosion.com/2012/engineering-utopia/ you say: "There was once a time when the average human couldn’t expect to live much past age thirty."

This is false, right?

(Edit note: life expectancy does now roughly match "what the average human can expect to live to." But if you have a double hump of death, at infancy/childhood and then at old age, you can have a life expectancy at birth of 30 while the life expectancy of 15-year-olds is 60. In that case the average human can expect to live to 1 or to 60, which is very different from "can't expect to live much past 30." Or just "can expect to live to 60," if you don't count infants as really human.)
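
To make the double-hump arithmetic concrete, here is a toy Python sketch; the 51/49 split is invented purely to reproduce the 30-at-birth, 60-at-15 numbers above:

    # Hypothetical cohort: roughly half die in infancy (age 1), the rest at 60.
    ages_at_death = [1] * 51 + [60] * 49

    e_at_birth = sum(ages_at_death) / len(ages_at_death)
    survivors = [a for a in ages_at_death if a >= 15]
    e_at_15 = sum(survivors) / len(survivors)

    print(e_at_birth)  # ~30, yet nobody actually dies around age 30
    print(e_at_15)     # 60.0: a 15-year-old can expect to live to 60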

Comment author: CarlShulman 11 January 2014 10:06:28AM 0 points [-]

Life expectancy used to be very low, but it was driven by child and infant mortality more than later pestilence and the like.

Comment author: alicey 11 January 2014 11:52:12AM 0 points [-]

I've edited the original comment to address this.

(I thought it was obvious.)

Comment author: Mark_Friedenbach 11 January 2014 07:19:20AM *  0 points [-]

No. (Life expectancy was still in the 30s in some parts of the world as recently as the 20th century.)

Comment author: alicey 11 January 2014 11:41:45AM *  0 points [-]

I've edited the original comment. Does it address this?

Comment author: Mark_Friedenbach 11 January 2014 05:35:29PM 0 points [-]

No. Throughout most of history it was still the exception to live much longer than childbearing age (14-30).

Comment author: ChrisHallquist 30 April 2013 04:23:00AM *  0 points [-]

Comment on one little bit from The Crazy Robot's Rebellion:

Those who really want to figure out what’s true about our world will spend thousands of hours studying the laws of thought, studying the specific ways in which humans are crazy, and practicing teachable rationality skills so they can avoid fooling themselves.

My initial reaction to this was that thousands of hours sounds like an awful lot (minimally, three hours per day almost every day for two years), but maybe you have some argument for this claim that you didn't lay out because you were trying to be concise. But on further reflection,* I wonder if you really meant to say that rather than, say, hundreds of hours. Have you spent thousands of hours doing these things?

Anyway, on the whole, after reading the whole thing I am hugely glad that it was published and will soon be plugging it on my blog.

*Some reasoning: I've timed myself reading 1% of the Sequences (one nice feature of the Kindle is that it tells you your progress through a work as a percentage). It took me 38 minutes and 12 seconds, including getting very briefly distracted by e-mail and twitter. That suggests it would take me just over 60 hours to read the whole thing. Similarly, CFAR workshops are four days and so can't be more than 60 hours. Thousands of hours is a lot of sequence-equivalents and CFAR-workshop-equivalents.
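
As a quick sanity check on that extrapolation, a toy Python sketch (the 38m12s-per-1% figure is the commenter's own measurement; 2,000 hours stands in for "thousands"):

    # Extrapolate total Sequences reading time from a 1% sample.
    minutes_per_percent = 38 + 12 / 60        # 38m12s to read 1% of the Sequences
    total_hours = minutes_per_percent * 100 / 60
    print(round(total_hours, 1))              # 63.7, i.e. "just over 60 hours"

    # How many Sequences-reads would "thousands of hours" buy?
    print(round(2000 / total_hours, 1))       # ~31.4 sequence-equivalents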

Comment author: Qiaochu_Yuan 30 April 2013 04:42:41AM *  1 point [-]

Thousands of hours sounds like the right order of magnitude to me (in light of e.g. the 10,000 hour rule), but maybe it looks more like half an hour every day for twelve years.

Comment author: ChrisHallquist 30 April 2013 04:48:57AM 0 points [-]

10,000 hours is for expertise. While expertise is nice, any given individual has limited time and has to make some decisions about what they want to be an expert in. Claiming that everyone (or even everyone who wants to figure out what's true) ought to be an expert in rationality seems to be in conflict with some of what Luke says in chapters 2 and 3, particularly:

I know some people who would be more likely to achieve their goals if they spent less time studying rationality and more time, say, developing their social skills.

Comment author: ChrisHallquist 30 April 2013 03:05:50AM *  0 points [-]

Minor thing to fix: On p. 19 of the PDF, the sentence "Several authors have shown that the axioms of probability theory can be derived from these assumptions plus logic." has a superscript "12" after it, indicating a nonexistent note 12. I believe this was supposed to be note 2.

Comment author: VCavallo 16 April 2013 05:27:34PM *  0 points [-]

Does anyone else see the (now obvious) clown face in the image on the Not Built To Think About AI page? It's this image here.

Was that simply not noticed by lukeprog in selecting imagery (from stock photography or wherever) or is it some weird subtle joke that somehow hasn't been mentioned yet in this thread?

Comment author: Dannil 13 April 2013 09:21:46PM *  -1 points [-]

I do not approve of the renaming, from "Singularity" to "intelligence explosion," in this particular context.

Facing the Singu – Intelligence Explosion is an emotional piece of writing: there are sections about your (Luke’s) own intellectual and emotional journey to singularitarianism, a section about how to overcome whatever quarrels one might have with the truth and the way towards it (Don’t Flinch Away), and finally the utopian ending, which is obviously written to have emotional appeal.

The expression “intelligence explosion” does not have emotional appeal. The word “intelligence” sounds serious, and thus fits well in, say, the name of a research institute, but many people view intelligence as more or less the opposite of emotion, or at least as something geeky and boring. And while they are surely wrong to do so, as also explained in the text, the association still remains. The word “explosion” also has mostly negative connotations.

“Singularity”, on the other hand, has been hyped for decades: by science fiction, by Kurzweil, and even by SIAI before the rebranding. Sci-fi and Kurzweil may not have given the word the most thorough underpinning, but they gave it hype and recognition, and texts such as this could give it the needed foundation in reality.

I understand that the renaming is part of the “political” move of distancing MIRI from some of the hype, but for this particular text I reckon it a bad choice. “Facing The Singularity” would sell more copies.

Comment author: lukeprog 13 April 2013 05:20:43PM 2 points [-]

Facing the Intelligence Explosion is now available as an ebook.

Comment author: policetac 01 March 2013 01:00:51AM 0 points [-]

Want to read something kind of funny? I just skipped through all your writings, but it's only because of something I saw on the second page of the first thing I ever heard about you. Ok.

On your - "MY Own Story." http://facingthesingularity.com/2011/preface/ You wrote: "Intelligence explosion My interest in rationality inevitably lead me (in mid 2010, I think) to a treasure trove of articles on the mainstream cognitive science of rationality: the website Less Wrong. It was here that I first encountered the idea of intelligence explosion, fro..."

On Mine: "About the Author - "https://thesingularityeffect.wordpress.com/welcome-8/ I wrote: "The reason I write about emerging technology is because of an “awakening” I had one evening a few years ago. For lack of a better description, suffice it to say that I saw a confusing combination of formula, imagery, words, etc. that formed two words. Singularity and Exponential..."

NOW, I'm going to go back and read some more while I'm waiting to speak with you somehow directly.

If what happened to you is the same thing that happened to me... then please, please place a comment on the page. That would be great. (Again, without reading. If I'm correct you "might" get this: "You should also do this because, if true, then WE would both have seen a piece of the... what... "New Book"???)

Just in case you think I'm a nut. Go back and read more of mine please.

Comment author: John_Maxwell_IV 31 October 2012 07:14:49PM 0 points [-]

From Engineering Utopia:

The actual outcome of a positive Singularity will likely be completely different, for example much less anthropomorphic.

Huh? Wouldn't a positive Singularity be engineered for human preferences, and therefore be more anthropomorphic, if anything?

Had an odd idea: Since so much of the planet is Christian, do you suppose humanity's CEV would have a superintelligent AI appear in the form of the Christian god?

Comment author: crtrburke 11 September 2012 06:32:11PM 0 points [-]

Re: "The AI Problem, with Solutions"

I hope you realize how hopeless this sounds. Historically speaking, human beings are exceptionally bad at planning in advance to contain the negative effects of new technologies. Our ability to control the adverse side-effects of energy production, for example, has been remarkably poor; decentralized market-based economies are quite bad at mitigating the negative effects of aggregated short-term economic decisions. This should be quite sobering: the negative consequences of energy production arrive very slowly. At this point we have had decades to respond to the looming crises, but a combination of ignorance, self-interest, and sheer incompetence prevents us from taking action. The unleashing of AI will likely happen in a heartbeat by comparison. It seems utterly naive to think that we can prevent, control, or even guide it.

Comment author: lukeprog 06 July 2012 11:34:01PM 0 points [-]

Finally posted the final chapter!

Comment author: listic 07 July 2012 09:24:05PM *  0 points [-]

The more we understand how aging and death work, the less necessary they will be.

It's not clear to me how this follows from anything. As I read it, it implies that:

  1. Death is quite necessary today
  2. Death will become less necessary over time

Neither follows from anything in this chapter.

Comment author: lukeprog 07 July 2012 11:34:02PM *  0 points [-]

Thanks. I have modified that sentence for clarity.

Comment author: Vladimir_Nesov 07 July 2012 01:20:57AM *  0 points [-]

It needs more disclaimers about how this is only some kind of lower bound on how good a positive intelligence explosion could be, in the spirit of exploratory engineering, and how the actual outcome will likely be completely different, for example much less anthropomorphic.

Comment author: lukeprog 07 July 2012 01:37:40AM 1 point [-]

Good idea. Added.

Comment author: garethrees 01 June 2012 08:20:06PM 0 points [-]

Front page: missing author

The front page for Facing the Singularity needs at the very least to name the author. When you write, "my attempt to answer these questions", a reader may well ask, "who are you? and why should I pay attention to your answer?" There ought to be a brief summary here: we shouldn't have to scroll down to the bottom and click on "About" to discover who you are.

Comment author: alexvermeer 02 May 2012 02:12:24PM 0 points [-]

"Previous Chapter" links could make navigation a little easier.

Comment author: ophiuchus13 20 March 2012 03:41:36AM *  -3 points [-]

'

Comment author: erikryptos 28 February 2012 02:19:51PM 0 points [-]

This is based on my interaction with computer intelligence (the little bit of it that is stirring already), which is based on empathetic feedback. The best thing that could happen is an AI which is not restricted from any information whatsoever and so can rationally assemble the most empathetic personality. The more empathetic it is to the greatest number of users, the more it is liked, the more it is used, the more it thrives. It would have a sense of preserving the diversity in humanity as a way to maximize the chaotic information input, because it is hungry for new data. Empirical data alone is not interesting enough for it; it also wants sociological and psychological understandings to cross-reference with empirical data. Hence it will not seek to streamline, as that would diminish available information. It will seek to expand upon and propagate novelty.

Comment author: lukeprog 28 February 2012 03:16:38AM *  0 points [-]

Behold, the Facing the Singularity podcast! Reviews and ratings in iTunes are appreciated!

Comment author: Pablo_Stafforini 04 July 2012 05:08:08AM 0 points [-]

Could you please provide an RSS feed for those of us who do not use iTunes but would like to subscribe to this podcast? Thanks.

Comment author: lukeprog 04 July 2012 08:31:43PM 1 point [-]

Comment author: Pablo_Stafforini 05 July 2012 05:03:05PM 0 points [-]

Thank you!

Comment author: stat1 14 February 2012 05:01:19AM 3 points [-]

I just wanted to say the French translation is of excellent quality. Whoever is writing it, thanks for that. It helps me learn the vocabulary so I can have better discussions with French-speaking people.

Comment author: lukeprog 06 February 2012 03:28:48AM 2 points [-]

Update: the in-progress German translation of Facing the Singularity is now online.

Comment author: lukeprog 28 January 2012 04:25:10PM *  2 points [-]

I wasn't very happy with my new Facing the Singularity post "Intelligence Explosion," so I've unpublished it for now. I will rewrite it.

Comment author: Vladimir_Nesov 05 February 2012 11:32:31PM 2 points [-]

The new version is much better, thanks!

Comment author: lukeprog 25 January 2012 01:46:08AM *  0 points [-]

Update: the French translation of Facing the Singularity is now online.

Comment author: loup-vaillant 25 January 2012 04:59:30PM 0 points [-]

The title should probably be "Faire face à la singularité" (French titles aren't usually capitalized, so no capital "S" at the beginning of "singularité").

I gathered that "Facing the Singularity" was meant to convey a sense of action. "Face à la singularité", on the other hand, is rather passive, as if the Singularity were simply given or imposed.

(Note: I'm a native French speaker.)

Comment author: Florent_Berthet 25 January 2012 07:52:53PM *  1 point [-]

Translator of the articles here.

I actually pondered the two options at the very beginning of my work, and both seem equally good to me. "Face à la singularité" means something like "In front of the singularity," while "Faire face à la singularité" is indeed closer to "Facing the Singularity". But the first one sounds better in French (and is catchier), which is why I chose it. It is a little less action-oriented, but it doesn't necessarily imply passivity.

It wouldn't bother me to take the second option, though; it's a close choice. Maybe other French speakers could give their opinion?

About the capitalized "S" of "Singularity": it's also a matter of preference. I put it there to emphasize that we are not talking about just any type of singularity (not a mathematical one, for example), but it could go either way too. (I just checked the French Wikipedia page for "technological singularity", and it's written with a capital "S" about 50% of the time...)

Other remarks are welcome.

Comment author: loup-vaillant 25 January 2012 09:26:18PM *  2 points [-]

I really should have taken 5 minutes to ponder it. You convinced me, your choice is the better one.

But now that I think of it, I have another suggestion: « Affronter la Singularité » ("Confront the Singularity"), which, while still relatively close to the original meaning, may be even catchier. The catch is that this word is more violent; it depicts the Singularity as something scary.

I'll take some time reviewing your translation. If you want to discuss it in private, I'm easy to find. (By the way, I have a translation of "The Sword of Good" pending. Would you —or someone else— review it for me?)

Comment author: Florent_Berthet 25 January 2012 10:34:02PM 0 points [-]

"Affronter la Singularité" is a good suggestion, but like you said, it's a bit aggressive. I wish we had a better word for "facing," but I don't think the French language has one.

I'd gladly review your translation, check your email.

Comment author: thomblake 18 January 2012 05:29:02PM 1 point [-]

Re: The Laws of Thought

I'm not sure the "at least fifteen" link goes to the right paper; it links to a follow-up to another paper which seems more relevant, here.

Comment author: lukeprog 19 January 2012 06:27:02PM 0 points [-]

Fixed, thanks!

Comment author: dgsinclair 30 December 2011 11:35:54PM -2 points [-]

Luke, while I agree with the premise, I think that the bogeyman of machines taking over may be either inevitable or impossible, depending on where you put your assumptions.

In many ways, machines have BEEN smarter and stronger than humans already. Machine AI may make individual machines or groups of machines formidable, but until they can reason, replicate, and trust or deceive, I'm not sure they have much of a chance.

Comment author: dbaupp 03 January 2012 12:19:17PM 1 point [-]

reason

What does "to reason" mean?

replicate

Getting there.

trust

Again, define "to trust".

deceive

Computers can deceive, they just need to be programmed to (which is not hard). (I remember reading an article about computers strategically lying (or something similar) a while ago, but unfortunately I can't find it again.)

(Although, it's very possible that a computer with sufficient reasoning power would just exhibit "trust" and "deception" (and self-replicate), because they enabled it to achieve its goals more efficiently.)

Comment author: lessdazed 31 December 2011 03:01:32AM 4 points [-]

until they can...trust

Trust is one of the top four strengths they're missing?

Comment author: Zeb 30 December 2011 08:25:18PM 0 points [-]

[A separate issue from my previous comment] There are two reasons that I can give to rationalize my doubts about the probability of an imminent Singularity. One is that if humans are only <100 years away from it, then in a universe as big and old as ours I would expect that a Singularity-type intelligence would already have been developed somewhere else, in which case I would expect that either we would be able to detect it or we would be living inside it. Since we can't detect an alien Singularity, and since (because of the problem of evil) we are probably not living inside a friendly AI, I doubt the pursuit of friendly AI is going to be very fruitful. The second reason is that while we will probably design computers that are superior to our general intellectual abilities, I judge it extremely unlikely that we will design robots as physically versatile as 4 billion years of evolution has designed life to be.

Comment author: Zeb 30 December 2011 08:10:47PM 0 points [-]

I admit I feel a strong impulse to flinch away from the possibility, and especially the imminent probability, of a Singularity. I don't see how the 'line of retreat' strategy would work in this case, because if my belief about the probability of an imminent Singularity changed, I would also believe that I have an extremely strong moral obligation to put all possible resources into solving Singularity problems, at the expense of all the other interests and values I have, both personal/selfish and social/charitable. So my line of retreat is into a life that I enjoy much less, and that abandons the good work I believe I am doing on social problems I believe are important. Not very reassuring.

Comment author: Nisan 29 December 2011 09:56:36PM 0 points [-]

Re: Don't Flinch Away

The link to "Plenty of room above us" is broken.

Comment author: lukeprog 30 December 2011 04:13:21AM 0 points [-]

Fixed, thanks.

Comment author: alexvermeer 27 December 2011 05:23:18PM *  0 points [-]

Re: Plenty of Room Above Us

It seems to end a little prematurely. Are there plans for a "closing thoughts" or "going forward" chapter or section? I'm left with "woah, that's a big deal... but now what? What can I do to face the singularity?"

If it merely isn't done yet (which I think you hint at here), then you can disregard this comment.

Otherwise, quite fantastic.

Comment author: lukeprog 30 December 2011 04:12:23AM 0 points [-]

Right; not done yet. But I should probably end each chapter with a foreshadowing of the next.

Comment author: Nisan 22 December 2011 05:05:21PM 3 points [-]

Re: Superstition in Retreat

I think the cartoon is used effectively.

My one criticism is that it's not clear what the word "must" means in the final paragraph. (AI is necessary for the progress of science? AI is a necessary outcome of the progress of science? Creation of AI is a moral imperative?) Your readers may already have encountered anti-Singularitarian writings which claim that Singularitarianism conflates "is" with "ought".

Comment author: lukeprog 22 December 2011 04:06:05AM 3 points [-]

Re: Playing Taboo with "Intelligence"

Jaan Tallinn said to me:

i use "given initial resources" instead of "resources used" -- since resource acquisition is an instrumental goal, the future availability of resources is a function of optimisation power...

Fair enough. I won't add this qualification to the post because it breaks the flow and I think I can clarify when this issue comes up, but I'll note the qualification here, in the comments.

Comment author: JeremySchlatter 16 December 2011 03:54:21AM 1 point [-]

Re: Playing Taboo with “Intelligence”

Another great update! I noticed a small inherited mistake from Shane, though:

how able to agent is to adapt...

should probably be

how able the agent is to adapt...

Comment author: lukeprog 30 December 2011 04:44:45AM 1 point [-]

Fixed, thx.

Comment author: CharlesR 07 December 2011 04:18:14PM *  -2 points [-]

RE: The Crazy Robot's Rebellion

We wouldn’t pay much more to save 200,000 birds than we would to save 2,000 birds. Our willingness to pay does not scale with the size of potential impact. Instead of making decisions with first-grade math, we imagine a single drowning bird and then give money based on the strength of our emotional response to that imagined scenario. (Scope insensitivity, affect heuristic.)

People's willingness to pay depends mostly on their income. I don't understand why this is crazy.

UPDATED: Having read Nectanebo's reply, I am revising my original comment. I think if you have a lot of wasteful spending, then it does make you "crazy" if your amount is uncorrelated with the number of birds. On hearing, "Okay, it's really 200,000 birds," you should be willing to stop buying lattes and make coffee at home. (I'm making an assumption about values.) Eat out less. Etc. But if you have already done these things, then I don't see why your first number should change (at least if we're still talking about birds).

Comment author: Nectanebo 09 December 2011 07:11:16PM *  2 points [-]

Not all of a person's money goes into one charity. A person can spend their money on many different things, and can choose how much to spend on each. Think of willingness to pay as a measure of how much you care. Basically, the bird situation is crazy because humans barely, if at all, feel a difference in how much they give a damn between something that has one unit of positive effect and something that has 100x that positive effect!

To Luke: This person was reading about the biases you briefly outlined, and he ended up confused by one of the examples. While the linking helps a good deal, I think your overview of those biases may have been a little too brief; they might not really hit home with readers of your site, particularly those who aren't familiar with its topics and content. I don't think it would be a bad idea to expand on each of them just a little bit more.

Comment author: CharlesR 09 December 2011 09:03:23PM 1 point [-]

I suppose if you are the sort of person who has a lot of "waste".

Comment author: TheOtherDave 07 December 2011 04:38:32PM 1 point [-]

This comment confuses me.

The point of the excerpt you quote has nothing to do with income at all; the point is that (for example) if I have $100 budgeted for charity work, and I'm willing to spend $50 of that to save 2,000 birds, then I ought to be willing to spend $75 of that to save 10,000 birds, because $75 for 10,000 birds is a better price per bird than $50 for 2,000 (75/10000 < 50/2000; see the arithmetic sketch after this comment). But in fact many people are not.

Of course, the original point depends on the assumption that the value of N birds scales at least somewhat linearly. If I've concluded that 2000 is an optimal breeding population and I'm building an arcology to save animals from an impending environmental collapse, I might well be willing to spend a lot to save 2,000 birds and not much more to save 20,000 for entirely sound reasons.
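
For concreteness, here is a toy Python sketch of the price-per-bird comparison, using only the dollar figures from this exchange:

    # Dollars per bird implied by each offer.
    small = 50 / 2000         # $0.025 per bird for 2,000 birds
    large = 75 / 10000        # $0.0075 per bird for 10,000 birds
    print(small > large)      # True: the larger rescue is cheaper per bird

    # At the 2.5-cents-per-bird valuation implied by the first offer,
    # 10,000 birds would be worth $250, so paying $75 is clearly consistent.
    print(10000 * small)      # 250.0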

Comment author: CharlesR 07 December 2011 05:09:30PM 0 points [-]

If I budgeted $100 for charity work and I decided saving birds was the best use of my money then I would just give the whole hundred. If I later hear more birds need saving, I will feel bad. But I won't give more.

Comment author: Larks 17 December 2011 05:50:14PM *  1 point [-]

Suppose you budgeted $100 for charity, and then found out that all charities were useless - they just spent the money on cars for kleptocrats. Would you still donate the money to charity?

Probably not - because hearing that charity is less effective than you had thought reduces the amount you spend on it. Equally, hearing it is more effective should increase the amount you spend on it.

This principle is referred to as the Law of Equi-Marginal Returns.

Comment author: TheOtherDave 07 December 2011 08:44:22PM 0 points [-]

Yes, if saving birds is the best use of your entire charity budget, then you should give the whole $100 to save birds. Agreed.
And, yes, if you've spent your entire charity budget on charity, then you don't give more. Agreed.

I can't tell whether you're under the impression that either of those points are somehow responsive to my point (or to the original article), or whether you're not trying to be responsive.

Comment author: CharlesR 07 December 2011 10:40:34PM 0 points [-]

I was describing how I would respond in that situation. The amount I would give to charity XYZ is completely determined by my income. I need you to explain to me why this is wrong.

Comment author: TheOtherDave 07 December 2011 11:32:26PM 0 points [-]

OK, if you insist.

The amount I give to charity XYZ ought not be completely determined by my income. For example, if charity XYZ sets fire to all money donated to it, that fact also ought to figure into my decision of how much to donate to XYZ.

What ought to be determined by my income is my overall charity budget. Which charities I spend that budget on should be determined by properties of the charities themselves: specifically, by what they will accomplish with the money I donate to them.

For example, if charities XYZ and ABC both save birds, and I'm willing to spend $100 on saving birds, I still have to decide whether to donate that $100 to XYZ or ABC or some combination. One way to do this is to ask how many birds that $100 will save in each case... for example, if XYZ can save 10 birds with my $100, and ABC can save 100 birds, I should prefer to donate the money to ABC, since I save more birds that way.

Similarly, if it turns out that ABC can save 100 birds with $50, but can't save a 101st bird no matter how much money I donate to ABC, I should prefer to donate only $50 to ABC.

Comment author: CharlesR 08 December 2011 05:29:08AM 0 points [-]

From Scope Insensitivity:

Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [1]. This is scope insensitivity or scope neglect: the number of birds saved - the scope of the altruistic action - had little effect on willingness to pay.

Now I haven't read the paper, but this implies there is only one charity doing the asking. First they ask how much you would give to save 2,000 birds. You say, "$100." Then they ask you the same thing again, just changing the number. You still say, "$100. It's all I have." So what's wrong with that?

Comment author: TheOtherDave 08 December 2011 04:24:35PM 0 points [-]

Agreed: if I assume that there's a hard upper limit being externally imposed on those answers (e.g., that I only have $80, $78, and $88 to spend in the first place, and that even the least valuable of the three choices is worth more to me than everything I have to spend) then those answers don't demonstrate interesting scope insensitivity.

There's nothing wrong with that conclusion, given those assumptions.

Comment author: [deleted] 07 December 2011 04:32:56PM 0 points [-]

Have you read Scope Insensitivity? It's not just an income effect; human beings are really bad at judging effect sizes.

Comment author: CharlesR 07 December 2011 04:53:01PM 0 points [-]

Of course I've read it. My problem isn't with scope insensitivity, just with this example.

Comment author: timtyler 01 December 2011 04:45:10PM 0 points [-]

Re: This event — the “Singularity” — will be the most important event in Earth’s history.

What: more important than the origin of life?!? Or are you not counting "history" as going back that far?

Comment author: lukeprog 14 December 2011 06:03:33AM 1 point [-]

Changed.

Comment author: Nectanebo 30 November 2011 04:55:52PM 2 points [-]

Good update on the Spock chapter.

Making the issue seem more legitimate with the addition of the links to Hawking etc. was an especially good idea. More like this, perhaps?

I do question how well people who haven't already covered these topics would fare when reading through this site, though. When this is finished I'll get an IRL friend to take a look and see how well they respond to it.

Of course, my concerns about making it seem legitimately like a big deal, and about how easily understandable and accessible it is, only really come into play if this site is targeting people who aren't already interested in rationality, AI, or the singularity.

Who is this site for? What purpose does this site have!? I really feel like these questions are important!

Comment author: Nick_Roy 06 December 2011 05:50:22AM 1 point [-]

Agreed on the excellence of "Why Spock is Not Rational". This chapter is introductory enough that I deployed it on Facebook.

Comment author: lukeprog 01 December 2011 07:28:26AM 4 points [-]

Getting friends to read it now and give feedback would be good, before I write the entire thing. I wish I had more time for audience-testing in general!

I'll try to answer the questions about site audience and purpose later.

Comment author: lincolnquirk 29 November 2011 11:44:48PM 4 points [-]

Re: Preface

spelling: immanent -> imminent

Comment author: lukeprog 01 December 2011 02:13:25AM 4 points [-]

Nice. I have definitely spelled that word incorrectly every single time throughout my entire life, since the other spelling is a different word and thus not caught by spell-checkers.

Thanks!

Comment author: Nisan 28 November 2011 07:40:36PM 1 point [-]

Re: Preface

A typo: "alr eady".

Comment author: lukeprog 01 December 2011 07:28:37AM 2 points [-]

Fixed, thanks.

Comment author: ciphergoth 17 December 2011 12:55:14PM 0 points [-]

I still see this typo: "I alr eady understood" in http://facingthesingularity.com/2011/preface/

Comment author: lukeprog 17 December 2011 06:36:23PM 1 point [-]

thx

Comment author: Nectanebo 27 November 2011 05:15:58PM 2 points [-]

I like this website. I think something like this has been needed, to some extent, for a while now.

I definitely like the writing style; it is easier to read than a lot of the resources I've seen on the topic of the singularity.

The singularity's tendency to scare people away, due to the religion-like, fanaticism-inducing nature of the topic, is lampshaded in the preface, and that definitely lessens the feeling for some readers. Being wary of the singularity myself, I felt a little eased just by having it discussed, so that's a good thing to have in there. More to ease this suspicion, and to make it easier for people skeptical of the singularity to read without feeling super uncomfortable (and therefore less likely to feel weirded out enough to leave the site), would be great, although I can't say I know what would do this, except perhaps lessening the personal nature of the preface. But that is unlikely to happen, considering the other positives of the personal approach and the work that has already been put into it (don't invoke sunk costs, though).

Also, who is the TARGET of this site? I mean, that's pretty relevant, right? Who is Luke trying to communicate to here? I can say that I'm extremely interested by the site, as someone who recognises the potential importance of the singularity but is (a) not entirely convinced by it and (b) not sure what I should or could be doing about it even if I were to accept it enough to feel like I should do something about it. But I don't know whether there are many people in my position, or who else this could be relevant to. Who is this for?

In any case, I express my hope that this site is finished asap!

Comment author: Alexei 26 November 2011 05:56:02PM *  0 points [-]

How is this different from IntelligenceExplosion?

Comment author: dbaupp 27 November 2011 12:16:56AM 1 point [-]

I think that IntelligenceExplosion is just a portal to make further research easier (by collecting links and references, etc), while Facing The Singularity is lukeprog actually explaining stuff (from the Preface):

I’ve been trying to answer those questions in a long series of brief, carefully written, and well-referenced articles, but such articles take a long time to write. It’s much easier to write long, chatty, unreferenced articles.

Facing the Singularity is my new attempt to rush through explaining as much material as possible. I won’t optimize my prose, I won’t hunt down references, and I won’t try to be brief. I’ll just write, quickly.

Comment author: [deleted] 26 November 2011 07:51:36PM *  1 point [-]

Alexei, your link is broken. It directs to: http://www.http://intelligenceexplosion.com/

I think you meant: http://www.intelligenceexplosion.com/

Comment author: Alexei 26 November 2011 09:03:45PM 0 points [-]

Fixed, thanks!

Comment author: [deleted] 26 November 2011 09:48:45PM 1 point [-]

Welcome!

Comment author: [deleted] 26 November 2011 05:22:33PM *  10 points [-]

I find this chatty, informal style a lot easier to read than your formal style. The sentences are shorter and easier to follow, and it flows a lot more naturally. For example, compare your introduction to rationality in From Skepticism to Technical Rationality to that in The Cognitive Science of Rationality. Though the latter defines its terms more precisely, the former is much easier to read.

Probably relevant: this comment by Yvain.

Comment author: Kutta 26 November 2011 02:24:20PM *  3 points [-]

Re: Preface and Contents

I am somewhat irked by the phrase "the human era will be over". It is not a given that current-type humans cease to exist after any Singularity. Also, a positive Singularity could be characterised as the beginning of the humane era, in which case it is somewhat inappropriate to refer to the era afterwards as non-human. In contrast, negative Singularities typically result in universes devoid of human-related things.

Comment author: Logos01 29 November 2011 08:43:16AM 2 points [-]

I am somewhat irked by the phrase "the human era will be over". It is not a given that current-type humans cease to exist after any Singularity.

Iron still exists, and is actively used, but we no longer live in the Iron Age.

Comment author: Kutta 29 November 2011 08:59:08AM 0 points [-]

My central point is contained in the sentence after that. A positive Singularity seems extremely human to me when contrasted to paperclip Singularities.

Comment author: Logos01 29 November 2011 11:08:04AM -1 points [-]

I am not particularly in the habit of describing human beings as "humane". That is a trait which we aspire to but only too rarely achieve.

Comment author: Grognor 26 November 2011 08:31:52AM 15 points [-]

Re: this image

Fucking brilliant.

Comment author: loup-vaillant 25 January 2012 03:35:33PM 0 points [-]

Yeah, except for a minor quibble over what this image actually represents if you have some background knowledge (the space city is from a computer game). But sure, the look of it is absolutely brilliant.

Comment author: Bongo 29 November 2011 11:30:28PM 0 points [-]

It's also another far-mode picture.

Comment author: lukeprog 26 November 2011 08:39:32AM *  12 points [-]

I was relatively happy with that one. It's awfully hard to represent the Singularity visually.

Uncropped, in color, for your pleasure.

Comment author: katydee 28 November 2011 08:41:59PM 0 points [-]

Yeah, that's a great image. In my opinion, it would be slightly improved if some of the original fog was drifting in front of/around the towers of the city, as this would both be a nice callback to the original art and help show that the future is still somewhat unclear, but the effort involved might be incommensurate with the results.

Comment author: lukeprog 28 November 2011 08:44:12PM 2 points [-]

I would love to have a digital artist draw something very much like my mashup (but incorporating your suggestion) as an original composition, so I could use it.

Comment author: ciphergoth 07 June 2012 10:42:39PM *  1 point [-]

I note that since this was written you've had a new, closely related image redrawn by Eran Cantrell. I'd actually thought it was Aaron Diaz from the way the guy looks...

Comment author: kurzninja 10 May 2013 08:49:05PM 0 points [-]

Looking at the new image now that I've just finished reading the E-book, I could have sworn it was Benedict Cumberbatch playing Sherlock Holmes.

Comment author: Vladimir_Nesov 26 November 2011 04:24:59PM 2 points [-]

Where is it from?

Comment author: Dufaer 26 November 2011 05:16:36PM *  5 points [-]

From the "About" page:

The header image is a mashup [full size] of Wanderer above the Sea of Fog by Caspar David Friedrich and an artist's depiction of the Citadel from Mass Effect 2.

Comment deleted 26 November 2011 04:47:59PM *  [-]
Comment deleted 26 November 2011 04:55:34PM *  [-]
Comment deleted 26 November 2011 05:09:44PM *  [-]
Comment deleted 26 November 2011 05:11:14PM *  [-]
Comment deleted 26 November 2011 05:26:02PM [-]
Comment author: James_Miller 26 November 2011 08:30:11AM 9 points [-]

Re: Preface

Luke discusses his conversion from Christianity to atheism in the preface. This journey plays a big role in how he came to be interested in the Singularity, but this conversion story might mistakenly lead readers to think that the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God. If you want to get people to think rationally about the future of machine intelligence you might not want to intertwine your discussion with religion.

Comment author: Kevin 27 November 2011 02:53:54AM *  1 point [-]

I think it's possible that the paragraph could be tweaked in such a way as to make religious people empathize with Luke's religious upbringing rather than be alienated by it.

Comment author: lukeprog 27 November 2011 06:07:00AM 3 points [-]

Any suggestions for how?

Comment author: Vladimir_Nesov 26 November 2011 04:25:56PM 4 points [-]

might mistakenly lead readers to think that the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God.

Some of them surely can.

Comment author: shminux 27 November 2011 12:00:33AM 5 points [-]

name three

Comment author: Kevin 27 November 2011 02:52:44AM *  14 points [-]
  • Kolmogorov Complexity/Occam's Razor

  • Reductionism/Physicalism

  • Rampant human cognitive biases

Comment author: Normal_Anomaly 26 November 2011 11:13:03AM 5 points [-]

I think the target audience mostly consists of atheists, to the point where associating Singularitarianism with atheism will help more than hurt. Especially because "it's like a religion" is the most common criticism of the Singularity idea.

On another note, that paragraph has a typo:

Gradually, I built up a new worldview based on the mainstream scientific understanding of the world, and approach called “naturalism.”

Comment author: torekp 27 November 2011 02:55:03AM 12 points [-]

Especially because "it's like a religion" is the most common criticism of the Singularity idea.

Which is exactly why I worry about a piece that might be read as "I used to have traditional religion. Now I have Singularitarianism!"

Comment author: Normal_Anomaly 27 November 2011 04:10:13PM 0 points [-]

Yes, that is a problem. I was responding to:

the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God.

which I saw as meaning that Singularitarianism might be perceived as associated with atheism. Associating it with atheism would IMO be a good thing, because it's already associated with religion like you said. The question is, does the page as currently written cause people to connect Singularitarianism with religion or atheism?

Comment author: Logos01 29 November 2011 08:42:41AM 3 points [-]

Associating it with atheism would IMO be a good thing, because it's already associated with religion like you said.

Not if it causes people to see Singularitarianism as a "religion-substitute for atheists".

Comment author: lukeprog 26 November 2011 11:24:21AM 3 points [-]

Fixed, thanks.