XiXiDu comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM

Comment author: Yvain 16 March 2012 07:03:08PM *  5 points

It's probably not entirely fair to compare my case to yours because I started reading the Sequences before I was part of this community, and so I was much less familiar with the idea of Friendly AI than you are. But to answer your questions:

  1. Before reading the Sequences, I assumed unfriendly AI was one more crazy speculative idea about the future, around the level of "We'll discover psionics and merge into a single cosmic consciousness" and not really worthy of any more consideration.

  2. I think you believe that superintelligent AI may not be possible, that it's unlikely to "go foom", and that in general it's not a great use of our time to worry about it.

  3. That's a good question. Looking over the post list I'm surprised that I can't find any that look like the sort of thing that would do that directly (there's a lot about how it's important to build a Friendly AI as opposed to just throw one together and assume it will be Friendly, but if I understand you right we don't disagree there). It could have been an indirect effect of realizing that the person who wrote these was very smart and he believed in it. It could have been that they taught me enough rationality to realize I might be wrong about this and should consider changing my mind. And it could have been just very gradual worldview change. You said you were reading the debate with Robin, and that seems like a good starting point. The two dependency thingies labelled "Five Sources of Discontinuity" and "Optimization and the Singularity" here also give me vague memories of being good. But I guess that either I was wrong about the Sequences being full of brilliant pro-Singularity arguments, or they're more complicated than I thought. Maybe someone else who's read them more recently than I have can answer this better?

...which shouldn't discourage you from reading the Sequences. They're really good. Really. They might or might not directly help you on this question, but they'll be indirectly helpful on this and many other things. It's a really good use of your time (debating with me isn't; I don't claim any special insight on this issue beyond what I've picked up from the Sequences and elsewhere, and I don't think I've ever posted any articles on AI simply because I wouldn't even meet this community's lax standards for expertise).

Comment author: XiXiDu 17 March 2012 12:25:50PM *  1 point

(Addendum to my other comment)

Here is why I believe that reading the Sequences might not be worth the effort:

1) According to your survey, 38.5% of respondents have read at least 75% of the Sequences, yet only 16.5% think that unfriendly AI is the most fearsome existential risk.

2) The following (smart) people have read the Sequences, and more, but do not agree about risks from AI:

  • Robin Hanson
  • Katja Grace (who has been a visiting fellow)
  • John Baez (who interviewed Eliezer Yudkowsky)
  • Holden Karnofsky
  • Ben Goertzel
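A side note on the survey point above: the two quoted figures are marginal percentages, and on their own they cannot tell you whether reading the Sequences correlates with the belief; for that you would need the conditional rates. A minimal sketch, where the joint counts are invented purely to match the two quoted marginals:

```python
# Hedged sketch: 38.5% read >= 75% of the Sequences and 16.5% rank uFAI as
# the top existential risk (the two marginals quoted from the survey).
# The split of believers between readers and non-readers below is
# HYPOTHETICAL -- the survey comment does not report it.

total = 1000                         # hypothetical number of respondents
read = round(total * 0.385)          # 385 read >= 75% (matches 38.5%)
fear = round(total * 0.165)          # 165 rank uFAI first (matches 16.5%)

fear_and_read = 120                  # hypothetical joint count
fear_not_read = fear - fear_and_read # 45

p_fear_given_read = fear_and_read / read
p_fear_given_not_read = fear_not_read / (total - read)

print(f"P(uFAI top risk | read)     = {p_fear_given_read:.3f}")
print(f"P(uFAI top risk | not read) = {p_fear_given_not_read:.3f}")
# Under these made-up joint counts, readers are roughly 4x more likely to
# hold the belief -- the same two marginals are compatible with a strong
# correlation between reading and believing.
```

So the 16.5% figure by itself does not settle whether reading moves people toward or away from the belief.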

Comment author: Yvain 17 March 2012 12:54:32PM 2 points

I hope I didn't claim that the Sequences, or any argument, were 100% effective in changing the mind of every single person who read them.

Also, Ben Goertzel has read all the Sequences? That makes that recent conversation with Luke kind of sad.

Comment author: XiXiDu 17 March 2012 01:55:08PM -1 points

I hope I didn't claim that the Sequences, or any argument, were 100% effective in changing the mind of every single person who read them.

No, but in light of an expected utility calculation, why would I read the Sequences?

Comment author: Gabriel 17 March 2012 11:57:00PM 2 points

They contain many insights unrelated to AI (looking at the Sequences wiki page, it seems that most AI-ish things are concentrated in the second half). And many people had fun reading them. I think it would be a better use of time than the generic math self-education you speak of elsewhere (I don't think it makes sense to learn math as an instrumental goal without a specific application in mind -- unless you simply like math, in which case knock yourself out).

From a theoretical standpoint, you should never expect that observing something will shift your beliefs in some particular direction (and, guess what, there's a post about that). This doesn't quite hold for humans -- we can be convinced of things, and we can expect to be convinced even if we don't want to be. But then the fact that the Sequences fail to convince many people shouldn't be an argument against reading them. At least now you can be sure that they're safe to read and won't brainwash you.
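The "never expect observation to shift your beliefs in a particular direction" point is the conservation-of-expected-evidence identity: before looking, the probability-weighted average of the posteriors you might end up with must equal your prior. A quick numeric check (all probabilities here are arbitrary, chosen only for illustration):

```python
# Conservation of expected evidence, sketched with made-up numbers.
prior_h = 0.3           # P(H): prior that some hypothesis is true
p_e_given_h = 0.8       # P(E | H): chance of seeing evidence E if H holds
p_e_given_not_h = 0.4   # P(E | not H)

# Marginal probability of observing E.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Posterior after each possible observation (Bayes' rule).
post_if_e = p_e_given_h * prior_h / p_e
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)

# Expected posterior, weighted by how likely each observation is.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e

assert abs(expected_posterior - prior_h) < 1e-9
print(f"{expected_posterior:.3f}")  # -> 0.300, equal to the prior
```

The identity holds for ideal reasoners; Gabriel's point is that humans predictably fail it, which is exactly why "I expect to be convinced" is coherent for us.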

Comment author: wedrifid 18 March 2012 01:54:11AM 3 points

No, but in light of an expected utility calculation, why would I read the Sequences?

Assuming you continue to write posts authoritatively about subjects related to said Sequences - including criticisms of the contents therein - having read the Sequences may reduce the frequency with which you humiliate yourself.

Comment author: ArisKatsaris 17 March 2012 12:39:58PM *  2 points

According to your survey, 38.5% of respondents have read at least 75% of the Sequences, yet only 16.5% think that unfriendly AI is the most fearsome existential risk.

So what? I'm not even sure that Eliezer himself considers uFAI the most likely source of extinction. It's just that Friendly AI would help save us from most of the other possible sources of extinction too (not just from uFAI), and from several other sources of suffering (not just extinction), so figuring it out kills multiple birds with one stone.

As a point of note, I myself didn't place uFAI as the most likely existential risk in that survey. That doesn't mean I share your attitude.