
paulfchristiano comments on Superintelligence Reading Group 2: Forecasting AI - Less Wrong Discussion

Post author: KatjaGrace 23 September 2014 01:00AM




Comment author: paulfchristiano 23 September 2014 02:27:38AM 2 points

Some thoughts on this perspective:

  1. Most people are not so exclusively interested in existential risk reduction; their decisions depend on how the development of AI compares to more pressing concerns. I think you can make a good case that normal humanitarians are significantly underestimating the likely impact of AI; if that's true, then by making that case one might be able to marshal a lot of additional effort.

  2. Echoing Katja: general improvements in individual and collective competence are also going to have a material effect on how the development of AI is handled. If AI is far off (e.g. if we were having this discussion in 1600) then it seems that those effects will tend to dominate the achievable direct impacts. Even if AI is developed relatively soon, it's still plausible to me that institutional quality will be a big determinant of outcomes relative to safety work (though it's less plausible on the margin, given just how little safety work there is).

I can imagine a future where all of the low-hanging fruit is taken in many domains, so that the best available intervention for altruists concerned with long-term trajectories is to focus on improbable scenarios that are being neglected by the rest of the world because they don't care as much. For better or worse, I don't think we are there yet.

Comment author: Jeff_Alexander 23 September 2014 07:28:13AM 1 point

how the development of AI compares to more pressing concerns

Which concerns are more pressing? How was this assessed? I don't object to other things being more important, but I do find the suggestion that there are more pressing concerns if AI is a bit further out to be one of the least persuasive aspects of the readings, given the lack of comparison and calculation.

2.

I agree with all of this, more or less. Perhaps I didn't state my caveats strongly enough. I just want an explicit comparison to be attempted and presented (e.g., given a 10% chance of AI within 20 years, 50% within 50 years, 70% within 100 years, etc., the expected value of working on AI now vs. synthetic biology risk reduction, healthy human life extension, making the species multi-planetary, raising the rationality waterline, etc.) before accepting that AI is only worth thinking about if it's near.
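The kind of explicit comparison requested above can be sketched mechanically: take the cumulative timeline probabilities from the comment's own example (10% within 20 years, 50% within 50, 70% within 100), convert them to per-interval arrival probabilities, and weigh an assumed value of safety work in each interval against a competing intervention. All of the value numbers below are made-up placeholders purely for illustration; only the timeline probabilities come from the comment.

```python
# Toy expected-value comparison using the comment's illustrative AI timeline:
# P(AI within 20y) = 0.10, within 50y = 0.50, within 100y = 0.70 (cumulative).
timeline = [(20, 0.10), (50, 0.50), (100, 0.70)]

# Convert the cumulative distribution into per-interval arrival probabilities.
intervals = []
prev = 0.0
for horizon, cumulative in timeline:
    intervals.append((horizon, cumulative - prev))
    prev = cumulative
# intervals == [(20, 0.10), (50, 0.40), (100, 0.20)]

def safety_value_if_ai_at(horizon_years):
    """Hypothetical value of a unit of safety work done now, assumed to
    decay the later AI arrives (later work by others substitutes for it).
    The 1/years shape is an arbitrary placeholder, not a claim."""
    return 1.0 / horizon_years

ev_ai_safety = sum(p * safety_value_if_ai_at(h) for h, p in intervals)

# Placeholder flat expected value for a competing intervention
# (e.g. synthetic biology risk reduction) -- also purely illustrative.
ev_alternative = 0.02

print(f"EV(AI safety work now): {ev_ai_safety:.4f}")
print(f"EV(alternative):        {ev_alternative:.4f}")
```

Under these placeholder assumptions the comparison is a one-liner per intervention; the real work, as the comment argues, is in defending the probability and value inputs rather than the arithmetic.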