JoshuaFox comments on Stupid Questions Thread - January 2014 - Less Wrong

Post author: RomeoStevens, 13 January 2014 02:31AM


Comment author: JoshuaFox, 14 January 2014 07:47:45AM, 10 points

I think the whole MIRI/LessWrong memeplex is not massively confused.

But conditional on it turning out to be very very wrong, here is my answer:

A. MIRI

  1. The future does indeed take radical new directions, but these directions are nothing remotely like the hard-takeoff de-novo-AI intelligence explosion which MIRI now treats as the max-prob scenario. Any sci-fi fan can imagine lots of weird futures, and maybe some other one will actually emerge.

  2. MIRI's AI work turns out to trigger a massive negative outcome -- either the UFAI explosion they are trying to avoid, or something else almost as bad. This may result from fundamental mistakes in understanding, or because of some minor bug.

  3. It turns out that the UFAI explosion really is the risk, but that MIRI's AI work is just the wrong direction; e.g., it turns out that building a community of AIs in rough power balance, or experimenting by trial-and-error with nascent AGIs, is the right solution.

B. CfAR

  1. It turns out that the whole CfAR methodology is far inferior in instrumental outcomes to, say, Mormonism. Of course, CfAR would say that if another approach is instrumentally better, they would adopt it. But if they only find this out years down the road, this could be a massive failure scenario.

  2. It turns out that epistemologically non-rational techniques are instrumentally valuable. Cf. Mormonism. Again, CfAR knows this, but in this failure scenario, they fail to reconcile the differences between the two types of rationality they are trying for.

Again, I think that the above scenarios are not likely, but they're my best guess at what "massively wrong" would look like.

Comment author: John_Maxwell_IV, 15 January 2014 05:44:51AM, 7 points

MIRI failure modes that all seem likely to me:

  • They talk about AGI a bunch and end up triggering an AGI arms race.

  • AI doesn't explode the way they talk about, causing them to lose credibility on the importance of AI safety as well. (Relatively slow-moving) disaster ensues.

  • The future is just way harder to predict than everyone thought it would be... we're cavemen trying to envision the information age, and all of our guesses are way off the mark in ways we couldn't have possibly foreseen.

  • Uploads come first.