JoshuaFox comments on AALWA: Ask any LessWronger anything - Less Wrong

28 Post author: Will_Newsome 12 January 2014 02:18AM


Comment author: JoshuaFox 12 January 2014 07:47:26PM 1 point

Is there any uptake of MIRI ideas in the AI community? Of HPMOR?

Comment author: jsteinhardt 13 January 2014 08:08:20AM 6 points

I wouldn't presume to know what the field as a whole thinks, as I think views vary a lot from place to place and I've only spent serious time at a few universities. However, I can speculate based on the data I do have.

I think a sizable number (25%?) of the AI graduate students I know are aware of LessWrong's existence. A sizable (though probably smaller) number have read at least a few chapters of HPMOR; for the latter I'm mostly going off of demographics, since not many have told me directly that they read it.

There is very little actual discussion of MIRI or LessWrong. From what I can gather, most people silently disagree with MIRI, and a few probably silently agree. I would guess almost no one knows what MIRI is, although more will have heard of the Singularity Institute (though they might confuse it with Singularity University). People do occasionally wonder whether we're going to end up killing everyone, although not for long.

To address your comment in the grandchild, I certainly don't speak for Norvig, but I would guess that "Norvig takes these [MIRI] ideas seriously" is probably false. He does talk at the Singularity Summit, but the tone when I attended his talk sounded more like "Hey, you guys just said a bunch of stuff; based on what people in AI actually do, here are the parts that seem true and here are the parts that seem false." It's also important to note that the notion of the singularity is much more widespread as a concept than MIRI in particular. "Norvig takes the singularity seriously" seems much more likely to be true to me, though again, I'm far from being in a position to make informed statements about his views.

Comment author: JoshuaFox 13 January 2014 08:40:38AM 0 points

Thanks. I was basing my comments about Norvig on what he says in the introduction to his AI textbook, which does address UFAI risk.

Comment author: jsteinhardt 13 January 2014 08:47:05AM 1 point

What's the quote? You may very well have better knowledge of Norvig's opinions in particular than I do. I've only talked to him in person twice briefly, neither time about AGI, and I haven't read his book.

Comment author: JoshuaFox 13 January 2014 09:06:25AM 0 points

Russell and Norvig, Artificial Intelligence: A Modern Approach, third edition, 2010, pp. 1037–1040. Available here.

Comment author: jaibot 13 January 2014 02:03:46PM 3 points

I think the key quote here is:

Arguments for and against strong AI are inconclusive. Few mainstream researchers believe that anything significant hinges on the outcome of the debate.

Comment author: jsteinhardt 14 January 2014 09:19:20AM 1 point

Hm... I personally find it hard to divine much about Norvig's personal views from this. It seems like a relatively straightforward factual statement about the state of the field (though I'd hedge to the extent that I think the arguments in favor of strong AI being possible are relatively conclusive, i.e. >90% in favor of possibility).

Comment author: lukeprog 15 January 2014 12:27:08AM 4 points [-]

When I spoke to Norvig at the 2012 Summit, he seemed to think getting good outcomes from AGI could indeed be pretty hard, but also that AGI was probably a few centuries away. IIRC.

Comment author: IlyaShpitser 15 January 2014 12:52:16AM 0 points

Interesting, thanks.

Comment author: jsteinhardt 13 January 2014 04:30:05AM 1 point

Like Mark, I'm not sure I was able to parse your question. Can you please clarify?

Comment author: JoshuaFox 13 January 2014 07:30:16AM 1 point

Right, there was a typo. I've fixed it now. I'm just wondering whether MIRI-like ideas are spreading among AI researchers. We see that Norvig takes these ideas seriously.

And separately, I wonder whether HPMOR is a fad in elite AI circles. I have heard that it's popular in top physics departments.

Comment author: [deleted] 12 January 2014 08:57:11PM 1 point

What does that question mean?

Comment author: JoshuaFox 13 January 2014 07:53:24AM 0 points

Sorry, typo now fixed. See my response to jsteinhardt below.