jacob_cannell comments on MIRI's 2015 Summer Fundraiser! - Less Wrong

Post author: So8res 19 August 2015 12:27AM




Comment author: Wei_Dai 22 July 2015 12:21:15AM 10 points

With this level of funding, we would be able to begin building an entirely new AI alignment research team working in parallel to our current team, working on different problems and taking a different approach. Our current technical agenda is not the only way to approach the challenges that lie ahead, and we would be thrilled to get the opportunity to spark a second research group.

Hi Nate, can you briefly describe this second approach? (Not that I have $6M, but I'm curious what other FAI approach MIRI considers promising.)

On another note, do you know anything about Elon Musk possibly having changed his mind about the threat of AI and how that might affect future funding of work in this area? From this report of a panel discussion at ICML 2015:

Apparently Hassabis of DeepMind has been at the core of recent AI fear from prominent figures such as Elon Musk, Stephen Hawking and Bill Gates. Hassabis introduced AI to Musk, which may have alarmed him. However, in recent months, Hassabis has convinced Musk, and also had a three-hour-long chat with Hawking about this. According to him, Hawking is less worried now. However, he emphasized that we must be ready, not fear, for the future.

Comment author: jacob_cannell 22 July 2015 01:07:22AM 2 points

That paragraph almost makes sense, but it seems to be missing a key sentence or two. Hassabis is "at the core of recent AI fear" and introduced AI to Musk, but then Hassabis changed his mind and proceeded to undo his previous influence? It's hard to imagine those talks - "Oh yeah, you know this whole AI risk thing I got you worried about? I was wrong, it's no big deal now."

Comment author: Vaniver 22 July 2015 01:51:39AM 3 points

It seems more likely to me that Hassabis originally said something like "with things as they stand now, a bad end seems most likely." Musk and the others start to take the fear seriously and act on it, and then when they talk to Hassabis again, he says "with things as they stand now, a bad end seems likely to be avoided."

In particular, we seem to have moved from a state where AI risk needed more publicity to a state where AI risk has the correct amount of publicity, and more might be actively harmful.