Kaj_Sotala comments on Outside View(s) and MIRI's FAI Endgame - Less Wrong

Post author: Wei_Dai 28 August 2013 11:27PM


Comment author: Kaj_Sotala 30 August 2013 05:53:30AM 1 point

Personally, I didn't respond to this post because my reaction to it was mostly "yes, this is a problem, but I don't see how talking about it will help at this point; we'll just have to wait and see". In other words, I feel that MIRI will simply have to experiment with a lot of different strategies and see which ones look promising. That experimentation may reveal a way to solve issues like this one, or MIRI may end up pursuing an entirely different strategy. But I expect that we'll actually have to try out the different strategies before we can know.

Comment author: Wei_Dai 30 August 2013 06:39:28AM 2 points

I'm not sure what kind of strategies you are referring to. Can you give some examples of strategies that you think MIRI should experiment with?

Comment author: Kaj_Sotala 30 August 2013 08:19:48AM 2 points

For instance, MIRI's 2013 strategy mostly involves making math progress and trying to get mathematicians in academia interested in these kinds of problems, which is a different approach from the "small FAI team" one that you focus on in your post. As another kind of approach, the considerations outlined in AGI Impact Experts and Friendly AI Experts would suggest a program of training people with expertise in AI safety questions, in order to have safety experts involved in many different AI projects. There have also been various proposals about eventually pushing for regulation of AI, though MIRI's comparative advantage is probably more on the side of technical research.

Comment author: Wei_Dai 30 August 2013 09:00:04AM 2 points

I thought "making math progress and trying to get mathematicians in academia interested in these kinds of problems" was intended to be preparation for eventually doing the "small FAI team" approach, by 1) enlarging the talent pool that MIRI can eventually hire from, and 2) offloading the subset of problems that Eliezer thinks are safe onto the academic community. If "small FAI team" is not a good idea, then I don't see what purpose "making math progress and trying to get mathematicians in academia interested in these kinds of problems" serves, or how experimenting with it is useful. The experiment could be very "successful" in making lots of math progress and getting a lot of mathematicians interested, but that doesn't help with the endgame problem that I point out in the OP.

Training people with expertise in AI safety questions and pushing for regulation of AI both sound good to me, and I'd be happy to see MIRI try them. You could consider my post as an argument for redirecting resources away from preparing for "small FAI team" and into such experiments.

Comment author: Kaj_Sotala 30 August 2013 02:14:13PM 3 points

I thought "making math progress and trying to get mathematicians in academia interested in these kinds of problems" was intended to be preparation

Yes, I believe that is indeed the intention, but it's worth noting that the things MIRI is currently doing allow it to pursue either strategy in the future. So if MIRI gives up on the "small FAI team" strategy because it turns out to be too hard, it may still pursue the "big academic research" strategy, based on the information collected at this and other steps.

If "small FAI team" is not a good idea, then I don't see what purpose "making math progress and trying to get mathematicians in academia interested in these kinds of problems" serves, or how experimenting with it is useful.

"Small FAI team" might turn out to be a bad idea because the problem is too difficult for a small team to solve alone. In that case, it may be useful to actually offload most of the problems to a broader academic community. Of course, this may or may not be safe, but there may come a time when it turns out that it is the least risky alternative.

Comment author: Wei_Dai 01 September 2013 07:15:41AM 1 point

I think "big academic research" is almost certainly not safe, for reasons similar to my argument to Paul here. There are people who do not care about AI safety due to short planning horizons or because they think they have simple, easy to transmit values, and will deploy the results of such research before the AI safety work is complete.

Of course, this may or may not be safe, but there may come a time when it turns out that it is the least risky alternative.

This would be a fine argument if there weren't immediate downsides to what MIRI is currently doing, namely shortening AI timelines and making it harder to create a singleton (or get significant human intelligence enhancement, which could help somewhat in the absence of a singleton) before AGI work starts ramping up.

Comment author: ESRogs 25 September 2013 06:21:04PM 0 points

immediate downsides to what MIRI is currently doing, namely shortening AI timelines

To be clear, based on what I've seen you write elsewhere, you think they are shortening AI timelines because the mathematical work on reflection and decision theory would be useful for AIs in general, and is not specific to the problem of friendliness. Is that right?

This isn't obvious to me. In particular, the reflection work seems much more relevant to creating stable goal structures than to engineering intelligence / optimization power.