
Vladimir_Nesov comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong Discussion

Post author: lukeprog, 04 March 2012 06:06AM




Comment author: Vladimir_Nesov, 29 March 2012 08:26:07PM, 3 points

There seems to be a tradeoff here. An open project has a better chance of developing the necessary theory quickly, but keeping such a project in the open looks like a clearly bad idea toward the endgame. So on one hand, an open project shouldn't be cultivated as we get closer to the endgame (by which point it also becomes harder to hinder), but on the other hand, a closed project will probably not get off the ground on its own, and fueling it with an initial open effort is one way to make it stronger. So there's probably some optimal point at which to stop encouraging open development, and given the current state of the theory (nil) I believe the time hasn't come yet.

The open effort could help the subsequent closed project in two related ways: it could gauge the point at which the understanding of what to actually do in the closed project is sufficiently clear (for some sense of "sufficiently"), and it could develop enough background theory to convince enough young Conways (with the necessary training) to work on the problem in the closed stage.

Comment author: Wei_Dai, 29 March 2012 09:51:09PM, 3 points

So there's probably some optimal point to stop encouraging open development, and given the current state of the theory (nil) I believe the time hasn't come yet.

Your argument seems premised on the assumption that there will be an endgame. If instead we assign some large probability to deciding not to have an endgame at all (i.e., not to try to actually build FAI with unenhanced humans), then it's no longer clear that "the time hasn't come yet".

Even if we assume that with probability ~1 there will be an effort to directly build FAI, given the slippery-slope effects we have to stop encouraging open research well before the closed project starts. The main deciding factors for "when" must be how large the open research community has gotten, how strong the slippery-slope effects are, and how much "pull" SingInst has against those effects. The "current state of the theory" seems to have little to do with it. (Edit: No, that's too strong. Let me amend it to "one consideration among many".)

Comment author: Vladimir_Nesov, 29 March 2012 10:12:01PM, 3 points

If we assume some large probability that we end up deciding not to have an endgame at all (i.e., not to try to actually build FAI with unenhanced humans), then it's no longer clear "the time hasn't come yet".

This is something we'll know better further down the road, so as long as it's possible to defer this decision (i.e., while the downside of waiting is not too great, however that should be estimated), deferring is the right thing to do. I still can't rule out that there might be a preference definition procedure (one that refers to humans) simple enough to be implemented pre-WBE, and decision theory seems to be an attack on this possibility (by clarifying why it is naive, for example, in which case it would also serve as an argument to the powerful in the WBE race).

The "current state of the theory" seems to have little to do with it. (Edit: No that's too strong. Let me amend it to "one consideration among many".)

Well, maybe not the current state specifically, but rather what state can eventually be expected, for the closed project to benefit from, which does seem to me like a major consideration in its chances of success.