
Stuart_Armstrong comments on AI Risk and Opportunity: A Strategic Analysis

Post author: lukeprog, 04 March 2012 06:06AM (8 points)


Comment author: Stuart_Armstrong 19 April 2012 10:29:43AM 1 point

My main "pressure point" is pushing UFAI development towards OAI. I.e., I don't advocate building OAI, but making sure that the first AGIs will be OAIs. And I'm using far too many acronyms.

Comment author: Wei_Dai 19 April 2012 10:39:12AM 5 points

What does it matter that the first AGIs will be OAIs, if UFAIs follow immediately after? I mean, once knowledge of how to build OAIs starts to spread, how are you going to make sure that nobody fails to properly contain their Oracles, or intentionally modifies them into AGIs that act on their own initiative? (This recent post of mine might better explain where I'm coming from, if you haven't already read it.)

Comment author: cousin_it 19 April 2012 09:52:15PM 2 points

We can already think productively about how to win if oracle AIs come first. Paul Christiano is working on this right now, see the "formal instructions" posts on his blog. Things are still vague but I think we have a viable attack here.

Comment author: Stuart_Armstrong 20 April 2012 08:37:17AM 1 point

Wot cousin_it said.

Of course the model "OAIs are extremely dangerous if not properly contained; let's let everyone have one!" isn't going to work. But there are many things we can try with an OAI (building an FAI, for instance), and most importantly, some of these things can be tested experimentally (whereas the FAI approach relies on getting the theory right, with no opportunity to test it). And there is a window that doesn't exist with a genie: a window where people realise superintelligence is possible, where we might be able to get them to take safety seriously, and where they're not all dead. We might also be able to get exotica like a limited-impact AI or something like that, if we can find safe ways of experimenting with OAIs.

And there seems to be no drawback to pushing a UFAI project into becoming an OAI project.

Comment author: Wei_Dai 20 April 2012 06:29:34PM 2 points

Cousin_it's link is interesting, but it doesn't seem to have anything to do with OAI, and instead looks like a possible method of directly building an FAI.

Of course the model "OAIs are extremely dangerous if not properly contained; let's let everyone have one!" isn't going to work.

Hmm, maybe I'm underestimating the amount of time it would take for OAI knowledge to spread, especially if the first OAI project is a military one (on the other hand, the military and their contractors don't seem to be having better luck with network security than anyone else). How long do you expect the window of opportunity (i.e., the time from the first successful OAI to the first UFAI, assuming no FAI gets built in the meantime) to be?

some of these things will be experimental

I'd like to have FAI researchers determine what kind of experiments they want to do (if any, after doing appropriate benefit/risk analysis), which probably depends on the specific FAI approach they intend to use, and then build limited AIs (or non-AI constructs) to do the experiments. Building general Oracles that can answer arbitrary (or a wide range of) questions seems unnecessarily dangerous for this purpose, and may not help anyway depending on the FAI approach.

And there seems no drawback to pushing a UFAI project into becoming an OAI project.

There may be, if the right thing to do is to instead push them to not build an AGI at all.

Comment author: Stuart_Armstrong 23 April 2012 11:11:41AM 0 points

One important fact I haven't been mentioning: OAIs help tremendously with medium-speed takeoffs, because we can then use them to experiment (fast takeoffs are dangerous for the usual reasons; slow takeoffs mean that we will have moved beyond OAIs by the time intelligence hits dangerous levels).

There may be, if the right thing to do is to instead push them to not build an AGI at all.

I'm interacting with AGI people at the moment (organising a joint-ish conference), and will have a clearer idea of how they react to these ideas at a later stage.

Comment author: Vladimir_Nesov 23 April 2012 11:57:12AM 0 points

slow takeoffs mean that we will have moved beyond OAIs by the time the intelligence level hits dangerous

Moved where/how? Slow takeoff means we have more time, but I don't see how it changes the nature of the problem. A short timeline to WBE makes a (not particularly plausible) slow takeoff similar to the (moderately likely) failure to develop AGI before WBE.

Comment author: Vladimir_Nesov 19 April 2012 11:28:38AM 1 point

Together with Wei's point that OAI doesn't seem to help much, there is the downside that the existence of OAI safety guidelines might make it harder to argue against pushing AGI in general. So on net this might be a bad idea, which argues for weighing the tradeoff more carefully.

Comment author: Stuart_Armstrong 20 April 2012 08:38:33AM 1 point

there is the downside that existence of OAI safety guidelines might make it harder to argue against pushing AGI in general

Possibly. But in my experience even getting the AGI people to admit that there might be safety issues is over 90% of the battle.

Comment author: Vladimir_Nesov 20 April 2012 10:44:06AM 0 points

It's useful for AGI researchers to notice that there are safety issues, but not useful for them to notice that there are "safety issues" which can be dealt with by following OAI guidelines. The latter kind of understanding might be worse than none at all, as it seemingly resolves the problem. So it's not clear to me that getting people to "admit that there might be safety issues" is in itself a worthwhile milestone.