
Wei_Dai comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong Discussion

Post author: lukeprog, 04 March 2012 06:06AM (8 points)




Comment author: Wei_Dai 20 April 2012 06:29:34PM 2 points

Cousin_it's link is interesting, but it doesn't seem to have anything to do with OAI, and instead looks like a possible method of directly building an FAI.

Of course the model "OAIs are extremely dangerous if not properly contained; let's let everyone have one!" isn't going to work.

Hmm, maybe I'm underestimating the amount of time it would take for OAI knowledge to spread, especially if the first OAI project is a military one (on the other hand, the military and their contractors don't seem to be having better luck with network security than anyone else). How long do you expect the window of opportunity (i.e., the time from the first successful OAI to the first UFAI, assuming no FAI gets built in the meantime) to be?

> some of these things will be experimental

I'd like to have FAI researchers determine what kind of experiments they want to do (if any, after doing appropriate benefit/risk analysis), which probably depends on the specific FAI approach they intend to use, and then build limited AIs (or non-AI constructs) to do the experiments. Building general Oracles that can answer arbitrary (or a wide range of) questions seems unnecessarily dangerous for this purpose, and may not help anyway depending on the FAI approach.

> And there seems no drawback to pushing an UFAI project into becoming an OAI project.

There may be, if the right thing to do is to instead push them to not build an AGI at all.

Comment author: Stuart_Armstrong 23 April 2012 11:11:41AM 0 points

One important fact I haven't been mentioning: OAIs help tremendously with medium-speed takeoffs (fast takeoffs are dangerous for the usual reasons; in a slow takeoff we will have moved beyond OAIs by the time the intelligence level becomes dangerous), because we can then use them to experiment.

> There may be, if the right thing to do is to instead push them to not build an AGI at all.

I'm interacting with AGI people at the moment (organising a jointish conference), and will have a clearer idea of how they react to these ideas at a later stage.

Comment author: Vladimir_Nesov 23 April 2012 11:57:12AM 0 points

> slow takeoffs mean that we will have moved beyond OAIs by the time the intelligence level hits dangerous

Moved where, and how? A slow takeoff means we have more time, but I don't see how it changes the nature of the problem. A short time to WBE makes a (not particularly plausible) slow takeoff similar to the (moderately likely) failure to develop AGI before WBE.